Real-time optical flow estimation on a GPU for a skid-steered mobile robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-04-01
Accurate egomotion estimation is required for mobile robot navigation. Egomotion is often estimated using optical flow algorithms. However, most modern algorithms for accurate optical flow estimation require more memory and processing speed than the simple single-board computers that control a robot's motion can provide. On the other hand, most modern single-board computers are equipped with an embedded GPU that can be used in parallel with the CPU to improve the performance of optical flow estimation. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase-correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used, with layer segmentation performed by a graph-cut algorithm with a time-derivative-based energy function. This approach makes the algorithm both fast and robust in low-light and low-texture conditions. An implementation of the algorithm for a Raspberry Pi Model B computer is discussed. For evaluation, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera. The evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules robot. The algorithm was also evaluated on the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
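The entry's core primitive, phase-correlation shift estimation, can be illustrated with a minimal numpy sketch. This is a generic FFT-based estimator, not the paper's Z-flow or its GPU implementation, and sign conventions vary between formulations.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) shift of patch b relative to patch a."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12           # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))      # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the patch size to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Applied per image patch (one patch per layer in a layered model), this yields a piecewise flow field at the cost of two FFTs and one inverse FFT per patch, which is what makes the approach attractive on embedded GPUs.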
Load flow and state estimation algorithms for three-phase unbalanced power distribution systems
NASA Astrophysics Data System (ADS)
Madvesh, Chiranjeevi
Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool that helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real time. An efficient and computationally effective state estimation algorithm based on the weighted-least-squares (WLS) method has been developed in this research. Both of the developed algorithms are tested on different IEEE test feeders and the results obtained are validated.
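As a rough illustration of the WLS machinery named above (not the thesis's actual three-phase formulation), one Gauss-Newton iteration of a weighted-least-squares state estimator looks like this; h, H, z, and W are placeholders for the measurement model, its Jacobian, the measurement vector, and the weight matrix.

```python
import numpy as np

def wls_step(x, z, h, H, W):
    """One WLS iteration: x <- x + (H'WH)^-1 H'W (z - h(x))."""
    r = z - h(x)                   # measurement residual
    Hx = H(x)                      # Jacobian of h evaluated at x
    G = Hx.T @ W @ Hx              # gain (information) matrix
    dx = np.linalg.solve(G, Hx.T @ W @ r)
    return x + dx                  # iterate until |dx| is small
```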
Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2013-03-01
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10], and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 s vs. 38 s per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.
K, Manasa; Channappayya, Sumohana S
2016-06-01
We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between λ_min of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state of the art when evaluated on the LIVE SD database, the EPFL-PoliMI SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our method with the much coarser block motion vectors also results in an acceptable FR-VQA algorithm. We call our algorithm the flow similarity index.
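A minimal sketch of the per-patch flow statistics the abstract lists; taking λ_min from the 2x2 covariance of the patch's flow vectors is an assumption made here for illustration, as the abstract does not specify the matrix.

```python
import numpy as np

def patch_flow_stats(u, v):
    """u, v: optical flow components over one patch (same shape)."""
    vecs = np.stack([u.ravel(), v.ravel()])        # 2 x N flow vectors
    mag = np.hypot(u, v)                           # flow magnitudes
    mean, std = mag.mean(), mag.std()
    cv = std / (mean + 1e-12)                      # coefficient of variation
    lam_min = np.linalg.eigvalsh(np.cov(vecs))[0]  # smallest eigenvalue
    return mean, std, cv, lam_min
```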
Using virtual environment for autonomous vehicle algorithm validation
NASA Astrophysics Data System (ADS)
Levinskis, Aleksandrs
2018-04-01
This paper describes the possible use of a modern game engine for validating and proving the concept of an algorithm design. As a demonstration, a simple visual odometry algorithm is presented to illustrate the concept and walk through all workflow stages. Some stages involve a Kalman filter that estimates optical flow velocity as well as the position of a moving camera mounted on a vehicle body. In particular, the Unreal Engine 4 game engine is used to generate optical flow patterns and a ground truth path. Optical flow is determined using the Horn-Schunck method. The results show that this method can estimate the position of a camera attached to a vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
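Since the workflow relies on the Horn-Schunck method, here is a minimal single-scale sketch of its iteration; the derivative and averaging kernels are the textbook choices, not necessarily those used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=10.0, iters=100):
    im1, im2 = im1.astype(float), im2.astype(float)
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25   # x-derivative kernel
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25   # y-derivative kernel
    kt = np.ones((2, 2)) * 0.25                        # temporal average
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(iters):                             # Jacobi-style updates
        ub, vb = convolve(u, avg), convolve(v, avg)    # neighbourhood means
        t = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = ub - Ix * t, vb - Iy * t
    return u, v                                        # dense flow field
```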
Fast instantaneous center of rotation estimation algorithm for a skid-steered robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2015-05-01
Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical flow based algorithm for estimating the instantaneous center of rotation, angular speed, and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification; the input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of algorithm quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.
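A minimal sketch of the geometric step described above: once flow vectors are back-projected onto the ground plane, the rigid planar motion and the instantaneous center of rotation (ICR) follow from a small least-squares fit. The back-projection itself is assumed already done here.

```python
import numpy as np

def estimate_icr(x, y, vx, vy):
    """x, y: ground-plane points; vx, vy: flow-derived velocities there."""
    n = x.size
    # rigid planar motion model: vx = v0x - w*y,  vy = v0y + w*x
    A = np.zeros((2 * n, 3))
    A[:n, 0] = 1.0
    A[:n, 2] = -y
    A[n:, 1] = 1.0
    A[n:, 2] = x
    b = np.concatenate([vx, vy])
    v0x, v0y, w = np.linalg.lstsq(A, b, rcond=None)[0]
    icr = (-v0y / w, v0x / w)            # center of rotation on the ground
    return icr, w, np.hypot(v0x, v0y)    # ICR, angular, longitudinal speed
```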
Joint estimation of motion and illumination change in a sequence of images
NASA Astrophysics Data System (ADS)
Koo, Ja-Keoung; Kim, Hyo-Hun; Hong, Byung-Woo
2015-09-01
We present an algorithm that simultaneously computes optical flow and estimates illumination change from an image sequence in a unified framework. We propose an energy functional consisting of the conventional optical flow energy based on the Horn-Schunck method and an additional constraint designed to compensate for illumination changes. Any undesirable illumination change that occurs during imaging while the optical flow is being computed is considered a nuisance factor. In contrast to the conventional optical flow algorithm based on the Horn-Schunck functional, which assumes the brightness constancy constraint, our algorithm is shown to be robust with respect to temporal illumination changes in the computation of optical flow. An efficient conjugate gradient descent technique is used as the numerical scheme in the optimization procedure. Experimental results on the Middlebury benchmark dataset demonstrate the robustness and effectiveness of our algorithm. In addition, a comparative analysis of our algorithm and the Horn-Schunck algorithm is performed on an additional test dataset constructed by applying a variety of synthetic bias fields to the original Middlebury image sequences. The proposed method shows superior performance in both qualitative visualizations and quantitative accuracy, whereas the Horn-Schunck algorithm easily yields poor results in the presence of even small illumination changes that violate the brightness constancy constraint.
Optimal Filter Estimation for Lucas-Kanade Optical Flow
Sharmin, Nusrat; Brad, Remus
2012-01-01
Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation, and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow but also for improved performance. Generally, in optical flow computation, filtering is applied initially to the original input images and the images are resized afterwards. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different filtering methods applied to the Iterative Refined Lucas-Kanade, we identify the best filtering practice. With the Gaussian smoothing filter selected, we introduce an empirical approach for estimating the Gaussian variance. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
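A minimal sketch of the pre-filtering idea for a single Lucas-Kanade window: Gaussian smoothing first, then the least-squares flow solve. The paper's empirical intensity-based sigma selection is only stubbed by a fixed sigma here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lucas_kanade_window(im1, im2, sigma=1.5):
    a = gaussian_filter(im1.astype(float), sigma)   # pre-filtering step
    b = gaussian_filter(im2.astype(float), sigma)
    Iy, Ix = np.gradient(a)                         # spatial derivatives
    It = b - a                                      # temporal derivative
    # normal equations of the LK least-squares problem over the window
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    rhs = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(G, rhs)                  # (u, v) for the window
```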
Simulating nailfold capillaroscopy sequences to evaluate algorithms for blood flow estimation.
Tresadern, P A; Berks, M; Murray, A K; Dinsdale, G; Taylor, C J; Herrick, A L
2013-01-01
The effects of systemic sclerosis (SSc)--a disease of the connective tissue causing blood flow problems that can require amputation of the fingers--can be observed indirectly by imaging the capillaries at the nailfold, though taking quantitative measures such as blood flow to diagnose the disease and monitor its progression is not easy. Optical flow algorithms may be applied, though without ground truth (i.e. known blood flow) it is hard to evaluate their accuracy. We propose an image model that generates realistic capillaroscopy videos with known flow, and use this model to quantify the effect of flow rate, cell density and contrast (among others) on estimated flow. This resource will help researchers to design systems that are robust under real-world conditions.
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Stable Algorithm For Estimating Airdata From Flush Surface Pressure Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen, A. (Inventor); Cobleigh, Brent R. (Inventor); Haering, Edward A., Jr. (Inventor)
2001-01-01
An airdata estimation and evaluation system and method, including a stable algorithm for estimating airdata from nonintrusive surface pressure measurements. The airdata estimation and evaluation system is preferably implemented in a flush airdata sensing (FADS) system. The system and method of the present invention take a flow model equation and transform it into a triples formulation equation. The triples formulation equation eliminates the pressure related states from the flow model equation by strategically taking the differences of three surface pressures, known as triples. This triples formulation equation is then used to accurately estimate and compute vital airdata from nonintrusive surface pressure measurements.
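A hedged sketch of the triples construction in LaTeX, assuming the standard FADS port pressure model (the exact model form used in the patent may differ):

```latex
% Port model: flow incidence angle \theta_i at port i, impact pressure q_c,
% static pressure p_\infty, calibration parameter \epsilon (assumed form).
p_i = q_c\left(\cos^2\theta_i + \epsilon\,\sin^2\theta_i\right) + p_\infty
\quad\Rightarrow\quad
p_i - p_j = q_c\,(1-\epsilon)\left(\cos^2\theta_i - \cos^2\theta_j\right)
% A "triple" of ports (i, j, k) then cancels q_c, p_\infty, and \epsilon:
\Gamma_{ijk} = \frac{p_i - p_j}{p_j - p_k}
 = \frac{\cos^2\theta_i - \cos^2\theta_j}{\cos^2\theta_j - \cos^2\theta_k}
```

Under this model, the ratio of two pressure differences depends only on the flow incidence angles, i.e., on the flow direction, which is what makes the formulation insensitive to the pressure-related states.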
Evaluation of Swift Start TCP in Long-Delay Environment
NASA Technical Reports Server (NTRS)
Lawas-Grodek, Frances J.; Tran, Diepchi T.
2004-01-01
This report presents the test results of the Swift Start algorithm in single-flow and multiple-flow testbeds under the effects of high propagation delays, various slow bottlenecks, and small queue sizes. Although this algorithm estimates capacity and implements packet pacing, the findings were that in a heavily congested link, the Swift Start algorithm will not be applicable. The reason is that the bottleneck estimation is falsely influenced by timeouts induced by retransmissions and the expiration of delayed acknowledgment (ACK) timers, thus causing the modified Swift Start code to fall back to regular transmission control protocol (TCP).
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional least sum of squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 frames in the ascending colon, and from 410 to 1316 frames in the transverse colon.
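A minimal random-sampling sketch of Least Median of Squares fitting, shown for a line model; the colonoscopy application fits motion parameters instead, but the estimator's structure is the same.

```python
import numpy as np

def lms_line_fit(x, y, trials=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(trials):
        i, j = rng.choice(x.size, size=2, replace=False)
        if x[i] == x[j]:
            continue                           # degenerate 2-point sample
        slope = (y[j] - y[i]) / (x[j] - x[i])  # minimal 2-point model
        intercept = y[i] - slope * x[i]
        med = np.median((y - slope * x - intercept) ** 2)
        if med < best_med:                     # keep the lowest median error
            best, best_med = (slope, intercept), med
    return best                                # robust to ~50% outliers
```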
Anomaly Detection in Test Equipment via Sliding Mode Observers
NASA Technical Reports Server (NTRS)
Solano, Wanda M.; Drakunov, Sergey V.
2012-01-01
Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, an observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and the actual system. The unique properties of sliding mode control allow not only driving the model's internal states to the states of the real-life system, but also identification of the disturbance or anomaly that may occur.
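A minimal scalar sketch of the mechanism described above, under simplifying assumptions (first-order plant, Euler integration): the discontinuous injection drives the estimate onto the measured state, and low-pass filtering the injection recovers the unknown disturbance as its "equivalent value".

```python
import numpy as np

def simulate(T=10.0, dt=1e-3, L=5.0, tau=0.05):
    n = int(T / dt)
    x, xh, d_eq = 0.0, 1.0, 0.0
    d_hist = np.zeros(n)
    for k in range(n):
        d = 1.0 if k * dt > 3.0 else 0.0    # unknown step disturbance
        x += dt * (-x + d)                  # plant: x' = -x + d
        v = L * np.sign(x - xh)             # discontinuous injection, L > |d|
        xh += dt * (-xh + v)                # observer copies the model
        d_eq += (dt / tau) * (v - d_eq)     # filter out the chattering
        d_hist[k] = d_eq                    # ~ tracks d once sliding starts
    return d_hist
```

Once the estimation error reaches the sliding surface, the filtered injection d_eq converges to the actual disturbance d, which is the identification property the text describes.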
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
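The largest-order-value rule mentioned above is essentially a one-line decoding from a continuous EDA sample to a job permutation; a minimal sketch:

```python
import numpy as np

def lov_decode(vec):
    """Jobs ordered by descending component value of the continuous vector."""
    return np.argsort(-np.asarray(vec))

print(lov_decode([0.3, 1.2, 0.7]))   # -> [1 2 0]: job 1 first, then 2, then 0
```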
NASA Astrophysics Data System (ADS)
Sanford, Ward E.; Niel Plummer, L.; Casile, Gerolamo; Busenberg, Ed; Nelms, David L.; Schlosser, Peter
2017-06-01
Dual-domain transport is an alternative conceptual and mathematical paradigm to advection-dispersion for describing the movement of dissolved constituents in groundwater. Here we test the use of a dual-domain algorithm combined with advective pathline tracking to help reconcile environmental tracer concentrations measured in springs within the Shenandoah Valley, USA. The approach also allows for the estimation of the three dual-domain parameters: mobile porosity, immobile porosity, and a domain exchange rate constant. Concentrations of CFC-113, SF6, 3H, and 3He were measured at 28 springs emanating from carbonate rocks. The different tracers give three different mean composite piston-flow ages for all the springs that vary from 5 to 18 years. Here we compare four algorithms that interpret the tracer concentrations in terms of groundwater age: piston flow, old-fraction mixing, advective-flow path modeling, and dual-domain modeling. Whereas the second two algorithms made slight improvements over piston flow at reconciling the disparate piston-flow age estimates, the dual-domain algorithm gave a very marked improvement. Optimal values for the three transport parameters were also obtained, although the immobile porosity value was not well constrained. Parameter correlation and sensitivities were calculated to help quantify the uncertainty. Although some correlation exists between the three parameters being estimated, a watershed simulation of a pollutant breakthrough to a local stream illustrates that the estimated transport parameters can still substantially help to constrain and predict the nature and timing of solute transport. The combined use of multiple environmental tracers with this dual-domain approach could be applicable in a wide variety of fractured-rock settings.
Hirasawa, Ai; Kaneko, Takahito; Tanaka, Naoki; Funane, Tsukasa; Kiguchi, Masashi; Sørensen, Henrik; Secher, Niels H; Ogoh, Shigehiko
2016-04-01
We estimated cerebral oxygenation during handgrip exercise and a cognitive task using an algorithm that eliminates the influence of skin blood flow (SkBF) on the near-infrared spectroscopy (NIRS) signal. The algorithm involves a subtraction method to develop a correction factor for each subject. For twelve male volunteers (age 21 ± 1 yrs), +80 mmHg pressure was applied over the left temporal artery for 30 s by a custom-made headband cuff to calculate an individual correction factor. From the NIRS-determined ipsilateral cerebral oxyhemoglobin concentration (O2Hb) at two source-detector distances (15 and 30 mm), with the algorithm using the individual correction factor, we expressed cerebral oxygenation without influence from scalp and skull blood flow. Validity of the estimated cerebral oxygenation was verified during cerebral neural activation (handgrip exercise and a cognitive task). With the use of both source-detector distances, handgrip exercise and the cognitive task increased O2Hb (P < 0.01), but O2Hb was reduced when SkBF was eliminated by pressure on the temporal artery for 5 s. However, when the estimation of cerebral oxygenation was based on the algorithm developed when pressure was applied to the temporal artery, the estimated O2Hb was not affected by elimination of SkBF during handgrip exercise (P = 0.666) or the cognitive task (P = 0.105). These findings suggest that the algorithm with the individual correction factor allows for accurate evaluation of changes in cerebral oxygenation, without influence from extracranial blood flow, using NIRS applied to the forehead.
Granegger, Marcus; Moscato, Francesco; Casas, Fernando; Wieselthaler, Georg; Schima, Heinrich
2012-08-01
Estimation of instantaneous flow in rotary blood pumps (RBPs) is important for monitoring the interaction between heart and pump and, eventually, ventricular function. Our group has reported an algorithm to derive ventricular contractility based on the maximum time derivative (dQ/dt(max), as a substitute for ventricular dP/dt(max)) and pulsatility of measured flow signals. However, in RBPs used clinically, flow is estimated with a bandwidth too low to determine dQ/dt(max) in the case of improving heart function. The aim of this study was to develop a flow estimator for a centrifugal pump with bandwidth sufficient to provide noninvasive cardiac diagnostics. The new estimator is based on both static and dynamic properties of the brushless DC motor. An in vitro setup was employed to identify the performance of pump and motor up to 20 Hz. The algorithm was validated using physiological ventricular and arterial pressure waveforms in a mock loop which simulated different contractilities (dP/dt(max) 600 to 2300 mm Hg/s), pump speeds (2 to 4 krpm), and fluid viscosities (2 to 4 mPa·s). The mathematically estimated pump flow data were then compared to the datasets measured in the mock loop for different variable combinations (flow ranging from 2.5 to 7 L/min, pulsatility from 3.5 to 6 L/min, dQ/dt(max) from 15 to 60 L/min/s). Transfer function analysis showed that the developed algorithm could estimate the flow waveform with a bandwidth up to 15 Hz (±2 dB). The mean difference between the estimated and measured average flows was +0.06 ± 0.31 L/min, and for the flow pulsatilities -0.27 ± 0.2 L/min. Detection of dQ/dt(max) was possible up to a dP/dt(max) level of 2300 mm Hg/s. In conclusion, a flow estimator with sufficient frequency bandwidth and accuracy to allow determination of changes in ventricular contractility, even in the case of improving heart function, was developed.
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen
2017-06-01
This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus more precise estimation by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
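A minimal random-walk Metropolis-Hastings sketch of the sampler used above; log_post stands in for the (unspecified) flow-model log posterior, and the proposal scale is illustrative.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, steps=5000, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with MH ratio
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)    # posterior samples -> credible intervals
```

Credible intervals then come directly from chain quantiles (e.g., the 2.5th and 97.5th percentiles after discarding burn-in), which is how the Bayesian interval in the comparison above is obtained in principle.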
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human livelihoods in financial, environmental, and security terms. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach that studies parameter estimation through the posterior distribution based on Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more uncertainty in parameter estimation, which yields a better prediction of maximum river flow in Sabah.
Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection
NASA Astrophysics Data System (ADS)
Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro
We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities such as worm scans, spam distribution, or botnet control, or normal activities such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host we need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small piece of information called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and to estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using a limited amount of memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable to large traffic data for a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
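A minimal Flajolet-Martin-style sketch of the "cardinality summary" idea: constant memory per host, with the distinct count estimated from hash bit patterns. The paper's actual summary and its partitioned-address variant differ; a single register like this has high variance and is illustrative only.

```python
import hashlib

class CardinalitySketch:
    """Constant-memory distinct-count estimator (single register)."""
    def __init__(self, bits=32):
        self.max_rank = 0
        self.mask = (1 << bits) - 1

    def add(self, item):
        h = int(hashlib.md5(str(item).encode()).hexdigest(), 16) & self.mask
        rank = (h & -h).bit_length()          # position of lowest set bit
        self.max_rank = max(self.max_rank, rank)

    def estimate(self):
        # rough order-of-magnitude estimate; LogLog/HyperLogLog average
        # many such registers to reduce the variance
        return 2 ** self.max_rank

s = CardinalitySketch()
for flow in ["10.0.0.1:80", "10.0.0.2:53", "10.0.0.1:80"]:
    s.add(flow)                               # duplicates do not inflate it
print(s.estimate())                           # rough distinct-flow count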
Estimation of Blood Flow Rates in Large Microvascular Networks
Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.
2012-01-01
Objective: Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods: With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results: The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions: The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data, and provides a basis for deducing functional properties of microvessel networks. PMID: 22506980
An Algorithm for Pedestrian Detection in Multispectral Image Sequences
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.
2017-05-01
The growing interest in self-driving cars creates a demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearance of pedestrians. Poor visibility conditions, such as fog and low light, also significantly decrease the quality of pedestrian detection. This paper presents a new optical flow based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on the idea of simplified Kalman filtering suitable for realization on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. The estimate of the real optical flow is generated using a multispectral image sequence. The difference between the synthetic and the real optical flow yields the optical flow induced by pedestrians. The final detection of pedestrians is done by segmenting this difference. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
Thermal particle image velocity estimation of fire plume flow
Xiangyang Zhou; Lulu Sun; Shankar Mahalingam; David R. Weise
2003-01-01
For the purpose of studying wildfire spread in living vegetation such as chaparral in California, a thermal particle image velocity (TPIV) algorithm for nonintrusively measuring flame gas velocities through thermal infrared (IR) imagery was developed. By tracing thermal particles in successive digital IR images, the TPIV algorithm can estimate the velocity field in a...
Egomotion estimation with optic flow and air velocity sensors.
Rutkowski, Adam J; Miller, Mikel M; Quinn, Roger D; Willis, Mark A
2011-06-01
We develop a method that allows a flyer to estimate its own motion (egomotion), the wind velocity, ground slope, and flight height using only inputs from onboard optic flow and air velocity sensors. Our artificial algorithm demonstrates how it could be possible for flying insects to determine their absolute egomotion using their available sensors, namely their eyes and wind sensitive hairs and antennae. Although many behaviors can be performed by only knowing the direction of travel, behavioral experiments indicate that odor tracking insects are able to estimate the wind direction and control their absolute egomotion (i.e., groundspeed). The egomotion estimation method that we have developed, which we call the opto-aeronautic algorithm, is tested in a variety of wind and ground slope conditions using a video recorded flight of a moth tracking a pheromone plume. Over all test cases that we examined, the algorithm achieved a mean absolute error in height of 7% or less. Furthermore, our algorithm is suitable for the navigation of aerial vehicles in environments where signals from the Global Positioning System are unavailable.
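A minimal sketch of the elementary relations the method builds on (flat ground and straight flight assumed): ventral optic flow times height gives groundspeed, and the ground-air velocity difference gives the wind. The numbers are illustrative, and this is not the paper's full opto-aeronautic estimator.

```python
def ground_speed(flow_rate, height):
    """Translational optic flow (rad/s) over flat ground at given height."""
    return flow_rate * height

def wind_speed(ground_vel, air_vel):
    """Wind is the difference between ground and air velocity."""
    return ground_vel - air_vel

h = 2.0        # m, assumed flight height
w = 1.5        # rad/s, measured ventral optic flow
v_air = 2.4    # m/s along track, from the airspeed sensor
v_ground = ground_speed(w, h)                  # 3.0 m/s
print(v_ground, wind_speed(v_ground, v_air))   # tailwind of 0.6 m/s
```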
Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henricakson, Kristian C.; Xu, Maozeng; Wang, Yinhai
2016-01-01
This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows is used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of the methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior. PMID: 26761209
Variational optical flow estimation for images with spectral and photometric sensor diversity
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-03-01
Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.
A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.
Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang
2013-01-01
The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves by the standard PSO, and each subpopulation is then updated using different local search schemes such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.
Computing the Envelope for Stepwise Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2001-01-01
Estimating tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network, with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
Research on Segmentation Monitoring Control of IA-RWA Algorithm with Probe Flow
NASA Astrophysics Data System (ADS)
Ren, Danping; Guo, Kun; Yao, Qiuyan; Zhao, Jijun
2018-04-01
The impairment-aware routing and wavelength assignment algorithm with probe flow (P-IA-RWA) can make an accurate estimation of the transmission quality of a link when a connection request arrives, but it also causes problems: the probe flow data introduced in the P-IA-RWA algorithm result in competition for wavelength resources. In order to reduce this competition and the blocking probability of the network, a new P-IA-RWA algorithm with a segmentation monitoring-control mechanism (SMC-P-IA-RWA) is proposed. The algorithm reduces the holding time of network resources for the probe flow. It suitably segments the candidate path for data transmission, and the transmission quality of the probe flow sent by the source node is monitored at the endpoint of each segment. The transmission quality of the data can also be monitored, so as to take appropriate action and avoid unnecessary probe flow. The simulation results show that the proposed SMC-P-IA-RWA algorithm can effectively reduce the blocking probability. It provides a better solution to the competition for resources between the probe flow and the main data to be transferred, and it is more suitable for scheduling control in large-scale networks.
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo
2008-06-01
In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) with dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA) and transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm, and 16-view images are then synthesized by the IVR using VDE. Experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by as much as 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can provide upper limits on transmission delays per flow. The modeling and simulation results, including mathematical analyses, from this scheme are also provided.
Coupling reconstruction and motion estimation for dynamic MRI through optical flow constraint
NASA Astrophysics Data System (ADS)
Zhao, Ningning; O'Connor, Daniel; Gu, Wenbo; Ruan, Dan; Basarab, Adrian; Sheng, Ke
2018-03-01
This paper addresses the problem of dynamic magnetic resonance image (DMRI) reconstruction and motion estimation jointly. Because of the inherent anatomical movements in DMRI acquisition, reconstruction of DMRI using motion estimation/compensation (ME/MC) has been explored under the compressed sensing (CS) scheme. In this paper, by embedding the intensity based optical flow (OF) constraint into the traditional CS scheme, we are able to couple the DMRI reconstruction and motion vector estimation. Moreover, the OF constraint is employed in a specific coarse resolution scale in order to reduce the computational complexity. The resulting optimization problem is then solved using a primal-dual algorithm due to its efficiency when dealing with nondifferentiable problems. Experiments on highly accelerated dynamic cardiac MRI with multiple receiver coils validate the performance of the proposed algorithm.
Validation of vision-based range estimation algorithms using helicopter flight data
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1993-01-01
The objective of this research was to demonstrate the effectiveness of an optic flow method for passive range estimation using a Kalman-filter implementation with helicopter flight data. This paper is divided into the following areas: (1) ranging algorithm; (2) flight experiment; (3) analysis methodology; (4) results; and (5) concluding remarks. The discussion is presented in viewgraph format.
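A minimal scalar Kalman-filter sketch of optic flow based passive ranging, under the simplifying assumptions of known forward speed V and a direct flow-to-range conversion z = V/flow; the implementation evaluated in the paper is more elaborate.

```python
import numpy as np

def kf_range(flows, V=10.0, dt=0.1, R=4.0, Q=0.01):
    """flows: noisy optic flow magnitudes (1/s) toward an approaching object."""
    Z, P = V / flows[0], 10.0              # initial range estimate, variance
    out = []
    for f in flows[1:]:
        Z, P = Z - V * dt, P + Q           # predict: closing at speed V
        z = V / f                          # flow -> range observation
        K = P / (P + R)                    # Kalman gain
        Z, P = Z + K * (z - Z), (1 - K) * P
        out.append(Z)                      # filtered range estimate
    return np.array(out)
```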
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component of autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth, and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak
Rainflow algorithms are among the popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rainflow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on the time-dependent temperature calculation.
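A simplified three-point rainflow sketch (full cycles only; start-up and residual half-cycles are ignored, which the standard algorithm handles), recording the range and mean of each counted cycle since the entry stresses mean-temperature dependence:

```python
import numpy as np

def rainflow(series):
    """Return (range, mean) of full cycles in a temperature history."""
    s = np.asarray(series, float)
    # keep turning points (local extrema) only
    tp = [s[0]] + [s[i] for i in range(1, len(s) - 1)
                   if (s[i] - s[i - 1]) * (s[i + 1] - s[i]) < 0] + [s[-1]]
    stack, cycles = [], []
    for p in tp:
        stack.append(p)
        while len(stack) >= 3:
            X = abs(stack[-1] - stack[-2])   # most recent range
            Y = abs(stack[-2] - stack[-3])   # enclosed range
            if X < Y:
                break                        # no closed cycle yet
            cycles.append((Y, 0.5 * (stack[-2] + stack[-3])))
            del stack[-3:-1]                 # remove the two inner points
    return cycles
```

Each (range, mean) pair then feeds a lifetime model; keeping the mean per cycle is what enables the time-dependent mean-temperature correction the entry advocates.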
A photogrammetric technique for generation of an accurate multispectral optical flow dataset
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2017-06-01
The presence of an accurate dataset is the key requirement for successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with accurate ground truth. The generation of accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling, or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth. In this paper, a new photogrammetric method for generation of accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser scanning based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
NASA Technical Reports Server (NTRS)
Baker, A. J.; Orzechowski, J. A.
1980-01-01
A theoretical analysis is presented yielding sets of partial differential equations for determination of turbulent aerodynamic flowfields in the vicinity of an airfoil trailing edge. A four-phase interaction algorithm is derived to complete the analysis. Following input, the first computational phase is an elementary viscous-corrected two-dimensional potential flow solution yielding an estimate of the inviscid-flow-induced pressure distribution. Phase C involves solution of the turbulent two-dimensional boundary layer equations over the trailing edge, with transition to a two-dimensional parabolic Navier-Stokes equation system describing the near-wake merging of the upper and lower surface boundary layers. An iteration provides refinement of the potential-flow-induced pressure coupling to the viscous flow solutions. The final phase is a complete two-dimensional Navier-Stokes analysis of the wake flow in the vicinity of a blunt-based airfoil. A finite element numerical algorithm is presented which is applicable to the solution of all partial differential equation sets of the inviscid-viscous aerodynamic interaction algorithm. Numerical results are discussed.
Joint parameter and state estimation algorithms for real-time traffic monitoring.
DOT National Transportation Integrated Search
2013-12-01
A common approach to traffic monitoring is to combine a macroscopic traffic flow model with traffic sensor data in a process called state estimation, data fusion, or data assimilation. The main challenge of traffic state estimation is the integration...
François, Marianne M.
2015-05-28
A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.
Estimation of flow properties using surface deformation and head data: A trajectory-based approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, D.W.
2004-07-12
A trajectory-based algorithm provides an efficient and robust means to infer flow properties from surface deformation and head data. The algorithm is based upon the concept of an 'arrival time' of a drawdown front, which is defined as the time corresponding to the maximum slope of the drawdown curve. The technique involves three steps: the inference of head changes as a function of position and time, the use of the estimated head changes to define arrival times, and the inversion of the arrival times for flow properties. Trajectories, computed from the output of a numerical simulator, are used to relate the drawdown arrival times to flow properties. The inversion algorithm is iterative, requiring one reservoir simulation for each iteration. The method is applied to data from a set of 14 tiltmeters, located at the Raymond Quarry field site in California. Using the technique, I am able to image a high-conductivity channel which extends to the south of the pumping well. The presence of this permeable pathway is supported by an analysis of earlier cross-well transient pressure test data.
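As a minimal illustration of the arrival-time concept (hypothetical variable names; not the author's code), the front arrival time can be read off a sampled drawdown curve as the time of maximum slope:

    import numpy as np

    def drawdown_arrival_time(t, s):
        """Arrival time of the drawdown front: time of maximum slope ds/dt.

        t : sample times (possibly non-uniform)
        s : drawdown values at those times
        """
        slope = np.gradient(s, t)      # numerical derivative of the drawdown
        return t[np.argmax(slope)]     # time at which the slope peaks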
Maximum likelihood phase-retrieval algorithm: applications.
Nahrstedt, D A; Southwell, W H
1984-12-01
The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.
NASA Astrophysics Data System (ADS)
Bonnema, Matthew G.; Sikder, Safat; Hossain, Faisal; Durand, Michael; Gleason, Colin J.; Bjerklie, David M.
2016-04-01
The objective of this study is to compare the effectiveness of three algorithms that estimate discharge from remotely sensed observables (river width, water surface height, and water surface slope) in anticipation of the forthcoming NASA/CNES Surface Water and Ocean Topography (SWOT) mission. SWOT promises to provide these measurements simultaneously, and the river discharge algorithms included here are designed to work with these data. Two algorithms, the Metropolis Manning (MetroMan) method and the Mean Flow and Geomorphology (MFG) method, were built around Manning's equation, and one approach, the at-many-stations hydraulic geometry (AMHG) method, uses hydraulic geometry to estimate discharge. A well-calibrated and ground-truthed hydrodynamic model of the Ganges river system (HEC-RAS) was used as reference for three rivers from the Ganges River Delta: the main stem of the Ganges, the Arial-Khan, and the Mohananda Rivers. The high seasonal variability of these rivers due to the monsoon presented a unique opportunity to thoroughly assess the discharge algorithms in light of typical monsoon regime rivers. It was found that the MFG method provides the most accurate discharge estimations in most cases, with an average relative root-mean-squared error (RRMSE) across all three reaches of 35.5%. It is followed closely by the Metropolis Manning algorithm, with an average RRMSE of 51.5%. However, the MFG method's reliance on knowledge of prior river discharge limits its application on ungauged rivers. In terms of input data requirements at ungauged regions with no prior records, the Metropolis Manning algorithm provides a more practical alternative over a region that is lacking in historical observations, as the algorithm requires less ancillary data. The AMHG algorithm, while requiring the least prior river data, provided the least accurate discharge measurements, with average wet and dry season RRMSEs of 79.8% and 119.1%, respectively, across all rivers studied. This poor performance is directly traced to poor estimation of AMHG via a remotely sensed proxy, and results improve commensurate with MFG and MetroMan when prior AMHG information is given to the method. Therefore, we cannot recommend use of AMHG without inclusion of this prior information, at least for the studied rivers. The dry season discharge (within-bank flow) was captured well by all methods, while the wet season (floodplain flow) appeared more challenging. The picture that emerges from this study is that a multialgorithm approach may be appropriate during flood inundation periods in the Ganges Delta.
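For context, both Manning-based methods ultimately evaluate discharge through Manning's equation from the SWOT observables. A hedged sketch of that evaluation and of the RRMSE metric quoted above, assuming a wide rectangular channel so the hydraulic radius approximately equals the depth:

    import numpy as np

    def manning_discharge(n, width, depth, slope):
        """Q = (1/n) * A * R^(2/3) * S^(1/2); wide channel, so R ~ depth."""
        area = width * depth
        return (1.0 / n) * area * depth ** (2.0 / 3.0) * np.sqrt(slope)

    def rrmse(q_est, q_true):
        """Relative root-mean-squared error, as reported in the study."""
        return np.sqrt(np.mean(((q_est - q_true) / q_true) ** 2))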
A Height Estimation Approach for Terrain Following Flights from Monocular Vision.
Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz
2016-12-06
In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information, to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.
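The geometric idea underlying such monocular height estimation is that, for a nadir-pointing camera in pure translation over roughly planar terrain, flow magnitude scales inversely with height. A simplified sketch under those stated assumptions (not the authors' full pipeline, which also uses the decision-tree gate):

    def height_from_flow(flow_px_per_s, focal_px, ground_speed):
        """h ~ f * v / flow for a nadir camera translating at speed v.

        flow_px_per_s : median optical-flow magnitude (pixels/second)
        focal_px      : focal length in pixels
        ground_speed  : UAV speed over ground (m/s), e.g. from GPS/IMU
        """
        return focal_px * ground_speed / flow_px_per_s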
Localization of source with unknown amplitude using IPMC sensor arrays
NASA Astrophysics Data System (ADS)
Abdulsadda, Ahmad T.; Zhang, Feitian; Tan, Xiaobo
2011-04-01
The lateral line system, consisting of arrays of neuromasts functioning as flow sensors, is an important sensory organ for fish that enables them to detect predators, locate prey, perform rheotaxis, and coordinate schooling. Creating artificial lateral line systems is of significant interest since it will provide a new sensing mechanism for control and coordination of underwater robots and vehicles. In this paper we propose recursive algorithms for localizing a vibrating sphere, also known as a dipole source, based on measurements from an array of flow sensors. A dipole source is frequently used in the study of biological lateral lines, as a surrogate for underwater motion sources such as a flapping fish fin. We first formulate a nonlinear estimation problem based on an analytical model for the dipole-generated flow field. Two algorithms are presented to estimate both the source location and the vibration amplitude, one based on the least squares method and the other based on the Newton-Raphson method. Simulation results show that both methods deliver comparable performance in source localization. A prototype of an artificial lateral line system comprising four ionic polymer-metal composite (IPMC) sensors is built, and experimental results are further presented to demonstrate the effectiveness of IPMC lateral line systems and the proposed estimation algorithms.
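A least-squares variant of this localization can be prototyped compactly. The sketch below is an assumption-laden illustration: dipole_flow is a placeholder for the analytical dipole flow model named in the abstract, and scipy's trust-region least-squares solver stands in for the paper's least-squares/Newton-Raphson schemes.

    import numpy as np
    from scipy.optimize import least_squares

    def localize_dipole(sensor_pos, readings, dipole_flow, x0):
        """Estimate dipole location (x, y, z) and amplitude from flow readings.

        dipole_flow(loc, amp, sensor_pos) -> model flow at each sensor (assumed)
        x0 : initial guess [x, y, z, amplitude]
        """
        def residuals(p):
            loc, amp = p[:3], p[3]
            return dipole_flow(loc, amp, sensor_pos) - readings

        sol = least_squares(residuals, x0)   # Gauss-Newton-type refinement
        return sol.x[:3], sol.x[3]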
Mesin, Luca
2015-02-01
Developing a real-time method to estimate generation, extinction and propagation of muscle fibre action potentials from two-dimensional and high-density surface electromyograms (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fit to epochs of surface EMG. The algorithm is tested on simulations of high density surface EMG (inter-electrode distance equal to 5 mm) from finite length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an epoch of EMG of duration 150 ms. A new real-time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction and biofeedback.
Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Flight-Test Results
NASA Technical Reports Server (NTRS)
Brown, Nelson Andrew; Schaefer, Jacob Robert
2013-01-01
A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These results show that the algorithm has good performance in a relevant environment.
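To make the scheme concrete, a one-parameter caricature of peak-seeking control is sketched below (hypothetical gains and function names, not the flight software): a scalar Kalman filter smooths noisy finite-difference observations of the fuel-flow gradient, and a steepest-descent step moves the trim position.

    def peak_seek(measure_fuel_flow, u0, steps=50, step_size=0.5,
                  q=1e-4, r=1e-2):
        """Steepest descent on fuel flow versus a scalar trim position u.

        measure_fuel_flow(u) -> noisy performance measurement (assumed)
        q, r : process and measurement noise variances of the gradient filter
        """
        u, du = u0, 0.1
        g, p = 0.0, 1.0                  # gradient estimate and its variance
        for _ in range(steps):
            y1 = measure_fuel_flow(u)
            y2 = measure_fuel_flow(u + du)
            z = (y2 - y1) / du           # noisy gradient observation
            p += q                       # predict: gradient random walk
            k = p / (p + r)              # Kalman gain
            g += k * (z - g)             # update gradient estimate
            p *= (1 - k)
            u -= step_size * g           # steepest-descent trim step
        return u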
The role of optical flow in automated quality assessment of full-motion video
NASA Astrophysics Data System (ADS)
Harguess, Josh; Shafer, Scott; Marez, Diego
2017-09-01
In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions to the raw data are inevitable. These can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines, and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with their own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art) on motion-based automated video quality assessment algorithms.
A nudging data assimilation algorithm for the identification of groundwater pumping
NASA Astrophysics Data System (ADS)
Cheng, Wei-Chen; Kendall, Donald R.; Putti, Mario; Yeh, William W.-G.
2009-08-01
This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies the unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.
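A schematic of the nudging idea (not the paper's implementation; explicit Euler and hypothetical gains for clarity): the flow model is integrated with a relaxation term pulling modeled heads toward observations, while the unknown pumping sink is adjusted from the same innovation.

    import numpy as np

    def nudge_step(h, q_pump, h_obs, model_rhs, dt, g_nudge, g_pump):
        """One nudging update for heads h and the unknown pumping q_pump.

        model_rhs(h, q_pump) -> dh/dt from the groundwater flow model (assumed)
        h_obs : observed heads (NaN where no observation well exists)
        """
        innov = np.where(np.isnan(h_obs), 0.0, h_obs - h)   # data misfit
        h_new = h + dt * (model_rhs(h, q_pump) + g_nudge * innov)
        # Heads persistently below observations suggest pumping is
        # over-estimated, so relax the sink term accordingly (schematic).
        q_new = q_pump - g_pump * innov.sum() * dt
        return h_new, q_new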
Viumdal, Håkon; Mylvaganam, Saba
2017-01-01
In oil and gas and geothermal installations, open channels followed by sieves for removal of drill cuttings are used to monitor the quality and quantity of the drilling fluids. Drilling fluid flow rate is difficult to measure due to the varying flow conditions (e.g., wavy, turbulent and irregular) and the presence of drill cuttings and gas bubbles. Inclusion of a Venturi section in the open channel and an array of ultrasonic level sensors above it, at locations in the vicinity of and above the Venturi constriction, gives the varying levels of the drilling fluid in the channel. The time series of the levels from this array of ultrasonic level sensors are used to estimate the drilling fluid flow rate, which is compared with Coriolis meter measurements. Fuzzy logic, neural network and support vector regression algorithms applied to the data from temporal and spatial ultrasonic level measurements of the drilling fluid in the open channel give estimates of its flow rate with sufficient reliability and repeatability and acceptable uncertainty, providing novel soft sensing of an important process variable. Simulations, cross-validations and experimental results show that feedforward neural networks with the Bayesian regularization learning algorithm provide the best flow rate estimates. Finally, the benefits of using this soft sensing technique combined with a Venturi constriction in open channels are discussed.
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
The objective of this project was to develop a general CFD framework for multifidelity simulations to target multiscale problems but also resilience in simulation. Subject terms: simulation, domain decomposition, CFD, gappy data, estimation theory, and gap-tooth algorithm. Related publication: Karniadakis, "Resilient algorithms for reconstructing and simulating gappy flow fields in CFD", Fluid Dynamic Research, vol. 47, 051402, 2015.
An Evaluation of the Measurement Requirements for an In-Situ Wake Vortex Detection System
NASA Technical Reports Server (NTRS)
Fuhrmann, Henri D.; Stewart, Eric C.
1996-01-01
Results of a numerical simulation are presented to determine the feasibility of estimating the location and strength of a wake vortex from imperfect in-situ measurements. These estimates could be used to provide information to a pilot on how to avoid a hazardous wake vortex encounter. An iterative algorithm based on the method of secants was used to solve the four simultaneous equations describing the two-dimensional flow field around a pair of parallel counter-rotating vortices of equal and constant strength. The flow field information used by the algorithm could be derived from measurements from flow angle sensors mounted on the wing-tip of the detecting aircraft and an inertial navigation system. The study determined the propagated errors in the estimated location and strength of the vortex which resulted from random errors added to theoretically perfect measurements. The results are summarized in a series of charts and a table which make it possible to estimate these propagated errors for many practical situations. The situations include several generator-detector airplane combinations, different distances between the vortex and the detector airplane, as well as different levels of total measurement error.
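The forward model inverted by such an algorithm is the classical induced-velocity field of a pair of two-dimensional counter-rotating point vortices; a sketch of that model (hypothetical variable names) is given below. Four measurements of this field, e.g., flow angles at two wing-tip sensors, then suffice in principle for a secant-type iterative solve of vortex location and strength.

    import numpy as np

    def vortex_pair_velocity(p, cores, gamma):
        """Induced velocity at point p from two counter-rotating point vortices.

        p     : (2,) observation point (y, z)
        cores : (2, 2) vortex-core positions
        gamma : circulation magnitude; the pair has strengths +gamma, -gamma
        """
        vel = np.zeros(2)
        for core, g in zip(cores, (gamma, -gamma)):
            r = p - core
            r2 = r @ r
            # 2-D point vortex: v = Gamma / (2*pi*r^2) * (-r_z, r_y)
            vel += g / (2 * np.pi * r2) * np.array([-r[1], r[0]])
        return vel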
Molecular dynamics study of solid-liquid heat transfer and passive liquid flow
NASA Astrophysics Data System (ADS)
Yesudasan Daisy, Sumith
High heat flux removal is a challenging problem in boilers, electronics cooling, concentrated photovoltaics and other power conversion devices. Heat transfer by phase change is one of the most efficient mechanisms for removing heat from a solid surface. Futuristic electronic devices are expected to generate more than 1000 W/cm2 of heat. Despite the advancements in microscale and nanoscale manufacturing, the maximum passive heat flux removal has been 300 W/cm2 in pool boiling. Such limitations can be overcome by developing nanoscale thin-film evaporation based devices, which however require a better understanding of surface interactions and the liquid-vapor phase change process. Evaporation-based passive flow is inspired by the transpiration process that happens in trees. If we can mimic this process and develop heat removal devices, then we can develop efficient cooling devices. The existing passive flow based cooling devices still need improvement to meet future demands. To improve the efficiency and capacity of these devices, we need to explore and quantify the passive flow happening at nanoscales. Experimental techniques have not advanced enough to study these fundamental phenomena at the nanoscale; an alternative is to perform theoretical studies at the nanoscale. Molecular dynamics (MD) simulation is a widely accepted and powerful tool for studying a range of fundamental and engineering problems. MD simulations can be utilized to study the passive flow mechanism and the heat transfer due to it. To study passive flow using MD, apart from the conventional methods available in MD, we need methods to simulate the heat transfer between solid and liquid, methods to calculate local pressure, surface tension, density and temperature, realistic boundary conditions, etc. Heat transfer between solids and fluids has been a challenging area in MD simulations and has only been minimally explored (especially for a practical fluid like water). Conventionally, an equilibrium canonical ensemble (NVT) is simulated using thermostat algorithms. For research in heat transfer involving solid-liquid interaction, we need to perform non-equilibrium MD (NEMD) simulations. In such NEMD simulations, the method used for simulating heating from a surface is very important and must capture proper physics and thermodynamic properties. The development of MD simulation techniques to simulate solid-liquid heating and the study of the fundamental mechanism of passive flow are the main focus of this thesis. An accurate surface-heating algorithm was developed for water which can now allow the study of a whole new set of fundamental heat transfer problems at the nanoscale, like surface heating/cooling of droplets, thin films, etc. The developed algorithm is implemented in the in-house developed C++ MD code. A direct two-dimensional local pressure estimation algorithm is also formulated and implemented in the code. With this algorithm, the local pressure of argon and platinum interaction is studied. Also, the surface tension of platinum-argon (solid-liquid) was estimated directly from the MD simulations for the first time. Contact angle estimation studies of water on platinum, and argon on platinum, were also performed. A thin film of argon is kept above a platinum plate and heated in the middle region, leading to evaporation and pressure reduction, thus creating a strong passive flow in the near-surface region.
This observed passive liquid flow is characterized by estimating the pressure, density, velocity and surface tension using an Eulerian mapping method. Using these simulations, we have demonstrated the fundamental nature and origin of surface-driven passive flow. The heat flux removed from the surface is also estimated from the results, which shows that a significant improvement can be achieved in the thermal management of electronic devices by taking advantage of surface-driven strong passive liquid flow. Further, the local pressure of water on a silicon dioxide surface is estimated using the LAMMPS atomic-to-continuum (ATC) package towards the goal of simulating passive flow in water.
NASA Astrophysics Data System (ADS)
Thiebaut, C.; Perraud, L.; Delvit, J. M.; Latry, C.
2016-07-01
We present an on-board satellite implementation of a gradient-based (optical flow) algorithm for estimating the shifts between images of a Shack-Hartmann wave-front sensor on extended landscapes. The proposed algorithm has low complexity in comparison with classical correlation methods, which is a major advantage for use on-board a satellite at high instrument data rates and in real time. The electronic board used for this implementation is designed for space applications and is composed of radiation-hardened software and hardware. Processing times of both the shift estimation and the pre-processing steps are compatible with on-board real-time computation.
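A gradient-based shift estimator in this family reduces to a small least-squares system assembled from image gradients, which is what makes it so much cheaper than correlation. A minimal single-iteration sketch for a global translation between two subaperture images (classic Lucas-Kanade normal equations, not the flight code):

    import numpy as np

    def gradient_shift(im1, im2):
        """One Lucas-Kanade step: global (dx, dy) shift between two images."""
        gy, gx = np.gradient(im1.astype(float))
        gt = im2.astype(float) - im1.astype(float)
        A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
        return np.linalg.solve(A, b)    # shift in pixels (dx, dy)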
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro air vehicles (MAV). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp based optic flow algorithm which is modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in the analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, remains in a pixel-level analog processing unit. The rest of the blocks, including feature detection and time-stamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external time-stamp processing, respectively.
NASA Technical Reports Server (NTRS)
Brown, Nelson
2013-01-01
A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. This presentation also focuses on the design of the flight experiment and the practical challenges of conducting the experiment.
Rueckauer, Bodo; Delbruck, Tobi
2016-01-01
In this study, we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: in the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera.
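The rotation-only ground truth works because, for a pure camera rotation, image motion is independent of scene depth. A hedged sketch of that flow model in normalized image coordinates (standard small-motion rotational flow equations; gyro rates wx, wy, wz in rad/s):

    def rotational_flow(x, y, wx, wy, wz):
        """Image motion (u, v) at normalized coords (x, y) for pure rotation.

        Focal length normalized to 1; depth cancels for pure rotation,
        so this field is exact ground truth up to gyro noise.
        """
        u = wx * x * y - wy * (1.0 + x * x) + wz * y
        v = wx * (1.0 + y * y) - wy * x * y - wz * x
        return u, v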
NASA Astrophysics Data System (ADS)
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2018-05-01
This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
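The NEH-based seeding step is simple enough to sketch: jobs are ordered by decreasing total processing time and inserted one by one at the best position of the growing partial sequence. The illustration below uses the standard (non-blocking) permutation flow-shop makespan for brevity; the blocking variant changes only the completion-time recursion.

    def makespan(seq, p):
        """Permutation flow-shop makespan; p[j][m] = processing time."""
        m = len(p[0])
        c = [0.0] * m
        for j in seq:
            c[0] += p[j][0]
            for k in range(1, m):
                c[k] = max(c[k], c[k - 1]) + p[j][k]
        return c[-1]

    def neh(p):
        """NEH: sort by decreasing total work, insert at best position."""
        jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
        seq = [jobs[0]]
        for j in jobs[1:]:
            seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                      key=lambda s: makespan(s, p))
        return seq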
Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm
Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney
2014-01-01
Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as FlamMap's version five module, provides valuable fire behavior functions, while enabling multi-core utilization for the...
Range Image Flow using High-Order Polynomial Expansion
2013-09-01
...included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more... Cited references include the International Journal of Computer Vision, vol. 92, no. 1, pp. 1-31, and G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, 2008.
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction which preserve power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via the Message Passing Interface are proposed. The quality of the ROMs is iteratively refined according to an error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collection. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
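The projection step at the heart of such a reduction is compact: forced responses at the interpolation frequencies form the basis, and the system matrices are projected onto it. A minimal sketch for an undamped second-order system (hypothetical matrices K, M and load vector f; not the dissertation's matrix-free variant):

    import numpy as np

    def build_rom(K, M, f, freqs):
        """Reduced stiffness/mass via projection onto forced responses.

        Columns of V solve (K - w^2 M) x = f at each interpolation
        frequency w (rad/s); QR orthonormalizes the basis.
        """
        cols = [np.linalg.solve(K - w**2 * M, f) for w in freqs]
        V, _ = np.linalg.qr(np.column_stack(cols))
        return V.T @ K @ V, V.T @ M @ V, V    # K_r, M_r, and the basis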
NASA Astrophysics Data System (ADS)
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
Digital adaptive controllers for VTOL vehicles. Volume 2: Software documentation
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.; Pratt, S. G.
1979-01-01
The VTOL approach and landing test (VALT) adaptive software is documented. Two self-adaptive algorithms, one based on an implicit model reference design and the other on an explicit parameter estimation technique were evaluated. The organization of the software, user options, and a nominal set of input data are presented along with a flow chart and program listing of each algorithm.
Parameter estimation of an ARMA model for river flow forecasting using goal programming
NASA Astrophysics Data System (ADS)
Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene
2006-11-01
River flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose, and one of the best-known techniques is the autoregressive moving average (ARMA) model. In the research reported here, the goal was to minimize the error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. The Shaloo Bridge station on the Karun River, with 68 years of observed stream flow data, was selected to evaluate the performance of the proposed method. The results, when compared with the usual method of maximum likelihood estimation, were favorable for the proposed algorithm.
Robust Low-dose CT Perfusion Deconvolution via Tensor Total-Variation Regularization
Zhang, Shaoting; Chen, Tsuhan; Sanelli, Pina C.
2016-01-01
Acute brain diseases such as acute strokes and transient ischemic attacks are the leading causes of mortality and morbidity worldwide, responsible for 9% of total deaths every year. 'Time is brain' is a widely accepted concept in acute cerebrovascular disease treatment. An efficient and accurate computational framework for hemodynamic parameter estimation can save critical time for thrombolytic therapy. Meanwhile, the high level of accumulated radiation dosage due to continuous image acquisition in CT perfusion (CTP) has raised concerns about patient safety and public health. However, low radiation leads to increased noise and artifacts, which require more sophisticated and time-consuming algorithms for robust estimation. In this paper, we focus on developing a robust and efficient framework to accurately estimate the perfusion parameters at low radiation dosage. Specifically, we present a tensor total-variation (TTV) technique which fuses the spatial correlation of the vascular structure and the temporal continuation of the blood signal flow. An efficient algorithm is proposed to find the solution with fast convergence and reduced computational complexity. Extensive evaluations are carried out in terms of sensitivity to noise levels, estimation accuracy, and contrast preservation, performed on digital perfusion phantom estimation as well as in-vivo clinical subjects. Our framework reduces the necessary radiation dose to only 8% of the original level and outperforms the state-of-the-art algorithms with peak signal-to-noise ratio improved by 32%. It reduces the oscillation in the residue functions, corrects over-estimation of cerebral blood flow (CBF) and under-estimation of mean transit time (MTT), and maintains the distinction between the deficit and normal regions.
Optimal pressure regulation of the pneumatic ventricular assist device with bellows-type driver.
Lee, Jung Joo; Kim, Bum Soo; Choi, Jaesoon; Choi, Hyuk; Ahn, Chi Bum; Nam, Kyoung Won; Jeong, Gi Seok; Lim, Choon Hak; Son, Ho Sung; Sun, Kyung
2009-08-01
The bellows-type pneumatic ventricular assist device (VAD) generates pneumatic pressure with compression of bellows instead of using an air compressor. This VAD driver has a small volume that is suitable for portable devices. However, improper pneumatic pressure setup can not only cause a lack of adequate flow generation, but also cause durability problems. In this study, a pneumatic pressure regulation system for optimal operation of the bellows-type VAD has been developed. The optimal pneumatic pressure conditions according to various afterload conditions aiming for optimal flow rates were investigated, and an afterload estimation algorithm was developed. The developed regulation system, which consists of a pressure sensor and a two-way solenoid valve, estimates the current afterload and regulates the pneumatic pressure to the optimal point for the current afterload condition. Experiments were performed in a mock circulation system. The afterload estimation algorithm showed sufficient performance with the standard deviation of error, 8.8 mm Hg. The flow rate could be stably regulated with a developed system under various afterload conditions. The shortcoming of a bellows-type VAD could be handled with this simple pressure regulation system.
Outflow monitoring of a pneumatic ventricular assist device using external pressure sensors.
Kang, Seong Min; Her, Keun; Choi, Seong Wook
2016-08-25
In this study, a new algorithm was developed for estimating the pump outflow of a pneumatic ventricular assist device (p-VAD). The pump outflow estimation algorithm was derived from the ideal gas equation and determined the change in blood-sac volume of a p-VAD using two external pressure sensors. Based on in vitro experiments, the algorithm was revised to consider the effects of structural compliance caused by volume changes in an implanted unit, an air driveline, and the pressure difference between the sensors and the implanted unit. In animal experiments, p-VADs were connected to the left ventricles and the descending aorta of three calves (70-100 kg). Their outflows were estimated using the new algorithm and compared to the results obtained using an ultrasonic blood flow meter (UBF) (TS-410, Transonic Systems Inc., Ithaca, NY, USA). The estimated and measured values had a Pearson's correlation coefficient of 0.864. The pressure sensors were installed at the external controller and connected to the air driveline on the same side as the external actuator, which made the sensors easy to manage.
Adaptive mixed finite element methods for Darcy flow in fractured porous media
NASA Astrophysics Data System (ADS)
Chen, Huangxin; Salama, Amgad; Sun, Shuyu
2016-10-01
In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.
A double-gaussian, percentile-based method for estimating maximum blood flow velocity.
Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D
2013-11-01
Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences, especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A Gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter then provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the Gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-Gaussian mixture model, allows the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
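The envelope construction can be prototyped with an off-the-shelf two-component mixture fit. A hedged sketch (hypothetical names; one time frame of velocity samples is split into noise and signal components and a chosen percentile of the signal component is returned):

    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    def envelope_velocity(velocities, pct=90.0):
        """Percentile of the signal component of a double-Gaussian fit.

        velocities : 1-D samples of flow velocity for one time frame
        """
        gm = GaussianMixture(n_components=2).fit(velocities.reshape(-1, 1))
        sig = int(np.argmax(gm.means_))     # higher-mean component = signal
        mu = gm.means_[sig, 0]
        sd = float(np.sqrt(gm.covariances_[sig, 0, 0]))
        return norm.ppf(pct / 100.0, loc=mu, scale=sd)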
Yadollahi, Azadeh; Montazeri, Aman; Azarbarzin, Ali; Moussavi, Zahra
2013-03-01
Tracheal respiratory sound analysis is a simple and non-invasive way to study the pathophysiology of the upper airway and has recently been used for acoustic estimation of respiratory flow and sleep apnea diagnosis. However, in none of the previous studies was the respiratory flow-sound relationship studied in people with obstructive sleep apnea (OSA), nor during sleep. In this study, we recorded tracheal sound, respiratory flow, and head position from eight non-OSA and 10 OSA individuals during sleep and wakefulness. We compared the flow-sound relationship and variations in model parameters from wakefulness to sleep within and between the two groups. The results show that during both wakefulness and sleep, the flow-sound relationship follows a power law, but with different parameters. Furthermore, the variations in model parameters may be representative of OSA pathology. The other objective of this study was to examine the accuracy of respiratory flow estimation algorithms during sleep: we investigated two approaches for calibrating the model parameters using known data recorded during either wakefulness or sleep. The results show that the acoustical respiratory flow estimation parameters change from wakefulness to sleep. Therefore, if the model is calibrated using wakefulness data, although the estimated respiratory flow follows the relative variations of the real flow, the quantitative flow estimation error is high during sleep. On the other hand, when the calibration parameters are extracted from tracheal sound and respiratory flow recordings during sleep, the respiratory flow estimation error is less than 10%.
A simple, remote, video based breathing monitor.
Regev, Nir; Wulich, Dov
2017-07-01
Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital-sign monitoring capabilities, but none are remote. This paper presents a simple, yet efficient, real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which is moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
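Pisarenko harmonic decomposition recovers a single sinusoid's frequency from three autocorrelation lags: the eigenvector of the smallest eigenvalue of the 3x3 autocorrelation matrix defines a polynomial whose roots lie at the sinusoid's frequency. A minimal sketch under the single-real-sinusoid-in-white-noise assumption (not the paper's full tracker):

    import numpy as np

    def pisarenko_freq(x, fs):
        """Frequency (Hz) of a single sinusoid via Pisarenko decomposition.

        x : 1-D numpy array of samples; fs : sample rate (Hz)
        """
        x = x - x.mean()
        # Autocorrelation lags 0..2 form a 3x3 Toeplitz matrix.
        r = [np.dot(x[:len(x) - k], x[k:]) / (len(x) - k) for k in range(3)]
        R = np.array([[r[0], r[1], r[2]],
                      [r[1], r[0], r[1]],
                      [r[2], r[1], r[0]]])
        _, V = np.linalg.eigh(R)
        v = V[:, 0]                     # noise eigenvector (smallest eigenvalue)
        roots = np.roots(v)             # roots of v0*z^2 + v1*z + v2
        return abs(np.angle(roots[0])) * fs / (2 * np.pi)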
Improved optical flow motion estimation for digital image stabilization
NASA Astrophysics Data System (ADS)
Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao
2015-11-01
Optical flow is the instantaneous motion vector at each pixel in the image frame at a time instant. The gradient-based approach to optical flow computation does not work well when the video motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramid multi-resolution coarse-to-fine search strategy: a pyramid strategy is used to obtain multi-resolution images; an iterative relationship from the highest level to the lowest level yields the inter-frame affine parameters; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method has good performance in global motion estimation.
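OpenCV exposes exactly this coarse-to-fine strategy through pyramidal Lucas-Kanade tracking; a usage sketch is given below (prev_frame and curr_frame are assumed inputs), with the tracked points then fitted by an affine motion model of the kind used for stabilization.

    import cv2

    # prev_frame, curr_frame: consecutive BGR frames (assumed inputs).
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None,
        winSize=(21, 21), maxLevel=3)   # 4-level coarse-to-fine pyramid
    ok = status.flatten() == 1
    # Fit the inter-frame affine motion used to compensate back to frame 1.
    affine, _ = cv2.estimateAffine2D(pts[ok], new_pts[ok])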
NASA Astrophysics Data System (ADS)
Mellal, Idir; Laghrouche, Mourad; Bui, Hung Tien
2017-04-01
This paper describes a non-invasive system for respiratory monitoring using a Micro Electro Mechanical Systems (MEMS) flow sensor and an IMU (Inertial Measurement Unit) accelerometer. The designed system is intended to be wearable and used in a hospital or at home to assist people with respiratory disorders. To ensure the accuracy of our system, we propose a calibration method based on an ANN (Artificial Neural Network) to compensate for the temperature drift of the silicon flow sensor. The sigmoid activation functions used in the ANN model were computed with the CORDIC (COordinate Rotation DIgital Computer) algorithm. This algorithm was also used to estimate the tilt angle of the body position. The design was implemented on a reconfigurable FPGA platform.
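CORDIC evaluates rotations and arctangents with shifts and adds only, which is why it suits both the sigmoid evaluation and the accelerometer tilt-angle step on an FPGA. A software sketch of vectoring-mode CORDIC (floating point here for readability; the hardware version uses fixed point and a small lookup table of atan(2^-i) constants):

    import math

    def cordic_atan2(y, x, iters=16):
        """Vectoring-mode CORDIC: rotate (x, y) onto the x-axis, accumulate
        the angle. Valid for x > 0; e.g. tilt = cordic_atan2(acc_y, acc_z).
        """
        angle = 0.0
        for i in range(iters):
            d = 1.0 if y < 0 else -1.0           # rotate toward y = 0
            x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
            angle -= d * math.atan(2.0**-i)      # hardware reads this from a table
        return angle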
Peck, Jay; Oluwole, Oluwayemisi O; Wong, Hsi-Wu; Miake-Lye, Richard C
2013-03-01
To provide accurate input parameters to large-scale global climate simulation models, an algorithm was developed to estimate the black carbon (BC) mass emission index for engines in the commercial fleet at cruise. Using a high-dimensional model representation (HDMR) global sensitivity analysis, relevant engine specification/operation parameters were ranked, and the most important parameters were selected. Simple algebraic formulas were then constructed based on those important parameters. The algorithm takes the cruise power (alternatively, fuel flow rate), altitude, and Mach number as inputs, and calculates the BC emission index for a given engine/airframe combination using the engine property parameters, such as the smoke number, available in the International Civil Aviation Organization (ICAO) engine certification databank. The algorithm can be interfaced with state-of-the-art aircraft emissions inventory development tools, and will greatly improve global climate simulations that currently use a single fleet-average value for all airplanes. In summary, an algorithm to estimate the cruise-condition black carbon emission index for commercial aircraft engines was developed; using the ICAO certification data, it can evaluate the black carbon emission at a given cruise altitude and speed.
Driving mechanism of unsteady separation shock motion in hypersonic interactive flow
NASA Technical Reports Server (NTRS)
Dolling, D. S.; Narlo, J. C., II
1987-01-01
Wall pressure fluctuations were measured under the unsteady separation shock waves in Mach 5 turbulent interactions induced by unswept circular cylinders on a flat plate. The wall temperature was adiabatic. A conditional sampling algorithm was developed to examine the statistics of the shock wave motion. The same algorithm was used to examine data taken in earlier studies in the Princeton University Mach 3 blowdown tunnel. In these earlier studies, hemicylindrically blunted fins of different leading-edge diameters were tested in boundary layers which developed on the tunnel floor and on a flat plate. A description of the algorithm, the reasons why it was developed, and the sensitivity of the results to the threshold settings are discussed. The results from the algorithm, together with cross correlations and power spectral density estimates, suggest that the shock motion is driven by the low-frequency unsteadiness of the downstream separated, vortical flow.
Variable parameter McCarthy-Muskingum routing method considering lateral flow
NASA Astrophysics Data System (ADS)
Yadav, Basant; Perumal, Muthiah; Bardossy, Andras
2015-04-01
The fully mass-conservative variable parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price (2013) for routing floods in channels and rivers without considering lateral flow is extended herein to account for a uniformly distributed lateral flow contribution along the reach. The proposed procedure is applied to study flood wave movement in a 24.2 km river stretch between the Rottweil and Oberndorf gauging stations of the Neckar River in Germany, wherein a significant lateral flow contribution from intermediate catchment rainfall prevails during flood wave movement. The geometrical elements of the cross-sectional information of the considered routing river stretch, without considering lateral flow, are estimated using the Robust Parameter Estimation (ROPE) algorithm, which allows for arriving at the best-performing set of bed width and side slope of a trapezoidal section. The performance of the VPMM method is evaluated using the Nash-Sutcliffe model efficiency criterion as the objective function to be maximized by the ROPE algorithm. Twenty-seven flood events in the calibration set are considered to identify the relationship between total rainfall and total losses as well as to optimize the geometric characteristics of the prismatic channel (width and slope of the trapezoidal section). Based on this analysis, a relationship between total rainfall and total loss of the intermediate catchment is obtained and then used to estimate the lateral flow in the reach. Assuming the lateral flow hydrograph has the same form as the inflow hydrograph, and using the total intervening catchment runoff estimated from the relationship, the uniformly distributed lateral flow rate qL at any instant of time is estimated for use in the VPMM routing method. All 27 flood events are simulated using this routing approach considering lateral flow along the reach. Many of these simulations reproduce the observed hydrographs very closely. The proposed approach of accounting for lateral flow using the VPMM method is independently verified by routing the flood hydrographs of 6 flood events which were not used in establishing the total rainfall vs. total loss relationship for the intervening catchment of the studied river reach. Close reproduction of the outflow hydrographs of these independent events using the proposed VPMM method accounting for lateral flow demonstrates the practical utility of the method.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
A hierarchical framework for air traffic control
NASA Astrophysics Data System (ADS)
Roy, Kaushik
Air travel in recent years has been plagued by record delays, with over $8 billion in direct operating costs being attributed to 100 million flight delay minutes in 2007. Major contributing factors to delay include weather, congestion, and aging infrastructure; the Next Generation Air Transportation System (NextGen) aims to alleviate these delays through an upgrade of the air traffic control system. Changes to large-scale networked systems such as air traffic control are complicated by the need for coordinated solutions over disparate temporal and spatial scales. Individual air traffic controllers must ensure aircraft maintain safe separation locally with a time horizon of seconds to minutes, whereas regional plans are formulated to efficiently route flows of aircraft around weather and congestion on the order of every hour. More efficient control algorithms that provide a coordinated solution are required to safely handle a larger number of aircraft in a fixed amount of airspace. Improved estimation algorithms are also needed to provide accurate aircraft state information and situational awareness for human controllers. A hierarchical framework is developed to simultaneously solve the sometimes conflicting goals of regional efficiency and local safety. Careful attention is given in defining the interactions between the layers of this hierarchy. In this way, solutions to individual air traffic problems can be targeted and implemented as needed. First, the regional traffic flow management problem is posed as an optimization problem and shown to be NP-Hard. Approximation methods based on aggregate flow models are developed to enable real-time implementation of algorithms that reduce the impact of congestion and adverse weather. Second, the local trajectory design problem is solved using a novel slot-based sector model. This model is used to analyze sector capacity under varying traffic patterns, providing a more comprehensive understanding of how increased automation in NextGen will affect the overall performance of air traffic control. The dissertation also provides solutions to several key estimation problems that support corresponding control tasks. Throughout the development of these estimation algorithms, aircraft motion is modeled using hybrid systems, which encapsulate both the discrete flight mode of an aircraft and the evolution of continuous states such as position and velocity. The target-tracking problem is posed as one of hybrid state estimation, and two new algorithms are developed to exploit structure specific to aircraft motion, especially near airports. First, discrete mode evolution is modeled using state-dependent transitions, in which the likelihood of changing flight modes is dependent on aircraft state. Second, an estimator is designed for systems with limited mode changes, including arrival aircraft. Improved target tracking facilitates increased safety in collision avoidance and trajectory design problems. A multiple-target tracking and identity management algorithm is developed to improve situational awareness for controllers about multiple maneuvering targets in a congested region. Finally, tracking algorithms are extended to predict aircraft landing times; estimated time of arrival prediction is one example of important decision support information for air traffic control.
FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision
Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe
2011-01-01
Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features found in mammalian vision, which demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
NASA Technical Reports Server (NTRS)
Matthews, Bryan L.; Srivastava, Ashok N.
2010-01-01
Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
NASA Astrophysics Data System (ADS)
Hou, Huirang; Zheng, Dandan; Nie, Laixiao
2015-04-01
For gas ultrasonic flowmeters, the signals received by ultrasonic sensors are susceptible to noise interference. If signals are mingled with noise, a large error in flow measurement can be caused by mistaken triggering of the traditional double-threshold method. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, the improvement of processing only the first three cycles of the received signals rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, double-threshold and enveloped zero-crossing. Local convergence doesn't appear with the GACO algorithm until -10 dB. For the GACO algorithm, the converging accuracy, converging speed and the amount of computation are further improved when using the first three cycles (called GACO-3cycles). Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.
Exploring SWOT discharge algorithm accuracy on the Sacramento River
NASA Astrophysics Data System (ADS)
Durand, M. T.; Yoon, Y.; Rodriguez, E.; Minear, J. T.; Andreadis, K.; Pavelsky, T. M.; Alsdorf, D. E.; Smith, L. C.; Bales, J. D.
2012-12-01
Scheduled for launch in 2019, the Surface Water and Ocean Topography (SWOT) satellite mission will utilize a Ka-band radar interferometer to measure river heights, widths, and slopes, globally, as well as characterize storage change in lakes and ocean surface dynamics with a spatial resolution ranging from 10 - 70 m, with temporal revisits on the order of a week. A discharge algorithm has been formulated to solve the inverse problem of characterizing river bathymetry and the roughness coefficient from SWOT observations. The algorithm uses a Bayesian Markov chain estimation approach, treats rivers as sets of interconnected reaches (typically 5 km - 10 km in length), and produces best estimates of river bathymetry, roughness coefficient, and discharge, given SWOT observables. AirSWOT (the airborne version of SWOT) consists of a radar interferometer similar to SWOT, but mounted aboard an aircraft. AirSWOT spatial resolution will range from 1 - 35 m. In early 2013, AirSWOT will perform several flights over the Sacramento River, capturing river height, width, and slope at several different flow conditions. The Sacramento River presents an excellent target given that the river includes some stretches heavily affected by management (diversions, bypasses, etc.). AirSWOT measurements will be used to validate SWOT observation performance, but are also a unique opportunity for testing and demonstrating the capabilities and limitations of the discharge algorithm. This study uses HEC-RAS simulations of the Sacramento River, first, to characterize expected discharge algorithm accuracy on the Sacramento River and, second, to explore the AirSWOT measurements required for a successful inversion with the discharge algorithm. We focus on several specific research questions affecting algorithm performance: 1) To what extent do lateral inflows confound algorithm performance? We examine the ~100 km stretch of river from Colusa, CA to the Yolo Bypass, and investigate how varying degrees of lateral flow affect algorithm performance. 2) To what extent does a simple slope-area method (i.e. Manning's equation) applied to river reaches accurately describe river discharge? 3) How accurately does the algorithm invert for the river bathymetry and roughness coefficient? Finally, we explore the sensitivity of the algorithm to the number of AirSWOT flights and to AirSWOT measurement precision for various river flow scenarios.
Unsteady flow sensing and optimal sensor placement using machine learning
NASA Astrophysics Data System (ADS)
Semaan, Richard
2016-11-01
Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds-averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the "random forests" algorithm is utilized beyond its typical usage in fluid mechanics, estimating the flow state, to also determine the optimal sensor placement. The results are compared against the current de facto standard of maximum modal amplitude location and against a brute-force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of the flow and determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.
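As a rough illustration of the sensor-placement idea described above, the sketch below trains a random forest on synthetic stand-ins for uRANS sensor readings and ranks candidate sensor locations by feature importance; all array sizes, labels, and the informative-sensor indices are hypothetical, not taken from the paper.

```python
# Hedged sketch: ranking candidate sensor locations by random-forest
# feature importance for flow-state classification. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_snapshots, n_sensors = 400, 50                  # hypothetical sizes
X = rng.normal(size=(n_snapshots, n_sensors))     # stand-in sensor readings
y = (X[:, 7] + 0.5 * X[:, 23] > 0).astype(int)    # sensors 7 and 23 informative

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Sensors whose readings matter most for inferring the separation state:
ranking = np.argsort(forest.feature_importances_)[::-1]
print("top candidate sensor locations:", ranking[:5])   # 7 and 23 lead
```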
An Uncertainty Quantification Framework for Remote Sensing Retrievals
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Hobbs, J.
2017-12-01
Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
Real-Time Feedback Control of Flow-Induced Cavity Tones. Part 2; Adaptive Control
NASA Technical Reports Server (NTRS)
Kegerise, M. A.; Cabell, R. H.; Cattafesta, L. N., III
2006-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. Past input-output data and an estimate of the open-loop pulse response sequence are all that is needed to implement the algorithm for application at fixed Mach numbers. Transient measurements made during controller adaptation revealed that the controller coefficients converged to a steady state in the mean, and this implies that adaptation can be turned off at some point with no degradation in control performance. When converged, the control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. However, as in the case of fixed-gain GPC, the adaptive GPC performance was limited by spillover in sidebands around the suppressed Rossiter modes. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Beyond this range, stable operation of the control algorithm was not possible due to the fixed plant model in the algorithm.
Reitz, Meredith; Sanford, Ward E.; Senay, Gabriel; Cazenas, J.
2017-01-01
This study presents new data-driven, annual estimates of the division of precipitation into the recharge, quick-flow runoff, and evapotranspiration (ET) water budget components for 2000-2013 for the contiguous United States (CONUS). The algorithms used to produce these maps ensure water budget consistency over this broad spatial scale, with contributions from precipitation influx attributed to each component at 800 m resolution. The quick-flow runoff estimates for the contribution to the rapidly varying portion of the hydrograph are produced using data from 1,434 gaged watersheds, and depend on precipitation, soil saturated hydraulic conductivity, and surficial geology type. Evapotranspiration estimates are produced from a regression using water balance data from 679 gaged watersheds and depend on land cover, temperature, and precipitation. The quick-flow and ET estimates are combined to calculate recharge as the remainder of precipitation. The ET and recharge estimates are checked against independent field data, and the results show good agreement. Comparisons of recharge estimates with groundwater extraction data show that in 15% of the country, groundwater is being extracted at rates higher than the local recharge. These maps of the internally consistent water budget components of recharge, quick-flow runoff, and ET, being derived from and tested against data, are expected to provide reliable first-order estimates of these quantities across the CONUS, even where field measurements are sparse.
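The water-budget closure described above reduces, per grid cell, to treating recharge as the remainder of precipitation; a minimal sketch with illustrative numbers (not values from the study) follows.

```python
# Minimal sketch of per-cell water-budget closure: recharge is whatever
# precipitation is not lost to quick-flow runoff or evapotranspiration.
# Values are illustrative (mm/yr); the real product is an 800 m gridded map.
precipitation = 900.0   # mm/yr for a hypothetical cell
quick_flow = 120.0      # from the gaged-watershed runoff regression
et = 550.0              # from the land-cover/temperature/precipitation regression

recharge = precipitation - quick_flow - et
print(f"recharge = {recharge:.0f} mm/yr")   # 230 mm/yr
```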
Computation of fluid flow and pore-space properties estimation on micro-CT images of rock samples
NASA Astrophysics Data System (ADS)
Starnoni, M.; Pokrajac, D.; Neilson, J. E.
2017-09-01
Accurate determination of the petrophysical properties of rocks, namely REV, mean pore and grain size and absolute permeability, is essential for a broad range of engineering applications. Here, the petrophysical properties of rocks are calculated using an integrated approach comprising image processing, statistical correlation and numerical simulations. The Stokes equations of creeping flow for incompressible fluids are solved using the Finite-Volume SIMPLE algorithm. Simulations are then carried out on three-dimensional digital images obtained from micro-CT scanning of two rock formations: one sandstone and one carbonate. Permeability is predicted from the computed flow field using Darcy's law. It is shown that REV, REA and mean pore and grain size are effectively estimated using the two-point spatial correlation function. Homogeneity and anisotropy are also evaluated using the same statistical tools. A comparison of different absolute permeability estimates is also presented, revealing a good agreement between the numerical value and the experimentally determined one for the carbonate sample, but a large discrepancy for the sandstone. Finally, a new convergence criterion for the SIMPLE algorithm, and more generally for the family of pressure-correction methods, is presented. This criterion is based on satisfaction of bulk momentum balance, which makes it particularly useful for pore-scale modelling of reservoir rocks.
Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Smirnova, Z. N.
2015-05-01
Human emotion identification from image sequences is in high demand nowadays. The range of possible applications varies from the automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions leads to the complexity of their classification and identification. The main question arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions were developed in musical psychology. This work is focused on the identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for estimating facial feature speed and position is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give a robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical backgrounds or mood-dependent radio.
NASA Astrophysics Data System (ADS)
Ribeiro, J. B.; Silva, C.; Mendes, R.
2010-10-01
A real coded genetic algorithm methodology developed for the estimation of the parameters of the reaction rate equation of the Lee-Tarver reactive flow model is described in detail. This methodology allows the 15 parameters of the reaction rate equation that fit the numerical results to the experimental ones to be sought in a single optimization procedure, using only one experimental result and without the need for any starting solution. Mass averaging and the plate-gap model have been used for the determination of the shock data used in the unreacted explosive JWL equation of state (EOS) assessment, and the thermochemical code THOR provided the data used in the detonation products' JWL EOS assessment. The developed methodology was applied to the estimation of the referred parameters for an ammonium nitrate-based emulsion explosive using poly(methyl methacrylate) (PMMA)-embedded manganin gauge pressure-time data. The obtained parameters allow a reasonably good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.
Dikbas, Salih; Altunbasak, Yucel
2013-08-01
In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) to reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality-interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation using forward and backward MVs is applied by mixing both elegantly. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and smoothness constraint optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
Setting the scene for SWOT: global maps of river reach hydrodynamic variables
NASA Astrophysics Data System (ADS)
Schumann, Guy J.-P.; Durand, Michael; Pavelsky, Tamlin; Lion, Christine; Allen, George
2017-04-01
Credible and reliable characterization of discharge from the Surface Water and Ocean Topography (SWOT) mission using the Manning-based algorithms needs a prior estimate constraining reach-scale channel roughness, base flow and river bathymetry. For some places, any one of those variables may exist locally or even regionally as a measurement, which is often only at a station, or sometimes as a basin-wide model estimate. However, to date none of those exist at the scale required for SWOT and thus need to be mapped at a continental scale. The prior estimates will be employed for producing initial discharge estimates, which will be used as starting-guesses for the various Manning-based algorithms, to be refined using the SWOT measurements themselves. A multitude of reach-scale variables were derived, including Landsat-based width, SRTM slope and accumulation area. As a possible starting point for building the prior database of low flow, river bathymetry and channel roughness estimates, we employed a variety of sources, including data from all GRDC records, simulations from the long-time runs of the global water balance model (WBM), and reach-based calculations from hydraulic geometry relationships as well as Manning's equation. Here, we present the first global maps of this prior database with some initial validation, caveats and prospective uses.
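Since the prior database feeds Manning-based discharge algorithms, the reach-scale first-guess computation is essentially Manning's equation; the sketch below shows that calculation with illustrative channel numbers (not values from the database).

```python
# Hedged sketch of a reach-scale Manning first-guess discharge:
# Q = (1/n) * A * R**(2/3) * sqrt(S), in SI units.
def manning_discharge(n, area, hydraulic_radius, slope):
    """Discharge (m^3/s) from Manning's n, flow area (m^2), R (m), slope (-)."""
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Illustrative wide reach: width 200 m, depth 3 m, n = 0.03, slope 1e-4.
area = 200.0 * 3.0
radius = area / (200.0 + 2 * 3.0)   # area / wetted perimeter
print(f"Q ~ {manning_discharge(0.03, area, radius, 1e-4):.0f} m^3/s")  # ~408
```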
Inverse Problems in Geodynamics Using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.
2018-01-01
During the past few decades numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, geodynamical models must incorporate many properties from mineral physics, geochemistry, and petrology. Machine learning, as a computational statistics-related technique and a subfield of artificial intelligence, has rapidly emerged recently in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms in predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques in solving an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at midmantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms for putting constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
Optical Flow Analysis and Kalman Filter Tracking in Video Surveillance Algorithms
2007-06-01
Thesis by David A. Semko, June 2007 (Thesis Advisor: Monique P. Fargues). Cites Grover Brown and Patrick Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, Third edition, John Wiley & Sons, New York, 1997: Brown and Hwang [6] achieve this improvement by linearly blending the prior estimate, x̂_k^-, with the noisy measurement, z_k.
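For context, the blending Brown and Hwang describe is the scalar Kalman measurement update; a minimal sketch (illustrative numbers, not from the thesis) follows.

```python
# Minimal sketch of the Kalman measurement update: the posterior estimate
# linearly blends the prior estimate with the noisy measurement via the gain.
def kalman_update(x_prior, p_prior, z, r):
    """Blend prior (variance p_prior) with measurement z (noise variance r)."""
    k = p_prior / (p_prior + r)            # Kalman gain
    x_post = x_prior + k * (z - x_prior)   # linear blend of prior and measurement
    p_post = (1.0 - k) * p_prior           # posterior variance shrinks
    return x_post, p_post

print(kalman_update(x_prior=1.0, p_prior=0.5, z=1.4, r=0.25))  # (~1.27, ~0.17)
```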
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
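To make the moment-estimation baseline concrete, the sketch below simulates the simplest PARMA member, a periodic AR(1), and recovers its seasonal coefficients from the seasonal Yule-Walker relation; parameters are illustrative, not the Rio Caroni values.

```python
# Hedged sketch: seasonal Yule-Walker (moment) estimation for a periodic
# AR(1), x_t = phi_{t mod s} * x_{t-1} + e_t. Illustrative parameters.
import numpy as np

rng = np.random.default_rng(1)
phi = np.array([0.8, 0.3, -0.4, 0.6])   # one AR coefficient per season
s, n = len(phi), 8000

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t % s] * x[t - 1] + rng.normal()

# For each season: phi_s = E[x_t x_{t-1} | season s] / E[x_{t-1}^2 | season s]
phi_hat = []
for season in range(s):
    t = np.arange(season, n, s)
    t = t[t > 0]
    phi_hat.append(np.mean(x[t] * x[t - 1]) / np.mean(x[t - 1] ** 2))

print(np.round(phi_hat, 2))   # close to [0.8, 0.3, -0.4, 0.6]
```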
Coherent multiscale image processing using dual-tree quaternion wavelets.
Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G
2008-07-01
The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
Hasin, Tal; Huebner, Marianne; Li, Zhuo; Brown, Daniel; Stulak, John M; Boilson, Barry A; Joyce, Lyle; Pereira, Naveen L; Kushwaha, Sudhir S; Park, Soon J
2014-01-01
Cardiac output (CO) assessment is important in treating patients with heart failure. Durable left ventricular assist devices (LVADs) provide essentially all CO. In currently used LVADs, estimated device flow is generated by a computerized algorithm. However, the LVAD flow estimate may be inaccurate in tracking true CO. We correlated LVAD (HeartMate II) flow with thermodilution CO during postoperative care (day 2-10 after implant) in 81 patients (5,616 paired measurements). Left ventricular assist device flow and CO correlated with a low correlation coefficient (r = 0.42). Left ventricular assist device readings were lower than CO measurements by approximately 0.36 L/min, with a trend toward larger differences at higher values. Left ventricular assist device flow measurements showed less temporal variability compared with CO. Grouping by simultaneously measured blood pressure (BP < 60, 60-70, 70-80, 80-90, and ≥90), the correlation of CO with LVAD flow differed (R = 0.42, 0.67, 0.48, 0.32, 0.32, respectively), indicating better correlation when mean blood pressure is 60 to 70 mm Hg. Left ventricular assist device flow generally trends with measured CO, but large variability exists; hence, flow measures should not be assumed to equal CO. Clinicians should take into account variables such as high CO, BP, and opening of the aortic valve when interpreting the LVAD flow readout. Direct flow sensors incorporated in the LVAD system may allow for better estimation.
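The grouped-correlation analysis above amounts to binning paired readings by mean arterial pressure and computing Pearson r within each bin; a sketch with synthetic data (the study's 5,616 measurement pairs are not reproduced here) follows.

```python
# Illustrative sketch of correlating device-flow estimates with
# thermodilution CO within blood-pressure bins. Synthetic data only.
import numpy as np

rng = np.random.default_rng(2)
co = rng.uniform(3.0, 7.0, 1000)                        # thermodilution CO, L/min
lvad_flow = co - 0.36 + rng.normal(0.0, 0.8, co.size)   # biased, noisy readout
map_bp = rng.uniform(50.0, 100.0, co.size)              # mean arterial pressure

for lo, hi in [(60, 70), (70, 80), (80, 90)]:
    mask = (map_bp >= lo) & (map_bp < hi)
    r = np.corrcoef(co[mask], lvad_flow[mask])[0, 1]
    print(f"BP {lo}-{hi} mm Hg: r = {r:.2f}")
```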
Rényi information flow in the Ising model with single-spin dynamics.
Deng, Zehui; Wu, Jinshan; Guo, Wenan
2014-12-01
The n-index Rényi mutual information and transfer entropies for the two-dimensional kinetic Ising model with arbitrary single-spin dynamics in the thermodynamic limit are derived as functions of ensemble averages of observables and spin-flip probabilities. Cluster Monte Carlo algorithms with different dynamics from the single-spin dynamics are thus applicable to estimate the transfer entropies. By means of Monte Carlo simulations with the Wolff algorithm, we calculate the information flows in the Ising model with the Metropolis dynamics and the Glauber dynamics, respectively. We find that not only the global Rényi transfer entropy, but also the pairwise Rényi transfer entropy, peaks in the disorder phase.
NASA Astrophysics Data System (ADS)
Orłowska-Szostak, Maria; Orłowski, Ryszard
2017-11-01
The paper discusses some relevant aspects of the calibration of a computer model describing flows in a water supply system. The authors described an exemplary water supply system and used it as a practical illustration of calibration. A range of measures was discussed and applied that improves the convergence and effective use of calculations in the calibration process, and thereby the validity of the results obtained. The processing of the measurement results, i.e., the estimation of pipe roughnesses, was performed using a genetic algorithm implemented in software developed by the Resan Labs company of Brazil.
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu
2018-02-01
State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources for those SOC estimation methods. This paper firstly reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, the error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with the consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of the promising online SOC estimation methods is suggested.
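As a concrete instance of the measurement-level error sources the review traces, the sketch below shows the simplest online estimator, coulomb counting, accumulating a small current-sensor bias into SOC error; the battery capacity and bias values are hypothetical.

```python
# Hedged sketch: coulomb-counting SOC estimation, showing how a constant
# current-sensor bias (a signal-measurement error source) accumulates.
import numpy as np

capacity_ah = 50.0                          # hypothetical pack capacity
dt_h = 1.0 / 3600.0                         # 1 s steps, in hours
true_current = np.full(3600, 25.0)          # 25 A discharge for one hour
measured_current = true_current + 0.5       # 0.5 A sensor bias

soc_true = soc_est = 1.0
for i_true, i_meas in zip(true_current, measured_current):
    soc_true -= i_true * dt_h / capacity_ah
    soc_est -= i_meas * dt_h / capacity_ah

print(f"SOC error after 1 h: {abs(soc_est - soc_true) * 100:.2f}%")  # ~1%
```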
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
NASA Technical Reports Server (NTRS)
Mikic, I.; Krucinski, S.; Thomas, J. D.
1998-01-01
This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. Best contour shape and position are then calculated, assuming that at this configuration a global energy function, associated with a contour, attains its minimum. Active contours can be used for tracking by selecting a solution from a previous frame as an initial position in a present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, aortic root and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.
Real-time detection of moving objects from moving vehicles using dense stereo and optical flow
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2004-01-01
Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
Investigation of Convection and Pressure Treatment with Splitting Techniques
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei; Liou, Meng-Sing
1995-01-01
Treatment of convective and pressure fluxes in the Euler and Navier-Stokes equations using splitting formulas for convective velocity and pressure is investigated. Two schemes - controlled variation scheme (CVS) and advection upstream splitting method (AUSM) - are explored for their accuracy in resolving sharp gradients in flows involving moving or reflecting shock waves as well as a one-dimensional combusting flow with a strong heat release source term. For two-dimensional compressible flow computations, these two schemes are implemented in one of the pressure-based algorithms, whose very basis is the separate treatment of convective and pressure fluxes. For the convective fluxes in the momentum equations as well as the estimation of mass fluxes in the pressure correction equation (which is derived from the momentum and continuity equations) of the present algorithm, both first- and second-order (with minmod limiter) flux estimations are employed. Some issues resulting from the conventional use in pressure-based methods of a staggered grid, for the location of velocity components and pressure, are also addressed. Using the second-order fluxes, both CVS and AUSM type schemes exhibit sharp resolution. Overall, the combination of upwinding and splitting for the convective and pressure fluxes separately exhibits robust performance for a variety of flows and is particularly amenable for adoption in pressure-based methods.
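The second-order flux estimation mentioned above relies on the minmod limiter; a minimal sketch of that limiter and a limited cell slope follows (the flow-solver context is omitted).

```python
# Hedged sketch of the minmod limiter: take the smaller one-sided slope when
# the two agree in sign, and zero otherwise, suppressing oscillations at shocks.
def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slope(u, i):
    """Limited slope in cell i from one-sided differences of field u."""
    return minmod(u[i] - u[i - 1], u[i + 1] - u[i])

u = [0.0, 0.0, 1.0, 1.0]     # a discrete step in the solution
print(limited_slope(u, 1))   # 0.0: the limiter flattens the slope at the jump
print(limited_slope(u, 2))   # 0.0 on the other side of the discontinuity
```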
Schroeder, Lee F; Robilotti, Elizabeth; Peterson, Lance R; Banaei, Niaz; Dowdy, David W
2014-02-01
Clostridium difficile infection (CDI) is the most common cause of infectious diarrhea in health care settings, and for patients presumed to have CDI, their isolation while awaiting laboratory results is costly. Newer rapid tests for CDI may reduce this burden, but the economic consequences of different testing algorithms remain unexplored. We used decision analysis from the hospital perspective to compare multiple CDI testing algorithms for adult inpatients with suspected CDI, assuming patient management according to laboratory results. CDI testing strategies included combinations of on-demand PCR (odPCR), batch PCR, lateral-flow diagnostics, plate-reader enzyme immunoassay, and direct tissue culture cytotoxicity. In the reference scenario, algorithms incorporating rapid testing were cost-effective relative to nonrapid algorithms. For every 10,000 symptomatic adults, relative to a strategy of treating nobody, lateral-flow glutamate dehydrogenase (GDH)/odPCR generated 831 true-positive results and cost $1,600 per additional true-positive case treated. Stand-alone odPCR was more effective and more expensive, identifying 174 additional true-positive cases at $6,900 per additional case treated. All other testing strategies were dominated by (i.e., more costly and less effective than) stand-alone odPCR or odPCR preceded by lateral-flow screening. A cost-benefit analysis (including estimated costs of missed cases) favored stand-alone odPCR in most settings but favored odPCR preceded by lateral-flow testing if a missed CDI case resulted in less than $5,000 of extended hospital stay costs and <2 transmissions, if lateral-flow GDH diagnostic sensitivity was >93%, or if the symptomatic carrier proportion among the toxigenic culture-positive cases was >80%. These results can aid guideline developers and laboratory directors who are considering rapid testing algorithms for diagnosing CDI.
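The dollar figures quoted above follow the standard incremental cost-effectiveness arithmetic; the sketch below reproduces the ratio structure with the abstract's case counts (the total-cost figure is back-calculated for illustration, not reported in the abstract).

```python
# Illustrative incremental cost-effectiveness ratio (ICER):
# ICER = (cost_B - cost_A) / (true_positives_B - true_positives_A).
def icer(cost_a, tp_a, cost_b, tp_b):
    return (cost_b - cost_a) / (tp_b - tp_a)

# Per 10,000 symptomatic adults: GDH/odPCR finds 831 cases; stand-alone
# odPCR finds 174 more. A back-calculated extra spend of ~$1,200,600
# reproduces the quoted $6,900 per additional case treated.
print(f"${icer(0.0, 831, 1_200_600.0, 831 + 174):,.0f} per additional case")
```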
A new contrast-assisted method in microcirculation volumetric flow assessment
NASA Astrophysics Data System (ADS)
Lu, Sheng-Yi; Chen, Yung-Sheng; Yeh, Chih-Kuang
2007-03-01
Microcirculation volumetric flow rate is a significant index in the diagnosis and treatment of diseases such as diabetes and cancer. In this study, we propose an integrated algorithm to assess microcirculation volumetric flow rate, including estimation of the blood perfused area and the corresponding flow velocity maps, based on the high frequency destruction/contrast replenishment imaging technique. The perfused area indicates the blood flow regions, including capillaries, arterioles and venules. Owing to the echo variance between the pre- and post-destruction images of ultrasonic contrast agents (UCAs), the perfused area can be estimated by a correlation-based approach. The flow velocity distribution within the perfused area can be estimated from the refilling time-intensity curves (TICs) after UCA destruction. Most studies introduced the rising exponential model proposed by Wei (1998) to fit the TICs. Nevertheless, we found that the TIC profile bears a great resemblance to the sigmoid function in simulation and in vitro experimental results. The good fitting correlation reveals that the sigmoid model describes the destruction/contrast replenishment phenomenon more faithfully. We derived that the saddle point of the sigmoid model is proportional to the blood flow velocity. A strong linear relationship (R = 0.97) between the actual flow velocities (0.4-2.1 mm/s) and the estimated saddle constants was found in M-mode and B-mode flow phantom experiments. Potential applications of this technique include high-resolution volumetric flow rate assessment in small animal tumors and the evaluation of superficial vasculature in clinical studies.
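A minimal sketch of the sigmoid TIC fit described above, using synthetic replenishment data: the fitted inflection point stands in for the saddle constant the authors relate to flow velocity (the exact model form and parameter names here are assumptions).

```python
# Hedged sketch: fit a sigmoid to a destruction/replenishment time-intensity
# curve; the inflection time t0 plays the role of the "saddle" parameter.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, a, k, t0):
    return a / (1.0 + np.exp(-k * (t - t0)))

t = np.linspace(0.0, 20.0, 200)     # seconds after UCA destruction
rng = np.random.default_rng(3)
intensity = sigmoid(t, 1.0, 0.8, 6.0) + rng.normal(0.0, 0.02, t.size)

(a, k, t0), _ = curve_fit(sigmoid, t, intensity, p0=(1.0, 0.5, 5.0))
print(f"plateau = {a:.2f}, steepness = {k:.2f} 1/s, inflection t0 = {t0:.2f} s")
```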
Masterlark, Timothy; Lu, Zhong; Rykhus, Russell P.
2006-01-01
Interferometric synthetic aperture radar (InSAR) imagery documents the consistent subsidence, during the interval 1992–1999, of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine Volcano, Alaska. We construct finite element models (FEMs) that simulate thermoelastic contraction of the PFD to account for the observed subsidence. Three-dimensional problem domains of the FEMs include a thermoelastic PFD embedded in an elastic substrate. The thickness of the PFD is initially determined from the difference between post- and pre-eruption digital elevation models (DEMs). The initial excess temperature of the PFD at the time of deposition, 640 °C, is estimated from FEM predictions and an InSAR image via standard least-squares inverse methods. Although the FEM predicts the major features of the observed transient deformation, systematic prediction errors (RMSE = 2.2 cm) are most likely associated with errors in the a priori PFD thickness distribution estimated from the DEM differences. We combine an InSAR image, FEMs, and an adaptive mesh algorithm to iteratively optimize the geometry of the PFD with respect to a minimized misfit between the predicted thermoelastic deformation and observed deformation. Prediction errors from an FEM, which includes an optimized PFD geometry and the initial excess PFD temperature estimated from the least-squares analysis, are sub-millimeter (RMSE = 0.3 mm). The average thickness (9.3 m), maximum thickness (126 m), and volume (2.1 × 10⁷ m³) of the PFD, estimated using the adaptive mesh algorithm, are about twice as large as the respective estimations for the a priori PFD geometry. Sensitivity analyses suggest unrealistic PFD thickness distributions are required for initial excess PFD temperatures outside of the range 500–800 °C.
Computing return times or return periods with rare event algorithms
NASA Astrophysics Data System (ADS)
Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy
2018-04-01
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities, rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, saving several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
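A minimal sketch of the block-maximum return-time estimator described above, applied to a discretized Ornstein–Uhlenbeck process (the step size, block length, and threshold are illustrative choices):

```python
# Hedged sketch: estimate the return time of threshold a from block maxima,
# r(a) = -T_block / log(1 - q(a)), where q(a) is the fraction of trajectory
# blocks whose maximum exceeds a (valid for 0 < q < 1).
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.01, 200_000
noise = rng.normal(size=n) * np.sqrt(2.0 * dt)
x = np.zeros(n)
for i in range(1, n):                    # dx = -x dt + sqrt(2) dW
    x[i] = x[i - 1] * (1.0 - dt) + noise[i]

block = 2_000                            # 20 time units per block
maxima = x.reshape(-1, block).max(axis=1)

a = 2.5
q = np.mean(maxima > a)
print(f"estimated return time of level {a}: {-block * dt / np.log(1.0 - q):.1f}")
```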
An optical flow-based method for velocity field of fluid flow estimation
NASA Astrophysics Data System (ADS)
Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz
2017-06-01
The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on the computation of partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low, because an exact estimation of spatial derivatives is very difficult in the presence of the rapid intensity changes in PIV images caused by particles of small diameter. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation but, more importantly, allows for the evaluation of the derivatives at intermediate points between pixels. Numerical analysis proves that the method is able to estimate a separate vector for each particle with a 5 × 5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements far above 1 px.
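For reference, the Lucas-Kanade step at the core of the method solves a small windowed least-squares system for the velocity vector; a self-contained sketch with synthetic derivatives (not the paper's RBF-interpolated ones) follows.

```python
# Hedged sketch of a single Lucas-Kanade window solve: given intensity
# derivatives Ix, Iy, It, find v minimizing ||[Ix Iy] v + It||^2.
import numpy as np

def lucas_kanade_window(ix, iy, it):
    A = np.stack([ix.ravel(), iy.ravel()], axis=1)
    b = -it.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                                  # (vx, vy) in px/frame

# Synthetic check: a window translating by (1, 0) px/frame satisfies
# the brightness-constancy constraint It = -(vx*Ix + vy*Iy).
rng = np.random.default_rng(5)
ix, iy = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))
it = -(1.0 * ix + 0.0 * iy)
print(lucas_kanade_window(ix, iy, it))        # ~ [1.0, 0.0]
```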
Iterative Importance Sampling Algorithms for Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
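A toy sketch of the proposal-iteration idea on a known 2-D target (the actual subsurface-flow and combustion posteriors, the parallelization, and the t-proposals are beyond this sketch):

```python
# Hedged sketch: iterate the mean and covariance of a Gaussian proposal by
# refitting importance-weighted moments of samples drawn from it.
import numpy as np
from scipy.stats import multivariate_normal as mvn

def log_target(x):   # stand-in unnormalized posterior (illustrative)
    return mvn.logpdf(x, mean=[2.0, -1.0], cov=[[1.0, 0.6], [0.6, 1.0]])

rng = np.random.default_rng(6)
mean, cov = np.zeros(2), 4.0 * np.eye(2)              # deliberately crude start
for _ in range(10):
    xs = rng.multivariate_normal(mean, cov, size=2000)
    logw = log_target(xs) - mvn.logpdf(xs, mean, cov)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mean = w @ xs                                     # weighted sample mean
    cov = (xs - mean).T @ ((xs - mean) * w[:, None])  # weighted covariance

print(np.round(mean, 2))                              # ~ [ 2. -1.]
```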
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
NASA Technical Reports Server (NTRS)
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods, which combine error estimation techniques for approximating the local numerical error with automatic mesh refinement or unrefinement so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
Cerebral palsy characterization by estimating ocular motion
NASA Astrophysics Data System (ADS)
González, Jully; Atehortúa, Angélica; Moncayo, Ricardo; Romero, Eduardo
2017-11-01
Cerebral palsy (CP) is a large group of motion and posture disorders caused during fetal or infant brain development. Sensory impairment is commonly found in children with CP; between 40 and 75 percent present some form of vision problem or disability. An automatic characterization of cerebral palsy is herein presented by estimating the ocular motion during a gaze-pursuit task. Specifically, after automatically detecting the eye location, an optical flow algorithm tracks the eye motion following a pre-established visual assignment. Subsequently, the optical flow trajectories are characterized in the velocity-acceleration phase plane. Differences are quantified in a small set of patients between four and ten years of age.
Application of Satellite-Derived Atmospheric Motion Vectors for Estimating Mesoscale Flows.
NASA Astrophysics Data System (ADS)
Bedka, Kristopher M.; Mecikalski, John R.
2005-11-01
This study demonstrates methods to obtain high-density, satellite-derived atmospheric motion vectors (AMV) that contain both synoptic-scale and mesoscale flow components associated with and induced by cumuliform clouds through adjustments made to the University of Wisconsin—Madison Cooperative Institute for Meteorological Satellite Studies (UW-CIMSS) AMV processing algorithm. Operational AMV processing is geared toward the identification of synoptic-scale motions in geostrophic balance, which are useful in data assimilation applications. AMVs identified in the vicinity of deep convection are often rejected by quality-control checks used in the production of operational AMV datasets. Few users of these data have considered the use of AMVs with ageostrophic flow components, which often fail checks that assure both spatial coherence between neighboring AMVs and a strong correlation to an NWP-model first-guess wind field. The UW-CIMSS algorithm identifies coherent cloud and water vapor features (i.e., targets) that can be tracked within a sequence of geostationary visible (VIS) and infrared (IR) imagery. AMVs are derived through the combined use of satellite feature tracking and an NWP-model first guess. Reducing the impact of the NWP-model first guess on the final AMV field, in addition to adjusting the target selection and vector-editing schemes, is found to result in greater than a 20-fold increase in the number of AMVs obtained from the UW-CIMSS algorithm for one convective storm case examined here. Over a three-image sequence of Geostationary Operational Environmental Satellite (GOES)-12 VIS and IR data, 3516 AMVs are obtained, most of which contain flow components that deviate considerably from geostrophy. In comparison, 152 AMVs are derived when a tighter NWP-model constraint and no targeting adjustments were imposed, similar to settings used with operational AMV production algorithms. A detailed analysis reveals that many of these 3516 vectors contain low-level (100-70 kPa) convergent and midlevel (70-40 kPa) to upper-level (40-10 kPa) divergent motion components consistent with localized mesoscale flow patterns. The applicability of AMVs for estimating cloud-top cooling rates at the 1-km pixel scale is demonstrated with excellent correspondence to rates identified by a human expert.
An Exact Dual Adjoint Solution Method for Turbulent Flows on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Lu, James; Park, Michael A.; Darmofal, David L.
2003-01-01
An algorithm for solving the discrete adjoint system based on an unstructured-grid discretization of the Navier-Stokes equations is presented. The method is constructed such that an adjoint solution exactly dual to a direct differentiation approach is recovered at each time step, yielding a convergence rate which is asymptotically equivalent to that of the primal system. The new approach is implemented within a three-dimensional unstructured-grid framework and results are presented for inviscid, laminar, and turbulent flows. Improvements to the baseline solution algorithm, such as line-implicit relaxation and a tight coupling of the turbulence model, are also presented. By storing nearest-neighbor terms in the residual computation, the dual scheme is computationally efficient, while requiring twice the memory of the flow solution. The scheme is expected to have a broad impact on computational problems related to design optimization as well as error estimation and grid adaptation efforts.
A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows
NASA Astrophysics Data System (ADS)
Cardwell, Nicholas D.; Vlachos, Pavlos P.; Thole, Karen A.
2011-10-01
Multiphase flows (MPFs) offer a rich area of fundamental study with many practical applications. Examples of such flows range from the ingestion of foreign particulates in gas turbines to transport of particles within the human body. Experimental investigation of MPFs, however, is challenging, and requires techniques that simultaneously resolve both the carrier and discrete phases present in the flowfield. This paper presents a new multi-parametric particle-pairing algorithm for particle tracking velocimetry (MP3-PTV) in MPFs. MP3-PTV improves upon previous particle tracking algorithms by employing a novel variable pair-matching algorithm which utilizes displacement preconditioning in combination with estimated particle size and intensity to more effectively and accurately match particle pairs between successive images. To improve the method's efficiency, a new particle identification and segmentation routine was also developed. Validation of the new method was initially performed on two artificial data sets: a traditional single-phase flow published by the Visualization Society of Japan (VSJ) and an in-house generated MPF data set having a bi-modal distribution of particles diameters. Metrics of the measurement yield, reliability and overall tracking efficiency were used for method comparison. On the VSJ data set, the newly presented segmentation routine delivered a twofold improvement in identifying particles when compared to other published methods. For the simulated MPF data set, measurement efficiency of the carrier phases improved from 9% to 41% for MP3-PTV as compared to a traditional hybrid PTV. When employed on experimental data of a gas-solid flow, the MP3-PTV effectively identified the two particle populations and reported a vector efficiency and velocity measurement error comparable to measurements for the single-phase flow images. Simultaneous measurement of the dispersed particle and the carrier flowfield velocities allowed for the calculation of instantaneous particle slip velocities, illustrating the algorithm's strength to robustly and accurately resolve polydispersed MPFs.
Wang, Yu; Koenig, Steven C; Slaughter, Mark S; Giridharan, Guruprasad A
2015-01-01
The risk for left ventricular (LV) suction during left ventricular assist devices (LVAD) support has been a clinical concern. Current development efforts suggest LVAD suction prevention and physiologic control algorithms may require chronic implantation of pressure or flow sensors, which can be unreliable because of baseline drift and short lifespan. To overcome this limitation, we designed a sensorless suction prevention and physiologic control (eSPPC) algorithm that only requires LVAD intrinsic parameters (pump speed and power). Two gain-scheduled, proportional-integral controllers maintain a differential pump speed (ΔRPM) above a user-defined threshold to prevent LV suction while maintaining an average reference differential pressure (ΔP) between the LV and aorta. ΔRPM is calculated from noisy pump speed measurements that are low-pass filtered, and ΔP is estimated using an extended Kalman filter. Efficacy and robustness of the eSPPC algorithm were evaluated in silico during simulated rest and exercise test conditions for 1) excessive ΔP setpoint (ES); 2) rapid eightfold increase in pulmonary vascular resistance (PVR); and 3) ES and PVR. Simulated hemodynamic waveforms (LV pressure and volume; aortic pressure and flow) using only intrinsic pump parameters showed the feasibility of our proposed eSPPC algorithm in preventing LV suction for all test conditions.
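A conceptual sketch of the described control logic follows. It is a toy stand-in rather than the validated eSPPC controller: the two gain-scheduled PI controllers are collapsed into one switched PI loop, the Kalman-filter ΔP estimator is omitted, and all gains, thresholds, and units are hypothetical.

```python
def esppc_step(rpm, dP_est, dRPM_est, integ,
               dP_ref=70.0, dRPM_min=150.0, kp=2.0, ki=0.5, dt=0.01):
    """One control update: returns (new speed command in rpm, integrator state)."""
    if dRPM_est < dRPM_min:            # suction-prevention branch: restore pulsatility
        err = dRPM_est - dRPM_min      # negative error unloads the pump
    else:                              # physiologic branch: track the reference dP
        err = dP_ref - dP_est
    integ += err * dt                  # PI integrator (anti-windup omitted for brevity)
    return rpm + kp*err + ki*integ, integ

rpm, integ = 9000.0, 0.0
for _ in range(5):                     # toy loop with frozen estimates (no pump model)
    rpm, integ = esppc_step(rpm, dP_est=60.0, dRPM_est=200.0, integ=integ)
    print(round(rpm, 1))
```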
Sensory prediction on a whiskered robot: a tactile analogy to "optical flow".
Schroeder, Christopher L; Hartmann, Mitra J Z
2012-01-01
When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the "optical flow" equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that "flows" over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip.
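The two predictors can be illustrated on a synthetic surface. In the sketch below (a toy reduction of the paper's description, with hypothetical spacing and surface shape), the slope-based predictor extrapolates the last two radial contact distances linearly, while the curvature-based predictor fits a quadratic through the last three.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # whisker base positions (unit spacing)
r = 0.2*x**2 - 0.5*x + 5.0                     # radial contact distances on the surface

x_next = x[-1] + 1.0                           # contact after one spacing of translation
pred_slope = np.polyval(np.polyfit(x[-2:], r[-2:], 1), x_next)  # slope-based predictor
pred_curv = np.polyval(np.polyfit(x[-3:], r[-3:], 2), x_next)   # curvature-based predictor
truth = 0.2*x_next**2 - 0.5*x_next + 5.0
print(pred_slope, pred_curv, truth)            # the quadratic predictor is exact here
```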
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael
Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. Then the structure learning algorithm is extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.
Liu, Peiying; Lu, Hanzhang; Filbey, Francesca M.; Pinkham, Amy E.; McAdams, Carrie J.; Adinoff, Bryon; Daliparthi, Vamsi; Cao, Yan
2014-01-01
Phase-Contrast MRI (PC-MRI) is a noninvasive technique to measure blood flow. In particular, global but highly quantitative cerebral blood flow (CBF) measurement using PC-MRI complements several other CBF mapping methods such as arterial spin labeling and dynamic susceptibility contrast MRI by providing a calibration factor. The ability to estimate blood supply in physiological units also lays a foundation for assessment of brain metabolic rate. However, a major obstacle to wider application of this method is that the slice positioning of the scan, ideally placed perpendicular to the feeding arteries, requires considerable expertise and can present a burden to the operator. In the present work, we proposed that the majority of PC-MRI scans can be positioned using an automatic algorithm, leaving only a small fraction of arteries requiring manual positioning. We implemented and evaluated an algorithm for this purpose based on feature extraction of a survey angiogram, which is of minimal operator dependence. In a comparative test-retest study with 7 subjects, the blood flow measurement using this algorithm showed an inter-session coefficient of variation (CoV) of . The Bland-Altman method showed that the automatic method differs from the manual method by between and , for of the CBF measurements. This is comparable to the variance in CBF measurement using manually positioned PC-MRI alone. In a further application of this algorithm to 157 consecutive subjects from typical clinical cohorts, the algorithm provided successful positioning in 89.7% of the arteries. In 79.6% of the subjects, all four arteries could be planned using the algorithm. Chi-square tests of independence showed that the success rate was not dependent on age or gender, but the patients showed a trend toward a lower success rate (p = 0.14) compared to healthy controls. In conclusion, this automatic positioning algorithm could improve the application of PC-MRI in CBF quantification. PMID:24787742
NASA Astrophysics Data System (ADS)
Durand, Michael; Neal, Jeff; Rodriguez, Ernesto
2013-09-01
The Surface Water and Ocean Topography (SWOT) satellite is a swath-mapping radar interferometer that will provide water elevations over inland water bodies and over the ocean. Here we present a Bayesian algorithm that calculates a best estimate of river bathymetry, roughness coefficient, and discharge based on measurements of river height and slope. On the River Severn, UK, we use gage estimates of height and slope during an in-bank flow event to illustrate algorithm functionality. We validate our estimates of river bathymetry and discharge using in situ measurements. We first assumed that the lateral inflows from smaller tributaries were known. In this case, an accurate inversion for bathymetry and roughness was obtained, giving a discharge RMSE of 10%. We then allowed the lateral inflows to be unknown; accuracy in the bathymetry estimates dropped in this case, giving a discharge RMSE of 36%. Finally, we explored the case where bathymetry in one reach was known; in this case, discharge RMSE was 15.6%.
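A heavily simplified sketch of the inversion idea follows. This is not the SWOT algorithm itself: for a wide rectangular reach, Manning's equation gives Q = (1/n) W d^{5/3} S^{1/2} with depth d = h - zb, and a random-walk Metropolis sampler recovers bed elevation zb and roughness n. For illustration the likelihood uses a short synthetic discharge record, standing in for the mass-conservation constraints used in the paper; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
W, zb_true, n_true = 50.0, 2.0, 0.035                   # width (m), bed elev., roughness
h = np.array([4.0, 4.5, 5.0, 5.5])                      # observed water-surface heights (m)
S = np.array([1.0e-4, 1.2e-4, 1.4e-4, 1.6e-4])          # observed water-surface slopes

def manning(zb, n):                                     # wide-channel Manning forward model
    d = np.maximum(h - zb, 0.01)
    return (1.0/n) * W * d**(5.0/3.0) * np.sqrt(S)

Q_obs = manning(zb_true, n_true) * (1 + 0.05*rng.normal(size=h.size))  # synthetic record

def log_post(zb, n, sigma=0.05):
    if not (0.0 < zb < h.min() and 0.005 < n < 0.15):   # flat priors with hard bounds
        return -np.inf
    resid = (manning(zb, n) - Q_obs) / (sigma * Q_obs)
    return -0.5 * np.sum(resid**2)

zb, n, samples = 1.0, 0.05, []
lp = log_post(zb, n)
for k in range(20000):                                  # random-walk Metropolis
    zb_p, n_p = zb + 0.05*rng.normal(), n + 0.002*rng.normal()
    lp_p = log_post(zb_p, n_p)
    if np.log(rng.random()) < lp_p - lp:
        zb, n, lp = zb_p, n_p, lp_p
    samples.append((zb, n))
zb_s, n_s = np.array(samples[5000:]).T
print("posterior means: zb ~", round(zb_s.mean(), 2), " n ~", round(n_s.mean(), 4))
```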
NASA Astrophysics Data System (ADS)
Palatella, Luigi; Trevisan, Anna; Rambaldi, Sandro
2013-08-01
Valuable information for estimating the traffic flow is obtained with current GPS technology by monitoring position and velocity of vehicles. In this paper, we present a proof of concept study that shows how the traffic state can be estimated using only partial and noisy data by assimilating them in a dynamical model. Our approach is based on a data assimilation algorithm, developed by the authors for chaotic geophysical models, designed to be equivalent but computationally much less demanding than the traditional extended Kalman filter. Here we show that the algorithm is even more efficient if the system is not chaotic and demonstrate by numerical experiments that an accurate reconstruction of the complete traffic state can be obtained at a very low computational cost by monitoring only a small percentage of vehicles.
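For illustration only, the sketch below reconstructs a 1-D traffic-density field on a ring road from sparse probe observations using a plain linear Kalman filter; the paper's assimilation scheme is presented as an equivalent but computationally cheaper alternative to such filters, and the dynamics, noise levels, and observed fraction here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
F = np.roll(np.eye(n), 1, axis=1)                     # densities advect one cell per step
H = np.eye(n)[::5]                                    # observe every 5th cell (20% probes)
Q = 0.01*np.eye(n)                                    # assumed model-error covariance
R = 0.04*np.eye(H.shape[0])                           # assumed observation-noise covariance

x_true = rng.random(n)                                # unknown true density field
x, P = np.full(n, 0.5), np.eye(n)                     # filter state and covariance
for t in range(100):
    x_true = F @ x_true                               # truth advects around the ring
    z = H @ x_true + 0.2*rng.normal(size=H.shape[0])  # noisy partial observations
    x, P = F @ x, F @ P @ F.T + Q                     # forecast step
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(n) - K @ H) @ P  # analysis step
print("RMS reconstruction error:", np.sqrt(np.mean((x - x_true)**2)).round(3))
```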
Blood flow quantification using 1D CFD parameter identification
NASA Astrophysics Data System (ADS)
Brosig, Richard; Kowarschik, Markus; Maday, Peter; Katouzian, Amin; Demirci, Stefanie; Navab, Nassir
2014-03-01
Patient-specific measurements of cerebral blood flow provide valuable diagnostic information concerning cerebrovascular diseases rather than visually driven qualitative evaluation. In this paper, we present a quantitative method to estimate blood flow parameters with high temporal resolution from digital subtraction angiography (DSA) image sequences. Using a 3D DSA dataset and a 2D+t DSA sequence, the proposed algorithm employs a 1D Computational Fluid Dynamics (CFD) model for estimation of time-dependent flow values along a cerebral vessel, combined with an additional Advection Diffusion Equation (ADE) for contrast agent propagation. The CFD system, followed by the ADE, is solved with a finite volume approximation, which ensures the conservation of mass. Instead of defining a new imaging protocol to obtain relevant data, our cost function optimizes the bolus arrival time (BAT) of the contrast agent in 2D+t DSA sequences. The visual determination of BAT is common clinical practice and can easily be compared to values generated by a 1D-CFD simulation. Using this strategy, we ensure that the proposed method fits clinical practice and does not require any changes to the medical workflow. Synthetic experiments show that the recovered flow estimates match the ground truth values with less than 12% error in the mean flow rates.
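The contrast-transport building block can be sketched with a conservative finite-volume scheme. The code below is a minimal stand-in for the coupled 1D-CFD/ADE solver: it advects and diffuses a bolus along a vessel and records the bolus arrival time (BAT) as the first threshold crossing per cell; geometry, velocity, and thresholds are hypothetical.

```python
import numpy as np

nx, L, u, D = 200, 0.1, 0.05, 1e-6        # cells, length (m), velocity (m/s), diffusivity
dx = L / nx
dt = 0.4 * min(dx/u, dx*dx/(2*D))         # stability-limited time step
c = np.zeros(nx)
c[:5] = 1.0                               # initial contrast bolus at the inlet
bat = np.full(nx, np.nan)                 # bolus arrival time per cell

for step in range(4000):
    flux = u * c                          # first-order upwind face flux (u > 0)
    adv = (flux - np.roll(flux, 1)) / dx  # conservative flux difference
    dif = D * (np.roll(c, -1) - 2*c + np.roll(c, 1)) / dx**2
    c = c - dt*adv + dt*dif
    c[0], c[-1] = 0.0, c[-2]              # crude inlet/outlet boundary conditions
    newly = np.isnan(bat) & (c > 0.05)    # record first threshold crossing (BAT)
    bat[newly] = step * dt

print("BAT at mid-vessel (s):", round(bat[nx//2], 3),
      "~ pure-advection estimate:", (nx//2)*dx/u)
```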
Integrated Traffic Flow Management Decision Making
NASA Technical Reports Server (NTRS)
Grabbe, Shon R.; Sridhar, Banavar; Mukherjee, Avijit
2009-01-01
A generalized approach is proposed to support integrated traffic flow management decision making studies at both the U.S. national and regional levels. It can consider tradeoffs between alternative optimization and heuristic based models, strategic versus tactical flight controls, and system versus fleet preferences. Preliminary testing was accomplished by implementing thirteen unique traffic flow management models, which included all of the key components of the system, and conducting 85 six-hour fast-time simulation experiments. These experiments considered variations in the strategic planning look-ahead times, the replanning intervals, and the types of traffic flow management control strategies. Initial testing indicates that longer strategic planning look-ahead times and replanning intervals result in steadily decreasing levels of sector congestion for a fixed delay level. This applies when accurate estimates of the air traffic demand, airport capacities and airspace capacities are available. In general, the distribution of the delays amongst the users was found to be most equitable when scheduling flights using a heuristic scheduling algorithm, such as ration-by-distance. On the other hand, equity was the worst when using scheduling algorithms that took into account the number of seats aboard each flight. Though the scheduling algorithms were effective at alleviating sector congestion, the tactical rerouting algorithm was the primary control for avoiding en route weather hazards. Finally, the modeled levels of sector congestion, the number of weather incursions, and the total system delays were found to be in fair agreement with the values that were operationally observed on both good and bad weather days.
NASA Astrophysics Data System (ADS)
Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.
2015-12-01
The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ Application Programming Interface (API) for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.
Evaluation of Algorithms for a Miles-in-Trail Decision Support Tool
NASA Technical Reports Server (NTRS)
Bloem, Michael; Hattaway, David; Bambos, Nicholas
2012-01-01
Four machine learning algorithms were prototyped and evaluated for use in a proposed decision support tool that would assist air traffic managers as they set Miles-in-Trail restrictions. The tool would display probabilities that each possible Miles-in-Trail value should be used in a given situation. The algorithms were evaluated with an expected Miles-in-Trail cost that assumes traffic managers set restrictions based on the tool-suggested probabilities. Basic Support Vector Machine, random forest, and decision tree algorithms were evaluated, as was a softmax regression algorithm that was modified to explicitly reduce the expected Miles-in-Trail cost. The algorithms were evaluated with data from the summer of 2011 for air traffic flows bound to the Newark Liberty International Airport (EWR) over the ARD, PENNS, and SHAFF fixes. The algorithms were provided with 18 input features that describe the weather at EWR, the runway configuration at EWR, the scheduled traffic demand at EWR and the fixes, and other traffic management initiatives in place at EWR. Features describing other traffic management initiatives at EWR and the weather at EWR achieved relatively high information gain scores, indicating that they are the most useful for estimating Miles-in-Trail. In spite of a high variance or over-fitting problem, the decision tree algorithm achieved the lowest expected Miles-in-Trail costs when the algorithms were evaluated using 10-fold cross validation with the summer 2011 data for these air traffic flows.
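The cost-aware variant of softmax regression can be written down directly. The sketch below is illustrative, not the evaluated prototype: the training objective is the expected Miles-in-Trail cost sum_k p_k(x) C(k, y) rather than cross-entropy, optimized by gradient descent; the features, labels, and cost matrix are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
mit_values = np.array([0, 10, 20, 30])                 # candidate MIT restrictions
X = rng.normal(size=(500, 18))                         # 18 weather/demand features
y = np.digitize(X[:, 0], [-1.0, 0.0, 1.0])             # synthetic "best MIT" labels
C = np.abs(mit_values[:, None] - mit_values[None, :])  # cost of choosing k when y is best

W = np.zeros((18, 4))
for epoch in range(300):
    logits = X @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                  # suggested probabilities p_k(x)
    cost_rows = C[:, y].T                              # per-sample cost of each choice
    loss = np.mean(np.sum(P * cost_rows, axis=1))      # expected MIT cost
    G = P * (cost_rows - np.sum(P * cost_rows, axis=1, keepdims=True))  # dLoss/dlogits
    W -= 0.1 * X.T @ G / len(X)
print("expected MIT cost after training:", round(loss, 2))
```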
Direct process estimation from tomographic data using artificial neural systems
NASA Astrophysics Data System (ADS)
Mohamad-Saleh, Junita; Hoyle, Brian S.; Podd, Frank J.; Spink, D. M.
2001-07-01
The paper deals with the goal of component fraction estimation in multicomponent flows, a critical measurement in many processes. Electrical capacitance tomography (ECT) is a well-researched sensing technique for this task, due to its low cost, non-intrusiveness, and fast response. However, typical systems, which include practicable real-time reconstruction algorithms, give inaccurate results, and existing approaches to direct component fraction measurement are flow-regime dependent. In the investigation described, an artificial neural network approach is used to directly estimate the component fractions in gas-oil, gas-water, and gas-oil-water flows from ECT measurements. A 2D finite-element electric field model of a 12-electrode ECT sensor is used to simulate ECT measurements of various flow conditions. The raw measurements are reduced to a mutually independent set using principal components analysis and used with their corresponding component fractions to train multilayer feed-forward neural networks (MLFFNNs). The trained MLFFNNs are tested with patterns consisting of unlearned ECT simulated and plant measurements. Results included in the paper have a mean absolute error of less than 1% for the estimation of various multicomponent fractions of the permittivity distribution. They are also shown to give improved component fraction estimation compared to a well known direct ECT method.
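The measurement-reduction and regression pipeline can be mimicked with standard tools. The sketch below uses random placeholders for the simulated capacitance data and is not the authors' finite-element study: PCA compresses the 66 independent electrode-pair measurements of a 12-electrode sensor, and a feed-forward network maps the reduced vector to component fractions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n_pairs = 66                                    # independent pairs of a 12-electrode sensor
X = rng.random((2000, n_pairs))                 # placeholder for simulated ECT measurements
frac_gas = X[:, :22].mean(axis=1)               # synthetic component-fraction target
y = np.c_[frac_gas, 1.0 - frac_gas]             # two-component fractions summing to one

model = make_pipeline(PCA(n_components=10),     # reduce to a mutually independent set
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                                   random_state=0))
model.fit(X[:1500], y[:1500])
mae = np.abs(model.predict(X[1500:]) - y[1500:]).mean()
print("mean absolute error on unseen patterns:", round(float(mae), 4))
```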
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
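The dehazing chain reduces to a few array operations once the disparity is available. The condensed sketch below is not the iterative refinement itself: disparity gives depth d = fB/disp, the standard haze model gives transmission t = exp(-beta*d), and the scene is recovered as J = (I - A)/max(t, t0) + A, with all constants hypothetical.

```python
import numpy as np

f, B = 700.0, 0.1                                # focal length (px) and baseline (m)
beta, A, t0 = 0.8, 0.9, 0.1                      # scattering, airlight, transmission floor
rng = np.random.default_rng(6)
disparity = rng.uniform(5, 50, size=(120, 160))  # stand-in for flow-derived disparity
I = rng.random((120, 160))                       # foggy intensity image (one channel)

depth = f * B / np.maximum(disparity, 1e-3)      # depth from stereo geometry (m)
t = np.exp(-beta * depth)                        # transmission map
J = (I - A) / np.maximum(t, t0) + A              # defogged radiance estimate
print("transmission range:", t.min().round(3), "-", t.max().round(3))
```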
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Tsai, Wen-Ping; Chang, Li-Chiu
2016-04-01
Water resources development is very challenging in Taiwan due to its diverse geographic environment and climatic conditions. To pursue sustainable water resources development, rationality and integrity are essential for water resources planning. River water quality and flow regimes are closely related to each other and affect river ecosystems simultaneously. This study aims to explore the complex impacts of water quality and flow regimes on the fish community in order to comprehend the status of the eco-hydrological system in the Danshui River of northern Taiwan. To devise an effective and comprehensive strategy for sustainable water resources management, this study first models fish diversity through implementing a hybrid artificial neural network (ANN) based on long-term observational heterogeneity data of water quality, stream flow and fish species in the river. Then we use stream flow to estimate the loss of dissolved oxygen based on back-propagation neural networks (BPNNs). Finally, the non-dominated sorting genetic algorithm II (NSGA-II) is established for river flow management over the Shihmen Reservoir, which is the main reservoir in this study area. In addition to satisfying the water demands of human beings and ecosystems, we also consider water quality for river flow management. The ecosystem requirement takes the form of maximizing fish diversity, which can be estimated by the hybrid ANN. The human requirement is to provide a higher satisfaction degree of water supply while the water quality requirement is to reduce the loss of dissolved oxygen in the river among flow stations. The results demonstrate that the proposed methodology can offer diversified alternative strategies for reservoir operation and improve reservoir operation strategies for producing downstream flows that could better meet both human and ecosystem needs as well as maintain river water quality. Keywords: Artificial intelligence (AI), Artificial neural networks (ANNs), Non-dominated sorting genetic algorithm II (NSGA-II), Sustainable water resources management, Flow regime, River ecosystem.
Estimation of anomaly location and size using electrical impedance tomography.
Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu
2003-01-01
We developed a new algorithm that estimates locations and sizes of anomalies in electrically conducting medium based on electrical impedance tomography (EIT) technique. When only the boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we focus our attention on the estimation of locations and sizes of anomalies with different conductivity values compared with the background tissues. We showed the performance of the algorithm from experimental results using a 32-channel EIT system and saline phantom. With about 1.73% measurement error in boundary current-voltage data, we found that the minimal size (area) of the detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither any forward solver nor time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.
Lee, Kwang Jin; Lee, Boreom
2016-01-01
Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence Doppler ultrasound signals. Moreover, CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). Moreover, we used real data from PhysioNet fetal ECG databases for the evaluation of the algorithm performance. The R-peak detection rate is calculated to evaluate the performance of our algorithm. Our approach could not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR. PMID:27376296
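One building block, 1-D total variation denoising, can be sketched compactly. The code below is illustrative rather than the full sequential FHR pipeline: it minimizes 0.5*||x - y||^2 + lambda*TV(x) by iteratively reweighted least squares on a spiky synthetic signal; the signal, lambda, and smoothing constant are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 400)
clean = (np.sin(2*np.pi*3*t) > 0.95).astype(float)   # spiky, R-peak-like signal
y = clean + 0.2*rng.normal(size=t.size)              # noisy observation

n, lam = t.size, 0.15
D = np.diff(np.eye(n), axis=0)                       # first-difference operator
x = y.copy()
for it in range(30):                                 # majorize-minimize (IRLS) on TV
    w = 1.0 / np.sqrt((D @ x)**2 + 1e-6)             # reweighting of each jump
    x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)

print("MAE before/after:", np.abs(y - clean).mean().round(3),
      np.abs(x - clean).mean().round(3))
```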
NASA Astrophysics Data System (ADS)
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
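The iterative shrinkage/thresholding framework underlying such deconvolution can be shown on a toy problem. In the sketch below the paper's system operator and AwTTV penalty are replaced by a plain l1-regularized least-squares instance, so only the generic ISTA update x <- soft(x - A^T(Ax - b)/L, lambda/L) is demonstrated.

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(60, 120))                   # toy system operator
x_true = np.zeros(120)
x_true[rng.choice(120, 8, replace=False)] = rng.normal(size=8)   # sparse truth
b = A @ x_true + 0.01*rng.normal(size=60)        # noisy measurements

L = np.linalg.norm(A, 2)**2                      # Lipschitz constant of the gradient
lam, x = 0.1, np.zeros(120)
for k in range(500):
    z = x - A.T @ (A @ x - b) / L                # gradient step on the fidelity term
    x = np.sign(z) * np.maximum(np.abs(z) - lam/L, 0.0)   # shrinkage/thresholding step
print("recovered support:", np.flatnonzero(np.abs(x) > 0.05))
```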
Enhancement of automated blood flow estimates (ENABLE) from arterial spin-labeled MRI.
Shirzadi, Zahra; Stefanovic, Bojana; Chappell, Michael A; Ramirez, Joel; Schwindt, Graeme; Masellis, Mario; Black, Sandra E; MacIntosh, Bradley J
2018-03-01
To validate ENhancement of Automated Blood fLow Estimates (ENABLE), a multiparametric automated algorithm that identifies useful and poor arterial spin-labeled (ASL) difference images in multiple postlabeling delay (PLD) acquisitions, and thereby to improve clinical ASL. ENABLE is a sort/check algorithm that uses a linear combination of ASL quality features. ENABLE uses simulations to determine quality weighting factors based on an unconstrained nonlinear optimization. We acquired a set of 6-PLD ASL images with 1.5T or 3.0T systems among 98 healthy elderly and adults with mild cognitive impairment or dementia. We contrasted signal-to-noise ratio (SNR) of cerebral blood flow (CBF) images obtained with ENABLE vs. conventional ASL analysis. In a subgroup, we validated our CBF estimates with single-photon emission computed tomography (SPECT) CBF images. ENABLE produced significantly increased SNR compared to a conventional ASL analysis (Wilcoxon signed-rank test, P < 0.0001). We also found the similarity between ASL and SPECT was greater when using ENABLE vs. conventional ASL analysis (n = 51, Wilcoxon signed-rank test, P < 0.0001) and this similarity was strongly related to ASL SNR (t = 24, P < 0.0001). These findings suggest that ENABLE improves CBF image quality from multiple-PLD ASL in dementia cohorts at either 1.5T or 3.0T, achieved by multiparametric quality features that guided postprocessing of dementia ASL. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:647-655. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Technical Reports Server (NTRS)
Chen, C. P.; Wu, S. T.
1992-01-01
The objective of this investigation has been to develop an algorithm (or algorithms) to improve the accuracy and efficiency of computational fluid dynamics (CFD) models used to study the fundamental physics of combustion chamber flows, which is ultimately necessary for the design of propulsion systems such as the SSME and STME. During this three-year study (May 19, 1989 - May 18, 1992), a unique algorithm was developed for all-speed flows. The newly developed algorithm combines two pressure-based algorithms, PISOC and FICE: PISOC is a non-iterative scheme with characteristic advantages for low- and high-speed flows, while FICE is an iterative scheme whose modified form (MFICE) has demonstrated its efficiency and accuracy in computing flows in the transonic region. The new algorithm, born from the combination of these two, has general application to both time-accurate and steady-state flows, and was tested extensively for various flow conditions, such as turbulent flows, chemically reacting flows, and multiphase flows.
A hybrid method of estimating pulsating flow parameters in the space-time domain
NASA Astrophysics Data System (ADS)
Pałczyński, Tomasz
2017-05-01
This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes and piston compressors. The procedure is based on the method of characteristics, and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm elaborated in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain, and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.
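The method-of-characteristics core can be sketched in textbook form. The code below is a generic scheme, not the paper's hybrid estimator: along the C+ and C- characteristics the invariants p ± Z·u are carried to each new space-time node, with an assumed pulsating inlet and a closed outlet.

```python
import numpy as np

nx, c, rho = 101, 340.0, 1.2                 # nodes, wave speed (m/s), density (kg/m3)
dx = 1.0 / (nx - 1)                          # 1-m pipe
dt = dx / c                                  # characteristics align with the grid
Z = rho * c                                  # characteristic impedance
p, u = np.zeros(nx), np.zeros(nx)

for step in range(400):
    t = (step + 1) * dt
    p_new, u_new = p.copy(), u.copy()
    # interior nodes: intersect C+ (from i-1) and C- (from i+1); p +/- Z*u invariant
    p_new[1:-1] = 0.5*(p[:-2] + p[2:]) + 0.5*Z*(u[:-2] - u[2:])
    u_new[1:-1] = 0.5*(u[:-2] + u[2:]) + 0.5*(p[:-2] - p[2:])/Z
    u_new[0] = 0.02*np.sin(2*np.pi*100.0*t)  # prescribed 100-Hz inlet pulsation
    p_new[0] = p[1] - Z*u[1] + Z*u_new[0]    # C- invariant arriving from node 1
    u_new[-1] = 0.0                          # closed outlet end
    p_new[-1] = p[-2] + Z*u[-2]              # C+ invariant arriving from node nx-2
    p, u = p_new, u_new
print("pressure at mid-pipe (Pa):", round(p[nx//2], 2))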
Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara
2012-08-01
Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
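A simplified version of the spillover-corrected one-compartment fit follows; it is a synthetic demonstration, not the validated clinical algorithm. Tissue activity is modeled as K1 times the convolution of the input function with exp(-k2*t), a blood-to-myocardium spillover fraction fbv contaminates the measured curve, and scipy recovers the three parameters; the input function, noise level, and units are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 300, 151)                      # frame mid-times (s)
dt = t[1] - t[0]
Ca = 50.0 * t * np.exp(-t / 40.0)                 # synthetic blood input function

def model(t, K1, k2, fbv):
    tissue = K1 * dt * np.convolve(Ca, np.exp(-k2 * t))[:t.size]  # K1 * (Ca conv exp(-k2 t))
    return (1 - fbv) * tissue + fbv * Ca          # add blood-to-myocardium spillover

rng = np.random.default_rng(9)
meas = model(t, 0.8, 0.15, 0.3) + 5.0*rng.normal(size=t.size)     # noisy myocardial TAC
(K1, k2, fbv), _ = curve_fit(model, t, meas, p0=(0.5, 0.1, 0.2),
                             bounds=([0, 0, 0], [5, 2, 1]))
print("K1, k2, spillover:", round(K1, 3), round(k2, 3), round(fbv, 3))
```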
NASA Astrophysics Data System (ADS)
Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz
2010-09-01
This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.
Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff
2014-01-01
Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unknown even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.
An Anisotropic A posteriori Error Estimator for CFD
NASA Astrophysics Data System (ADS)
Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando
In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina) giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case, shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed amount of elements.
NASA Astrophysics Data System (ADS)
Zhang, Mingkai; Liu, Yanchen; Cheng, Xun; Zhu, David Z.; Shi, Hanchang; Yuan, Zhiguo
2018-03-01
Quantifying rainfall-derived inflow and infiltration (RDII) in a sanitary sewer is difficult when RDII and overflow occur simultaneously. This study proposes a novel conductivity-based method for estimating RDII. The method separately decomposes rainfall-derived inflow (RDI) and rainfall-induced infiltration (RII) on the basis of conductivity data. Fast Fourier transform was adopted to analyze variations in the flow and water quality during dry weather. Nonlinear curve fitting based on the least squares algorithm was used to optimize parameters in the proposed RDII model. The method was successfully applied to real-life case studies, in which inflow and infiltration were successfully estimated for three typical rainfall events with total rainfall volumes of 6.25 mm (light), 28.15 mm (medium), and 178 mm (heavy). Uncertainties of model parameters were estimated using the generalized likelihood uncertainty estimation (GLUE) method and were found to be acceptable. Compared with traditional flow-based methods, the proposed approach exhibits distinct advantages in estimating RDII and overflow, particularly when the two processes happen simultaneously.
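The decomposition idea can be stylized as follows; this is not the conductivity method itself. The dry-weather pattern is represented by a few harmonics (in practice fit on dry days, e.g., via FFT), and the wet-weather excess is fit by least squares as a fast inflow (RDI) response plus a slow infiltration (RII) response to rainfall, with all shapes, time constants, and units illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)
t = np.arange(0, 96, 0.25)                                # hours, 15-min steps
diurnal = 10 + 3*np.sin(2*np.pi*t/24) + np.sin(4*np.pi*t/24)   # dry-weather pattern
rain = np.zeros_like(t)
rain[(t > 30) & (t < 33)] = 4.0                           # a 3-h storm

def unit_response(tau):                                   # normalized exponential response
    h = np.exp(-t/tau)
    return h / h.sum()

def rdii(t_axis, a_rdi, tau_rdi, a_rii, tau_rii):         # fast RDI + slow RII components
    q_rdi = a_rdi * np.convolve(rain, unit_response(tau_rdi))[:t.size]
    q_rii = a_rii * np.convolve(rain, unit_response(tau_rii))[:t.size]
    return q_rdi + q_rii

flow = diurnal + rdii(t, 6.0, 1.5, 2.0, 20.0) + 0.2*rng.normal(size=t.size)
excess = flow - diurnal                                   # subtract dry-weather model
p, _ = curve_fit(rdii, t, excess, p0=(3, 1, 1, 10), bounds=(0, [50, 10, 50, 100]))
print("RDI amp/tau, RII amp/tau:", np.round(p, 2))        # near (6.0, 1.5, 2.0, 20.0)
```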
Gao, Hang; Bijnens, Nathalie; Coisne, Damien; Lugiez, Mathieu; Rutten, Marcel; D'hooge, Jan
2015-01-01
Despite the availability of multiple ultrasound approaches to left ventricular (LV) flow characterization in two dimensions, this technique remains in its infancy and further development seems warranted. This article describes a new methodology for tracking the 2-D LV flow field based on ultrasound data. To this end, a standard speckle tracking algorithm was modified by using a dynamic kernel embedding Navier-Stokes-based regularization in an iterative manner. The performance of the proposed approach was first quantified in synthetic ultrasound data based on a computational fluid dynamics model of LV flow. Next, an experimental flow phantom setup mimicking the normal human heart was used for experimental validation by employing simultaneous optical particle image velocimetry as a standard reference technique. Finally, the applicability of the approach was tested in a clinical setting. On the basis of the simulated data, pointwise evaluation of the estimated velocity vectors correlated well (mean r = 0.84) with the computational fluid dynamics measurement. During the filling period of the left ventricle, the properties of the main vortex obtained from the proposed method were also measured, and their correlations with the reference measurement were also calculated (radius, r = 0.96; circulation, r = 0.85; weighted center, r = 0.81). In vitro results at 60 bpm during one cardiac cycle confirmed that the algorithm properly measures typical characteristics of the vortex (radius, r = 0.60; circulation, r = 0.81; weighted center, r = 0.92). Preliminary qualitative results on clinical data revealed physiologic flow fields. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Computation of viscous flows over airfoils, including separation, with a coupling approach
NASA Technical Reports Server (NTRS)
Leballeur, J. C.
1983-01-01
Viscous incompressible flows over single or multiple airfoils, with or without separation, were computed using an inviscid flow calculation, with modified boundary conditions, and by a method providing calculation and coupling for boundary layers and wakes, within conditions of strong viscous interaction. The inviscid flow is calculated with a method of singularities, the numerics of which were improved by using both source and vortex distributions over profiles, associated with regularity conditions for the fictitious flows inside of the airfoils. The viscous calculation estimates the difference between viscous flow and inviscid interacting flow, with a direct or inverse integral method, laminar or turbulent, with or without reverse flow. The numerical method for coupling determines iteratively the boundary conditions for the inviscid flow. For attached viscous layers regions, an underrelaxation is locally calculated to insure stability. For separated or separating regions, a special semi-inverse algorithm is used. Comparisons with experiments are presented.
NASA Technical Reports Server (NTRS)
Schaefer, Jacob; Brown, Nelson A.
2013-01-01
A peak-seeking control approach for real-time trim configuration optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control approach is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are controlled for optimization of fuel flow. This paper presents the design and integration of this peak-seeking controller on a modified NASA F/A-18 airplane with research flight control computers. A research flight was performed to collect data to build a realistic model of the performance function and characterize measurement noise. This model was then implemented into a nonlinear six-degree-of-freedom F/A-18 simulation along with the peak-seeking control algorithm. With the goal of eventual flight tests, the algorithm was first evaluated in the improved simulation environment. Results from the simulation predict good convergence on minimum fuel flow with a 2.5-percent reduction in fuel flow relative to the baseline trim of the aircraft.
Estimating zero-g flow rates in open channels having capillary pumping vanes
NASA Astrophysics Data System (ADS)
Srinivasan, Radhakrishnan
2003-02-01
In vane-type surface tension propellant management devices (PMD) commonly used in satellite fuel tanks, the propellant is transported along guiding vanes from a reservoir at the inlet of the device to a sump at the outlet from where it is pumped to the satellite engine. The pressure gradient driving this free-surface flow under zero-gravity (zero-g) conditions is generated by surface tension and is related to the differential curvatures of the propellant-gas interface at the inlet and outlet of the PMD. A new semi-analytical procedure is prescribed for accurately calculating the extremely small fuel flow rates under reasonably idealized conditions. Convergence of the algorithm is demonstrated by detailed numerical calculations. Owing to the substantial cost and the technical hurdles involved in accurately estimating these minuscule flow rates by either direct numerical simulation or by experimental methods which simulate zero-g conditions in the lab, it is expected that the proposed method will be an indispensable tool in the design and operation of satellite fuel tanks.
Uncertainty in simulated groundwater-quality trends in transient flow
Starn, J. Jeffrey; Bagtzoglou, Amvrossios; Robbins, Gary A.
2013-01-01
In numerical modeling of groundwater flow, the result of a given solution method is affected by the way in which transient flow conditions and geologic heterogeneity are simulated. An algorithm is demonstrated that simulates breakthrough curves at a pumping well by convolution-based particle tracking in a transient flow field for several synthetic basin-scale aquifers. In comparison to grid-based (Eulerian) methods, the particle (Lagrangian) method is better able to capture multimodal breakthrough caused by changes in pumping at the well, although the particle method may be apparently nonlinear because of the discrete nature of particle arrival times. Trial-and-error choice of number of particles and release times can perhaps overcome the apparent nonlinearity. Heterogeneous aquifer properties tend to smooth the effects of transient pumping, making it difficult to separate their effects in parameter estimation. Porosity, a new parameter added for advective transport, can be accurately estimated using both grid-based and particle-based methods, but predictions can be highly uncertain, even in the simple, nonreactive case.
NASA Astrophysics Data System (ADS)
Qin, Cheng-Zhi; Zhan, Lijun
2012-06-01
As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
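A sequential reference version of MFD flow accumulation clarifies what the GPU strategy parallelizes. In the sketch below (a generic MFD scheme, not the paper's CUDA code), cells are processed from high to low elevation and each passes its accumulated flow to all lower neighbors with slope-proportional weights; the DEM is a random placeholder assumed depression-free after preprocessing.

```python
import numpy as np

rng = np.random.default_rng(11)
dem = rng.random((40, 40)) + np.linspace(2, 0, 40)[:, None]  # tilted noisy surface
acc = np.ones_like(dem)                                      # each cell contributes itself
nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]

for idx in np.argsort(dem, axis=None)[::-1]:      # visit cells from high to low elevation
    r, c = divmod(int(idx), dem.shape[1])
    drops = []
    for dr, dc in nbrs:
        rr, cc = r + dr, c + dc
        if 0 <= rr < 40 and 0 <= cc < 40 and dem[rr, cc] < dem[r, c]:
            drops.append((rr, cc, (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)))
    total = sum(s for _, _, s in drops)
    for rr, cc, s in drops:                       # slope-weighted MFD partition of flow
        acc[rr, cc] += acc[r, c] * s / total
print("maximum flow accumulation:", round(float(acc.max()), 1))
```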
Iyer, Swathi; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel; Fair, Damien
2013-01-01
Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve the direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow up on recent work by Smith et al. (2011), and apply a Bayesian approach called the PC algorithm to both simulated data and empirical data to determine whether these two factors can be discerned with group average, as opposed to single subject, functional connectivity data. When applied to simulated individual subjects, the algorithm performs well in determining indirect and direct connections but fails in determining directionality. However, when applied at the group level, the PC algorithm gives strong results for both indirect and direct connections and the direction of information flow. Applying the algorithm to empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed the direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations. PMID:23501054
Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor
NASA Astrophysics Data System (ADS)
Nagy, J.; Kelly, K.
2013-09-01
Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that has been designed using high quality software engineering practices. The package is open source, and portable to a variety of high-performance computing architectures.
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
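As a rough illustration of the gradient-based least-squares minimization described, here is a generic weighted Gauss-Newton iteration with a finite-difference Jacobian. The `forward` function is a stand-in for a TOUGH2 run returning outputs at the observation points; ITOUGH2 itself offers several minimization algorithms and a much fuller error analysis.

```python
# Generic weighted Gauss-Newton sketch (assumption: `forward` is a placeholder
# for a simulator run; p is a float parameter vector).
import numpy as np

def gauss_newton(forward, p, obs, sigma, n_iter=10, eps=1e-6):
    W = np.diag(1.0 / np.asarray(sigma) ** 2)        # weights from data errors
    for _ in range(n_iter):
        r = obs - forward(p)                         # weighted residual vector
        J = np.empty((len(obs), len(p)))
        for k in range(len(p)):                      # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[k] = eps * max(1.0, abs(p[k]))
            J[:, k] = (forward(p + dp) - forward(p - dp)) / (2 * dp[k])
        p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)   # normal equations
    return p
```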
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate distributed models. The optimization problem, which minimizes the sum of squares of the normalized residuals of the observed and predicted values, was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall of the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard (WQS) was calculated for each hydrologic condition class, and the percent reduction required to achieve the WQS was estimated. The R² value for the calibrated BOD5 was 0.60, which is a moderate result, and the R² value for the TP was 0.86, which is a good result. The percent differences obtained for the calibrated BOD5 and TP were very good; therefore, the calibration results using WMCIG were satisfactory. From the load duration curve analysis, the WQS exceedance frequencies of the BOD5 under dry and low-flow conditions were 75.7% and 65%, respectively, and the exceedance frequencies under moist and mid-range conditions were higher than under other conditions. The exceedance frequencies of the TP for the high-flow, moist, and mid-range conditions were high, and the exceedance rate for the high-flow condition was particularly high. Most of the data from the high-flow conditions exceeded the WQSs. Thus, nonpoint-source pollutants from storm-water runoff substantially affected the TP concentration in the Gomakwoncheon.
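A bare-bones real-coded genetic algorithm for the stated objective, the sum of squares of normalized residuals, might look as follows. The `simulate` function is a placeholder for an HSPF run, and the influence-coefficient bookkeeping of the full WMCIG framework is not reproduced.

```python
# Minimal real-coded GA sketch (assumptions: `simulate` stands in for an HSPF
# run; lo/hi are parameter bound arrays; operator choices are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def objective(params, simulate, observed):
    resid = (observed - simulate(params)) / observed      # normalized residuals
    return np.sum(resid ** 2)

def ga_calibrate(simulate, observed, lo, hi, pop=40, gens=100):
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([objective(p, simulate, observed) for p in P])
        elite = P[np.argsort(f)[: pop // 2]]              # truncation selection
        mates = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        a = rng.random((pop - len(elite), len(lo)))
        children = a * mates[:, 0] + (1 - a) * mates[:, 1]        # blend crossover
        children += rng.normal(0, 0.02, children.shape) * (hi - lo)  # mutation
        P = np.vstack([elite, np.clip(children, lo, hi)])
    f = np.array([objective(p, simulate, observed) for p in P])
    return P[np.argmin(f)]                                # best parameter set
```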
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.
NASA Astrophysics Data System (ADS)
Menze, Moritz; Heipke, Christian; Geiger, Andreas
2018-06-01
This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.
Simulation of Surface-Water Conditions in the Nontidal Passaic River Basin, New Jersey
Spitz, Frederick J.
2007-01-01
The Passaic River Basin, the third largest drainage basin in New Jersey, encompasses 950 mi2 (square miles) in the highly urbanized area outside New York City, with a population of 2 million. Water quality in the basin is affected by many natural and anthropogenic factors. Nutrient loading to the Wanaque Reservoir in the northern part of the basin is of particular concern and is caused partly by the diversion of water at two downstream intakes that is transferred back upstream to refill the reservoir. The larger of these diversions, Wanaque South intake, is on the lower Pompton River near Two Bridges, New Jersey. To support the development of a Total Maximum Daily Load (TMDL) for nutrients in the nontidal part of the basin (805 mi2), a water-quality transport model was needed. The U.S. Geological Survey, in cooperation with the New Jersey Department of Environmental Protection and New Jersey EcoComplex, developed a flow-routing model to provide the hydraulic inputs to the water-quality model. The Diffusion Analogy Flow model (DAFLOW) described herein was designed for integration with the Water Quality Analysis Simulation Program (WASP) watershed water-quality model. The flow routing model was used to simulate flow in 108 miles of the Passaic River and major tributaries. Flow data from U.S. Geological Survey streamflow-gaging stations represent most of the model's upstream boundaries. Other model inputs include estimated flows for ungaged tributaries and unchanneled drainage along the mainstem, and reported flows for major point-source discharges and diversions. The former flows were calibrated using the drainage-area ratio method. The simulation extended over a 4+ year period representing a range in flow conditions. Simulated channel cross-sectional geometry in the DAFLOW model was calibrated using several different approaches by adjusting area and top width parameters. The model also was calibrated to observed flows for water year 2001 (low flow) at five mainstem gaging stations and one station at which flow was estimated. The model's target range was medium to low flows--the range of typical intake operations. Simulated flow mass balance, hydrographs (flood-wave speed, attenuation, and spread), flow-duration curves, and velocity and depth values were compared to observed counterparts. Mass balance and hydrograph fit were evaluated quantitatively. Simulation results generally were within the accuracy of the flow data at the measurement stations. The model was validated to observed flows for water years 2000 (average flow), 2002 (extreme low flow), and 2003 (high flow). Results for 19 of 20 comparisons indicate average mass-balance and model-fit errors of 6.6 and 15.7 percent, respectively, indicating that the model reasonably represents the time variation of streamflow in the nontidal Passaic River Basin. An algorithm (subroutine) also was developed for DAFLOW to simulate the hydraulic mixing that occurs near the Wanaque South intake upstream from the confluence of the Pompton and Passaic Rivers. The intake draws water from multiple sources, including effluent from a nearby wastewater-treatment plant, all of which have different phosphorus loads. The algorithm determines the proportion of flow from each source and operates within a narrow flow range. The equations used in the algorithm are based on the theory of diffusion and lateral mixing in rivers. Parameters used in the equations were estimated from limited available local flow and water-quality data. 
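The drainage-area ratio method mentioned above scales a gaged flow by the ratio of drainage areas; a one-function illustration follows, assuming a unit exponent (some applications use a basin-specific exponent near 1).

```python
# Drainage-area ratio sketch (assumption: exponent of 1 on the area ratio).
def drainage_area_ratio_flow(q_gaged, area_gaged_mi2, area_ungaged_mi2):
    """Scale a gaged flow to an ungaged site by the ratio of drainage areas."""
    return q_gaged * (area_ungaged_mi2 / area_gaged_mi2)
```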
As expected, simulation results for water years 2000, 2001, and 2003 indicate that most of the water drawn to the intake comes from the Pompton River; however, during many short periods of low flow and high diversion, particularly in water year 2002, entrainment of the other flow sources compensated for the insufficient flow in the Pompton River. As additional verification of the flow model used in the water-quality model, a Branched Lagrangian Transport Model (B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corbet Jr., Thomas F; Beyeler, Walter E; Vanwestrienen, Dirk
NetFlow Dynamics is a web-accessible analysis environment for simulating dynamic flows of materials on model networks. Performing a simulation requires both the NetFlow Dynamics application and a network model, which is a description of the structure of the nodes and edges of a network, including the flow capacity of each edge and the storage capacity of each node, and the sources and sinks of the material flowing on the network. NetFlow Dynamics consists of databases for storing network models, algorithms to calculate flows on networks, and a GIS-based graphical interface for performing simulations and viewing simulation results. Simulated flows are dynamic in the sense that flows on each edge of the network and inventories at each node change with time and can be out of equilibrium with boundary conditions. Any number of network models could be simulated using NetFlow Dynamics. To date, the models simulated have been models of petroleum infrastructure. The main model has been the National Transportation Fuels Model (NTFM), a network of U.S. oil fields, transmission pipelines, rail lines, refineries, tank farms, and distribution terminals. NetFlow Dynamics supports two different flow algorithms, the Gradient Flow algorithm and the Inventory Control algorithm, that were developed specifically for the NetFlow Dynamics application. The intent is to add additional algorithms in the future as needed. The ability to select from multiple algorithms is desirable because a single algorithm never covers all analysis needs. The current algorithms use a demand-driven, capacity-constrained formulation, which means that the algorithms strive to use all available capacity and stored inventory to meet desired flows to sinks, subject to the capacity constraints of each network component. The current flow algorithms are best suited for problems in which a material flows on a capacity-constrained network representing a supply chain in which the material supplied can be stored at each node of the network. In the petroleum models, the flowing materials are crude oil and refined products that can be stored at tank farms, refineries, or terminals (i.e. the nodes of the network). Examples of other network models that could be simulated are currency flowing in a financial network, agricultural products moving to market, or natural gas flowing on a pipeline network.
Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode.
Shen, Shijian; Nie, Xin; Zhang, Xinggan
2018-02-03
Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides the sliding spotlight mode for the first time. Sliding-spotlight mode is a novel mode that realizes imaging with not only high resolution, but also wide swath. Several key technologies for the sliding spotlight mode in high-resolution spaceborne SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for the sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated by simulations and measured data.
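As a point of reference for the CBP baseline, a much-simplified time-domain backprojection loop is sketched below. The names are assumptions: `rc[p]` holds range-compressed complex data for pulse p sampled at ranges `rbins`, `pos[p]` is the platform position, and `grid` holds pixel coordinates. The paper's fast CBP factorization and modified PFA are not reproduced.

```python
# Simplified time-domain backprojection sketch (assumptions: monostatic
# geometry, range-compressed data, no motion compensation refinements).
import numpy as np

def backproject(rc, pos, rbins, grid, wavelength):
    img = np.zeros(len(grid), complex)
    for p in range(len(pos)):
        r = np.linalg.norm(grid - pos[p], axis=1)          # pixel-to-antenna range
        sample = (np.interp(r, rbins, rc[p].real)
                  + 1j * np.interp(r, rbins, rc[p].imag))  # range-bin lookup
        img += sample * np.exp(4j * np.pi * r / wavelength)  # phase compensation
    return img
```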
NASA Technical Reports Server (NTRS)
Millman, Daniel R.
2017-01-01
Flush Air Data Systems (FADS) are becoming more prevalent on re-entry vehicles, as evidenced by the Mars Science Laboratory and the Orion Multipurpose Crew Vehicle. A FADS consists of flush-mounted pressure transducers located at various locations on the fore-body of a flight vehicle or the heat shield of a re-entry capsule. A pressure model converts the pressure readings into useful air data quantities. Two algorithms for converting pressure readings to air data have become predominant: the iterative Least Squares State Estimator (LSSE) and the Triples Algorithm. What follows herein is a new algorithm that takes advantage of the best features of both the Triples Algorithm and the LSSE. This approach employs the potential flow model and strategic differencing of the Triples Algorithm to obtain the flow angles; however, the requirements on port placement are far less restrictive, allowing for configurations that are considered optimal for a FADS.
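The potential-flow pressure model that Triples-style differencing builds on can be sketched as follows, assuming the standard FADS port model with a calibration parameter `eps`; the strategic differencing that cancels dynamic and static pressure is only indicated in a comment, not solved.

```python
# Standard FADS potential-flow port model sketch (assumptions: known
# calibration parameter eps; lam/phi are the port cone and clock angles).
import numpy as np

def incidence_angle(alpha, beta, lam, phi):
    """Local flow incidence angle at a port for flow angles alpha, beta."""
    return np.arccos(np.cos(alpha) * np.cos(beta) * np.cos(lam)
                     + np.sin(beta) * np.sin(lam) * np.sin(phi)
                     + np.sin(alpha) * np.cos(beta) * np.sin(lam) * np.cos(phi))

def port_pressure(alpha, beta, qc, p_inf, eps, lam, phi):
    """Modeled surface pressure: qc*(cos^2(th) + eps*sin^2(th)) + p_inf."""
    th = incidence_angle(alpha, beta, lam, phi)
    return qc * (np.cos(th) ** 2 + eps * np.sin(th) ** 2) + p_inf

# Differencing three ports i, j, k eliminates qc and p_inf:
# (p_i - p_j) / (p_j - p_k) depends only on the flow angles alpha and beta.
```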
Modeling, Control, and Estimation of Flexible, Aerodynamic Structures
NASA Astrophysics Data System (ADS)
Ray, Cody W.
Engineers have long been inspired by nature’s flyers. Such animals navigate complex environments gracefully and efficiently by using a variety of evolutionary adaptations for high-performance flight. Biologists have discovered a variety of sensory adaptations that provide flow state feedback and allow flying animals to feel their way through flight. A specialized skeletal wing structure and plethora of robust, adaptable sensory systems together allow nature’s flyers to adapt to myriad flight conditions and regimes. In this work, motivated by biology and the successes of bio-inspired, engineered aerial vehicles, linear quadratic control of a flexible, morphing wing design is investigated, helping to pave the way for truly autonomous, mission-adaptive craft. The proposed control algorithm is demonstrated to morph a wing into desired positions. Furthermore, motivated specifically by the sensory adaptations organisms possess, this work transitions to an investigation of aircraft wing load identification using structural response as measured by distributed sensors. A novel, recursive estimation algorithm is utilized to recursively solve the inverse problem of load identification, providing both wing structural and aerodynamic states for use in a feedback control, mission-adaptive framework. The recursive load identification algorithm is demonstrated to provide accurate load estimates in both simulation and experiment.
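As one possible reading of such a recursive estimator, a minimal recursive least-squares update is sketched below. It assumes that measured structural responses y relate linearly to the unknown loads x through a known influence matrix H; the paper's specific estimator may differ.

```python
# Recursive least-squares sketch for load identification (assumptions: linear
# measurement model y = H x + noise with scalar noise variance r).
import numpy as np

def rls_update(x, P, H, y, r=1e-2):
    """One recursive update of load estimate x with covariance P."""
    S = H @ P @ H.T + r * np.eye(len(y))
    K = P @ H.T @ np.linalg.inv(S)            # gain
    x = x + K @ (y - H @ x)                   # correct with response residual
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```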
Detection of obstacles on runway using Ego-Motion compensation and tracking of significant features
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar (Principal Investigator); Camps, Octavia (Principal Investigator); Gandhi, Tarak; Devadiga, Sadashiva
1996-01-01
This report describes a method for obstacle detection on a runway for autonomous navigation and landing of an aircraft. Detection is done in the presence of extraneous features such as tiremarks. Suitable features are extracted from the image and warping using approximately known camera and plane parameters is performed in order to compensate ego-motion as far as possible. Residual disparity after warping is estimated using an optical flow algorithm. Features are tracked from frame to frame so as to obtain more reliable estimates of their motion. Corrections are made to motion parameters with the residual disparities using a robust method, and features having large residual disparities are signaled as obstacles. Sensitivity analysis of the procedure is also studied. Nelson's optical flow constraint is proposed to separate moving obstacles from stationary ones. A Bayesian framework is used at every stage so that the confidence in the estimates can be determined.
NASA Astrophysics Data System (ADS)
Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.
2008-05-01
High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required which result in higher complexity, more chip area and increased power consumption. However, this contrasts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver which has been optimized regarding power consumption with the focus on the algorithmic and architectural level. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant increase of performance for high data rates compared to the Rake combiner which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm which achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models regarding their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation described both in SystemC and VHDL targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts parameters for the receiver like filter size and oversampling ratio to minimize the power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40% while the BER performance is not affected. This work utilizes SystemC and ORINOCO for the first estimation of power consumption in an early step of the design flow. Thereby algorithms can be compared in different operating modes including the effects of control units. Here an algorithm having higher peak complexity and power consumption but providing more flexibility showed less consumption for normal operating modes compared to the algorithm which is optimized for peak performance.
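For orientation, a minimal complex LMS adaptation loop of the general kind compared for the equalizers is sketched below, assuming complex baseband samples and illustrative step size and filter length; the modified Griffith variant and the Levinson (RLS) alternative are not reproduced.

```python
# Minimal complex LMS sketch (assumptions: x is the received sample stream,
# d holds desired/training symbols; parameters are illustrative).
import numpy as np

def lms_equalize(x, d, taps=16, mu=0.01):
    """Adapt FIR weights w so that w^H x_n tracks the desired symbol d_n."""
    w = np.zeros(taps, complex)
    y = np.zeros(len(d), complex)
    for n in range(taps, len(d)):
        xn = x[n - taps:n][::-1]                 # most recent sample first
        y[n] = np.vdot(w, xn)                    # filter output w^H x
        w += mu * np.conj(d[n] - y[n]) * xn      # stochastic gradient step
    return w, y
```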
Optimal and robust control of transition
NASA Technical Reports Server (NTRS)
Bewley, T. R.; Agarwal, R.
1996-01-01
Optimal and robust control theories are used to determine feedback control rules that effectively stabilize a linearly unstable flow in a plane channel. Wall transpiration (unsteady blowing/suction) with zero net mass flux is used as the control. Control algorithms are considered that depend both on full flowfield information and on estimates of that flowfield based on wall skin-friction measurements only. The development of these control algorithms accounts for modeling errors and measurement noise in a rigorous fashion; these disturbances are considered in both a structured (Gaussian) and unstructured ('worst case') sense. The performance of these algorithms is analyzed in terms of the eigenmodes of the resulting controlled systems, and the sensitivity of individual eigenmodes to both control and observation is quantified.
Using flow information to support 3D vessel reconstruction from rotational angiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waechter, Irina; Bredno, Joerg; Weese, Juergen
2008-07-15
For the assessment of cerebrovascular diseases, it is beneficial to obtain three-dimensional (3D) morphologic and hemodynamic information about the vessel system. Rotational angiography is routinely used to image the 3D vascular geometry and we have shown previously that rotational subtraction angiography has the potential to also give quantitative information about blood flow. Flow information can be determined when the angiographic sequence shows inflow and possibly outflow of contrast agent. However, a standard volume reconstruction assumes that the vessel tree is uniformly filled with contrast agent during the whole acquisition. If this is not the case, the reconstruction exhibits artifacts. Here, we show how flow information can be used to support the reconstruction of the 3D vessel centerline and radii in this case. Our method uses the fast marching algorithm to determine the order in which voxels are analyzed. For every voxel, the rotational time intensity curve (R-TIC) is determined from the image intensities at the projection points of the current voxel. Next, the bolus arrival time of the contrast agent at the voxel is estimated from the R-TIC. Then, a measure of the intensity and duration of the enhancement is determined, from which a speed value is calculated that steers the propagation of the fast marching algorithm. The results of the fast marching algorithm are used to determine the 3D centerline by backtracking. The 3D radius is reconstructed from 2D radius estimates on the projection images. The proposed method was tested on computer simulated rotational angiography sequences with systematically varied x-ray acquisition, blood flow, and contrast agent injection parameters and on datasets from an experimental setup using an anthropomorphic cerebrovascular phantom. For the computer simulation, the mean absolute error of the 3D centerline and 3D radius estimation was 0.42 and 0.25 mm, respectively. For the experimental datasets, the mean absolute error of the 3D centerline was 0.45 mm. Under pulsatile and nonpulsatile conditions, flow information can be used to enable a 3D vessel reconstruction from rotational angiography with inflow and possibly outflow of contrast agent. We found that the most important parameter for the quality of the reconstruction of centerline and radii is the range through which the x-ray system rotates in the time span of the injection. Good results were obtained if this range was at least 135 deg. As a standard C-arm can rotate 205 deg., typically one third of the acquisition can show inflow or outflow of contrast agent, which is required for the quantification of blood flow from rotational angiography.
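A discrete stand-in for the front propagation can be written with Dijkstra's algorithm on the voxel grid. This is an assumption-laden sketch: a travel cost of 1/speed per step approximates the continuous fast-marching solution, and `speed` is taken to encode the enhancement measure derived from each voxel's R-TIC.

```python
# Dijkstra sketch as a discrete fast-marching stand-in (assumptions:
# 6-connected grid, speed > 0 where enhancement was observed).
import heapq
import numpy as np

def march(speed, seed):
    t = np.full(speed.shape, np.inf)
    parent = {}                                 # for centerline backtracking
    t[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > t[v]:                            # stale heap entry
            continue
        z, y, x = v
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            w = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(w, speed.shape)):
                nd = d + 1.0 / max(speed[w], 1e-9)   # slow voxels cost more
                if nd < t[w]:
                    t[w], parent[w] = nd, v
                    heapq.heappush(heap, (nd, w))
    return t, parent                            # backtrack parent[] for the centerline
```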
NASA Astrophysics Data System (ADS)
Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.
2010-03-01
Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
Concepts on tracking the impact of tropical cyclones through the coastal zone
NASA Astrophysics Data System (ADS)
Syvitski, J. P.; Hannon, M. T.; Kettner, A. J.; Bachman, S.
2009-12-01
WAVEWATCH III™ (Tolman, 2009) models the evolution of wind wave spectra under the influence of wind, breaking, nonlinear interactions, bottom interaction (including shoaling and refraction), currents, water level changes and ice concentrations. The NOAA/NCEP data system offers global estimates every 3 hr at 1° x 1.25° for wind speed and direction at 10 m asl, wave direction, height, and period. These and other derived parameters are useful in characterizing wave conditions as tropical cyclones approach landfall. The Tropical Rainfall Measuring Mission (TRMM) provides precipitation estimates on a global 0.25° x 0.25° grid between 50° N-S, produced within ≈7 hours of observation time. Estimates are derived from the Passive Microwave Radiometer, Precipitation Radar, and Visible-Infrared Scanner, plus data from: i) SSM/I, ii) low-orbit GOES IR and the TIROS Operational Vertical Sounder, iii) AMSR-E, iv) AMSU-B, and v) rain gauge data, run through algorithm 3B-43. Data are served by the Goddard Distributed Active Archive Center. Evapotranspiration (ET) estimates are from the MODIS ET (MOD16) algorithm developed by Mu et al. (2007), based on the Penman-Monteith equation, modified with satellite information that uses: (1) vapor pressure deficit and minimum air temperature constraints on stomatal conductance; (2) leaf area index as a scalar for estimating canopy conductance; (3) the Enhanced Vegetation Index; and (4) a calculation of soil evaporation. TopoFlow is a spatially distributed hydrologic model able to ingest the TRMM and ET data through a suite of hydrologic processes (e.g. snowmelt, precipitation, evapotranspiration, infiltration, channel and overland flow, shallow subsurface flow, and flow diversions) to evolve in time in response to climatic forcings. Modeled or gauged discharge can then be coupled to sediment flux models to provide factor-of-2 estimates of sediment flux (Syvitski et al. 2007, Kettner et al. 2008, Syvitski and Milliman 2007). The MODIS satellite constellation can track storm fronts and tropical cyclones, sense sediment discharge and resuspension of shoreline sediment, and be used to observe the dimensions and dynamics of delta flooding and delta-plain aggradation (Syvitski et al. 2009). An integrated workflow involving these models and data systems will be presented, outlining their use in characterizing sediment flux within the coastal zone.
A nowcasting technique based on application of the particle filter blending algorithm
NASA Astrophysics Data System (ADS)
Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai
2017-10-01
To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied in the quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method proves superior to the traditional forecasting methods and can be used to enhance the capability of nowcasting in operational weather forecasts.
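The tracking step maps naturally onto standard OpenCV calls; a hedged sketch follows, assuming `prev_img` and `next_img` are consecutive 8-bit single-channel mosaic images. The particle filter blending of the retrieved vectors and the semi-Lagrangian extrapolation are not shown.

```python
# Harris corners + pyramidal Lucas-Kanade between two mosaics (assumptions:
# 8-bit grayscale inputs; parameter values are illustrative).
import cv2
import numpy as np

def track_echoes(prev_img, next_img):
    pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=500, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None,
                                                 winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1                     # keep successfully tracked points
    old, new = pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
    return old, new - old                        # positions and motion vectors (px/frame)
```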
NASA Astrophysics Data System (ADS)
Engelhardt, Sandy; Kolb, Silvio; De Simone, Raffaele; Karck, Matthias; Meinzer, Hans-Peter; Wolf, Ivo
2016-03-01
Mitral valve annuloplasty describes a surgical procedure where an artificial prosthesis is sutured onto the anatomical structure of the mitral annulus to re-establish the valve's functionality. Choosing an appropriate commercially available ring size and shape is a difficult decision the surgeon has to make intraoperatively according to his experience. In our augmented-reality framework, digitized ring models are superimposed onto endoscopic image streams without using any additional hardware. To place the ring model at the proper position within the endoscopic image plane, a pose estimation is performed that depends on the localization of sutures placed by the surgeon around the leaflet origins and punctured through the stiffer structure of the annulus. In this work, the tissue penetration points are tracked by the real-time capable Lucas-Kanade optical flow algorithm. The accuracy and robustness of this tracking algorithm is investigated with respect to the question whether outliers influence the subsequent pose estimation. Our results suggest that optical flow is very stable for a variety of different endoscopic scenes and tracking errors do not affect the position of the superimposed virtual objects in the scene, making this approach a viable candidate for augmented reality-enhanced decision support in annuloplasty.
Yoo, Do Guen; Lee, Ho Min; Sadollah, Ali; Kim, Joong Hoon
2015-01-01
Water supply systems are mainly classified into branched and looped network systems. The main difference between these two systems is that, in a branched network system, the flow within each pipe is a known value, whereas in a looped network system, the flow in each pipe is considered an unknown value. Therefore, an analysis of a looped network system is a more complex task. This study aims to develop a technique for estimating the optimal pipe diameter for a looped agricultural irrigation water supply system using a harmony search algorithm, which is an optimization technique. This study mainly serves two purposes. The first is to develop an algorithm and a program for estimating a cost-effective pipe diameter for agricultural irrigation water supply systems using optimization techniques. The second is to validate the developed program by applying the proposed optimized cost-effective pipe diameter to an actual study region (Saemangeum project area, zone 6). The results suggest that the optimal design program, which applies an optimization theory and enhances user convenience, can be effectively applied for the real systems of a looped agricultural irrigation water supply. PMID:25874252
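A compact harmony-search loop for discrete diameter selection might look as follows. Here `cost` is a placeholder returning pipe cost plus a penalty when a hydraulic solver flags pressure violations, and the parameter values are illustrative.

```python
# Harmony-search sketch for discrete pipe diameters (assumptions: `cost` is a
# placeholder objective; hmcr/par are standard harmony-search parameters).
import random

def harmony_search(cost, diameters, n_pipes, hms=30, iters=5000,
                   hmcr=0.9, par=0.3):
    memory = [[random.choice(diameters) for _ in range(n_pipes)]
              for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(n_pipes):
            if random.random() < hmcr:                 # draw from harmony memory
                v = random.choice(memory)[j]
                if random.random() < par:              # pitch adjustment: neighbor size
                    i = diameters.index(v)
                    v = diameters[max(0, min(len(diameters) - 1,
                                             i + random.choice((-1, 1))))]
            else:                                      # random consideration
                v = random.choice(diameters)
            new.append(v)
        worst = max(memory, key=cost)
        if cost(new) < cost(worst):                    # replace worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=cost)
```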
NASA Astrophysics Data System (ADS)
Huebner, Claudia S.
2016-10-01
As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring, as well as global and local image motion that creates geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames, simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly, and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
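The ES baseline is simple enough to sketch in full, assuming grayscale float images and sum of absolute differences (SAD) as the matching criterion.

```python
# Exhaustive-search block matching sketch (assumptions: SAD criterion,
# block-aligned grid, illustrative block and search-window sizes).
import numpy as np

def es_block_match(prev, curr, block=16, search=7):
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):      # scan every candidate shift
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(prev[y:y + block, x:x + block] - ref).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v  # backward motion vector
    return vectors
```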
H-P adaptive methods for finite element analysis of aerothermal loads in high-speed flows
NASA Technical Reports Server (NTRS)
Chang, H. J.; Bass, J. M.; Tworzydlo, W.; Oden, J. T.
1993-01-01
The commitment to develop the National Aerospace Plane and Maneuvering Reentry Vehicles has generated resurgent interest in the technology required to design structures for hypersonic flight. The principal objective of this research and development effort has been to formulate and implement a new class of computational methodologies for accurately predicting fine scale phenomena associated with this class of problems. The initial focus of this effort was to develop optimal h-refinement and p-enrichment adaptive finite element methods which utilize a-posteriori estimates of the local errors to drive the adaptive methodology. Over the past year this work has specifically focused on two issues which are related to overall performance of a flow solver. These issues include the formulation and implementation (in two dimensions) of an implicit/explicit flow solver compatible with the hp-adaptive methodology, and the design and implementation of computational algorithm for automatically selecting optimal directions in which to enrich the mesh. These concepts and algorithms have been implemented in a two-dimensional finite element code and used to solve three hypersonic flow benchmark problems (Holden Mach 14.1, Edney shock on shock interaction Mach 8.03, and the viscous backstep Mach 4.08).
A biologically inspired network design model.
Zhang, Xiaoge; Adamatzky, Andrew; Chan, Felix T S; Deng, Yong; Yang, Hai; Yang, Xin-She; Tsompanas, Michail-Antisthenis I; Sirakoulis, Georgios Ch; Mahadevan, Sankaran
2015-06-04
A network design problem is to select a subset of links in a transport network that satisfies passenger or cargo transportation demands while minimizing the overall costs of the transportation. We propose a mathematical model of the foraging behaviour of the slime mould P. polycephalum to solve the network design problem and construct optimal transport networks. In our algorithm, the traffic flow between any two cities is estimated using a gravity model. The flow is imitated by the model of the slime mould. The model converges to a steady state, which represents a solution of the problem. We validate our approach on examples of major transport networks in Mexico and China. By comparing networks developed in our approach with the man-made highways, networks developed by the slime mould, and a cellular automata model inspired by slime mould, we demonstrate the flexibility and efficiency of our approach. PMID:26041508
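The gravity-model seeding step admits a one-line illustration; the proportionality constant and the squared-distance exponent below are modeling choices, not values taken from the paper.

```python
# Gravity-model traffic estimate sketch (assumptions: flow proportional to the
# product of city populations over squared distance; k and the exponent are
# illustrative modeling choices).
def gravity_flow(pop_i, pop_j, dist_ij, k=1.0):
    """Estimated traffic demand between two cities."""
    return k * pop_i * pop_j / dist_ij ** 2
```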
Sobel, E.; Lange, K.
1996-01-01
The introduction of stochastic methods in pedigree analysis has enabled geneticists to tackle computations intractable by standard deterministic methods. Until now these stochastic techniques have worked by running a Markov chain on the set of genetic descent states of a pedigree. Each descent state specifies the paths of gene flow in the pedigree and the founder alleles dropped down each path. The current paper follows up on a suggestion by Elizabeth Thompson that genetic descent graphs offer a more appropriate space for executing a Markov chain. A descent graph specifies the paths of gene flow but not the particular founder alleles traveling down the paths. This paper explores algorithms for implementing Thompson's suggestion for codominant markers in the context of automatic haplotyping, estimating location scores, and computing gene-clustering statistics for robust linkage analysis. Realistic numerical examples demonstrate the feasibility of the algorithms. PMID:8651310
NASA Technical Reports Server (NTRS)
Srivastava, Prashant K.; Han, Dawei; Rico-Ramirez, Miguel A.; O'Neill, Peggy; Islam, Tanvir; Gupta, Manika
2014-01-01
Soil Moisture and Ocean Salinity (SMOS) is the latest mission which provides a flow of coarse resolution soil moisture data for land applications. However, the efficient retrieval of soil moisture for hydrological applications depends on optimally choosing the soil and vegetation parameters. The first stage of this work involves the evaluation of SMOS Level 2 products, and then several approaches for soil moisture retrieval from SMOS brightness temperature are performed to estimate Soil Moisture Deficit (SMD). The most widely applied algorithm, i.e. the single channel algorithm (SCA) based on the tau-omega model, is used in this study for the soil moisture retrieval. In tau-omega, the soil moisture is retrieved using the Horizontal (H) polarisation following the Hallikainen dielectric model, roughness parameters, Fresnel's equations and estimated Vegetation Optical Depth (tau). The roughness parameters are empirically calibrated using numerical optimization techniques. Further, to explore improvements in the retrieval models, modifications have been incorporated in the algorithms with respect to the sources of the parameters: effective temperatures are derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) downscaled using the Weather Research and Forecasting (WRF)-NOAH Land Surface Model and Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST), while tau is derived from the MODIS Leaf Area Index (LAI). All the evaluations are performed against SMD, which is estimated using the Probability Distributed Model following a careful calibration and validation integrated with sensitivity and uncertainty analysis. The performance obtained after all those changes indicates that SCA-H using WRF-NOAH LSM downscaled ECMWF LST produces an improved performance for SMD estimation at a catchment scale.
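A sketch of the zeroth-order tau-omega forward model that the SCA inverts is given below, assuming H polarization, smooth-surface Fresnel reflectivity from a known (possibly complex) dielectric constant, and a simple exponential roughness correction; the Hallikainen dielectric model itself is not reproduced.

```python
# Tau-omega forward model sketch, H polarization (assumptions: known soil
# dielectric constant eps, Choudhury-style roughness factor exp(-h_rough)).
import numpy as np

def fresnel_reflectivity_h(eps, theta):
    """Smooth-surface H-pol reflectivity at incidence angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    rh = (c - np.sqrt(eps - s ** 2)) / (c + np.sqrt(eps - s ** 2))
    return np.abs(rh) ** 2

def tau_omega_tb_h(eps, theta, t_soil, t_veg, tau, omega, h_rough):
    gamma = np.exp(-tau / np.cos(theta))                         # canopy transmissivity
    r_h = fresnel_reflectivity_h(eps, theta) * np.exp(-h_rough)  # rough-surface reflectivity
    e_h = 1.0 - r_h                                              # soil emissivity
    return (t_soil * e_h * gamma                                 # attenuated soil emission
            + t_veg * (1 - omega) * (1 - gamma) * (1 + r_h * gamma))  # canopy emission
```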
3D Reconstruction and Approximation of Vegetation Geometry for Modeling of Within-canopy Flows
NASA Astrophysics Data System (ADS)
Henderson, S. M.; Lynn, K.; Lienard, J.; Strigul, N.; Mullarney, J. C.; Norris, B. K.; Bryan, K. R.
2016-02-01
Aquatic vegetation can shelter coastlines from waves and currents, sometimes resulting in accretion of fine sediments. We developed a photogrammetric technique for estimating the key geometric vegetation parameters that are required for modeling of within-canopy flows. Accurate estimates of vegetation geometry and density are essential to refine hydrodynamic models, but accurate, convenient, and time-efficient methodologies for measuring complex canopy geometries have been lacking. The novel approach presented here builds on recent progress in photogrammetry and computer vision. We analyzed the geometry of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Although comparatively thin, pneumatophores are more numerous than mangrove trunks, and thus influence near bed flow and sediment transport. Quadrats (1 m2) were placed at low tide among pneumatophores. Roots were counted and measured for height and diameter. Photos were taken from multiple angles around each quadrat. Relative camera locations and orientations were estimated from key features identified in multiple images using open-source software (VisualSfM). Next, a dense 3D point cloud was produced. Finally, algorithms were developed for automated estimation of pneumatophore geometry from the 3D point cloud. We found good agreement between hand-measured and photogrammetric estimates of key geometric parameters, including mean stem diameter, total number of stems, and frontal area density. These methods can reduce time spent measuring in the field, thereby enabling future studies to refine models of water flows and sediment transport within heterogenous vegetation canopies.
NASA Astrophysics Data System (ADS)
Evans, M. N.; Selmer, K. J.; Breeden, B. T.; Lopatka, A. S.; Plummer, R. E.
2016-09-01
We describe an algorithm to correct for scale compression, runtime drift, and amplitude effects in carbonate and cellulose oxygen and carbon isotopic analyses made on two online continuous flow isotope ratio mass spectrometry (CF-IRMS) systems using gas chromatographic (GC) separation. We validate the algorithm by correcting measurements of samples of known isotopic composition which are not used to estimate the corrections. For carbonate δ13C (δ18O) data, median precision of validation estimates for two reference materials and two calibrated working standards is 0.05‰ (0.07‰); median bias is 0.04‰ (0.02‰) over a range of 49.2‰ (24.3‰). For α-cellulose δ13C (δ18O) data, median precision of validation estimates for one reference material and five working standards is 0.11‰ (0.27‰); median bias is 0.13‰ (-0.10‰) over a range of 16.1‰ (19.1‰). These results are within the 5th-95th percentile range of subsequent routine runtime validation exercises in which one working standard is used to calibrate the other. Analysis of the relative importance of correction steps suggests that drift and scale-compression corrections are most reliable and valuable. If validation precisions are not already small, routine cross-validated precision estimates are improved by up to 50% (80%). The results suggest that correction for systematic error may enable these particular CF-IRMS systems to produce δ13C and δ18O carbonate and cellulose isotopic analyses with higher validated precision, accuracy, and throughput than is typically reported for these systems. The correction scheme may be used in support of replication-intensive research projects in paleoclimatology and other data-intensive applications within the geosciences.
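In spirit, the drift-correction step regresses the misfit of standards on run position and removes the fitted trend; a hedged sketch follows, assuming linear drift within a run. The published scheme corrects scale compression and amplitude effects by analogous regressions, which are not reproduced here.

```python
# Linear drift-correction sketch (assumptions: standards of known isotopic
# composition interspersed through the run; linear drift with run position).
import numpy as np

def drift_correct(position, delta, std_positions, std_deltas, std_true):
    """Correct sample deltas using standards measured through the run."""
    resid = np.asarray(std_deltas) - std_true             # standard misfit vs. position
    slope, intercept = np.polyfit(std_positions, resid, 1)
    return np.asarray(delta) - (slope * np.asarray(position) + intercept)
```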
An optical flow-based state-space model of the vocal folds.
Granados, Alba; Brunskog, Jonas
2017-06-01
High-speed movies of the vocal fold vibration are valuable data to reveal vocal fold features for voice pathology diagnosis. This work presents a suitable Bayesian model and a purely theoretical discussion for further development of a framework for continuum biomechanical features estimation. A linear and Gaussian nonstationary state-space model is proposed and thoroughly discussed. The evolution model is based on a self-sustained three-dimensional finite element model of the vocal folds, and the observation model involves a dense optical flow algorithm. The results show that the method is able to capture different deformation patterns between the computed optical flow and the finite element deformation, controlled by the choice of the model tissue parameters.
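A linear, Gaussian state-space model of this kind invites a standard Kalman-style update; a generic sketch follows, with the finite element model assumed to be linearized into an evolution matrix A and the optical flow entering through an observation matrix C, both placeholders here.

```python
# Generic Kalman filter step for a linear-Gaussian state-space model
# (assumptions: A, C, Q, R are placeholders for the paper's FE-based evolution
# and optical-flow observation models).
import numpy as np

def kalman_step(x, P, A, C, Q, R, y):
    x_pred = A @ x                                   # evolution (FE model surrogate)
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)              # gain
    x = x_pred + K @ (y - C @ x_pred)                # correct with optical-flow data
    P = (np.eye(len(x)) - K @ C) @ P_pred
    return x, P
```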
Research on Flow Field Perception Based on Artificial Lateral Line Sensor System.
Liu, Guijie; Wang, Mengmeng; Wang, Anyi; Wang, Shirui; Yang, Tingting; Malekian, Reza; Li, Zhixiong
2018-03-11
In nature, the lateral line of fish is a peculiar and important organ for sensing the surrounding hydrodynamic environment, preying, escaping from predators and schooling. In this paper, by imitating the mechanism of fish lateral canal neuromasts, we developed an artificial lateral line system composed of micro-pressure sensors. Through hydrodynamic simulations, an optimized sensor structure was obtained and pressure distribution models of the lateral surface were established in uniform flow and turbulent flow. In a corresponding underwater experiment, the validity of the numerical simulation method was verified by comparing the experimental data with the simulation results. In addition, a variety of effective research methods are proposed and validated for flow velocity estimation and attitude perception in turbulent flow, respectively, and the shape recognition of obstacles is realized by a neural network algorithm.
Two-component wind fields over ocean waves using atmospheric lidar and motion estimation algorithms
NASA Astrophysics Data System (ADS)
Mayor, S. D.
2016-02-01
Numerical models, such as large eddy simulations, are capable of providing stunning visualizations of the air-sea interface. One reason for this is the inherent spatial nature of such models. As compute power grows, models are able to provide higher resolution visualizations over larger domains, revealing intricate details of the interactions of ocean waves and the airflow over them. Spatial observations on the other hand, which are necessary to validate the simulations, appear to lag behind models. The rough ocean environment of the real world is an additional challenge. One method of providing spatial observations of fluid flow is that of particle image velocimetry (PIV). PIV has been successfully applied to many problems in engineering and the geosciences. This presentation will show recent research results that demonstrate that a PIV-style approach using pulsed-fiber atmospheric elastic backscatter lidar hardware and wavelet-based optical flow motion estimation software can reveal two-component wind fields over rough ocean surfaces. Namely, a recently developed compact lidar was deployed for 10 days in March of 2015 in the Eureka, California area. It scanned over the ocean. Imagery reveals that breaking ocean waves provide copious amounts of particulate matter for the lidar to detect and for the motion estimation algorithms to retrieve wind vectors from. The image below shows two examples of results from the experiment. The left panel shows the elastic backscatter intensity (copper shades) under a field of vectors that was retrieved by the wavelet-based optical flow algorithm from two scans that took about 15 s each to acquire. The vectors, which reveal offshore flow toward the NW, were decimated for clarity. The bright aerosol features along the right edge of the sector scan were caused by ocean waves breaking on the beach. The right panel is the result of scanning over the ocean on a day when wave amplitudes ranged from 8-12 feet and whitecaps offshore beyond the surf zone appeared to be rare and fleeting. Nonetheless, faint coherent aerosol structures are observable in the backscatter field as long, streaky, wind-parallel filaments, and a wind field was retrieved. During the 10-day deployment, the seas were not as rough as expected. A current goal is to find collaborators and return to map airflow in rougher conditions.
Analysis of a simulation algorithm for direct brain drug delivery
Rosenbluth, Kathryn Hammond; Eschermann, Jan Felix; Mittermeyer, Gabriele; Thomson, Rowena; Mittermeyer, Stephan; Bankiewicz, Krystof S.
2011-01-01
Convection enhanced delivery (CED) achieves targeted delivery of drugs with a pressure-driven infusion through a cannula placed stereotactically in the brain. This technique bypasses the blood brain barrier and gives precise distributions of drugs, minimizing off-target effects of compounds such as viral vectors for gene therapy or toxic chemotherapy agents. The exact distribution is affected by the cannula positioning, flow rate and underlying tissue structure. This study presents an analysis of a simulation algorithm for predicting the distribution using baseline MRI images acquired prior to inserting the cannula. The MRI images included diffusion tensor imaging (DTI) to estimate the tissue properties. The algorithm was adapted for the devices and protocols identified for upcoming trials and validated with direct MRI visualization of Gadolinium in 20 infusions in non-human primates. We found strong agreement between the size and location of the simulated and gadolinium volumes, demonstrating the clinical utility of this surgical planning algorithm. PMID:21945468
A power autonomous monopedal robot
NASA Astrophysics Data System (ADS)
Krupp, Benjamin T.; Pratt, Jerry E.
2006-05-01
We present the design and initial results of a power-autonomous planar monopedal robot. The robot is a gasoline powered, two degree of freedom robot that runs in a circle, constrained by a boom. The robot uses hydraulic Series Elastic Actuators, force-controllable actuators which provide high force fidelity, moderate bandwidth, and low impedance. The actuators are mounted in the body of the robot, with cable drives transmitting power to the hip and knee joints of the leg. A two-stroke, gasoline engine drives a constant displacement pump which pressurizes an accumulator. Absolute position and spring deflection of each of the Series Elastic Actuators are measured using linear encoders. The spring deflection is translated into force output and compared to desired force in a closed loop force-control algorithm implemented in software. The output signal of each force controller drives high performance servo valves which control flow to each of the pistons of the actuators. In designing the robot, we used a simulation-based iterative design approach. Preliminary estimates of the robot's physical parameters were based on past experience and used to create a physically realistic simulation model of the robot. Next, a control algorithm was implemented in simulation to produce planar hopping. Using the joint power requirements and range of motions from simulation, we worked backward specifying pulley diameter, piston diameter and stroke, hydraulic pressure and flow, servo valve flow and bandwidth, gear pump flow, and engine power requirements. Components that meet or exceed these specifications were chosen and integrated into the robot design. Using CAD software, we calculated the physical parameters of the robot design, replaced the original estimates with the CAD estimates, and produced new joint power requirements. We iterated on this process, resulting in a design which was prototyped and tested. The Monopod currently runs at approximately 1.2 m/s with the weight of all the power generating components, but powered from an off-board pump. On a test stand, the eventual on-board power system generates enough pressure and flow to meet the requirements of these runs and we are currently integrating the power system into the real robot. When operated from an off-board system without carrying the weight of the power generating components, the robot currently runs at approximately 2.25 m/s. Ongoing work is focused on integrating the power system into the robot, improving the control algorithm, and investigating methods for improving efficiency.
Sando, Steven K.; McCarthy, Peter M.
2018-05-10
This report documents the methods for peak-flow frequency (hereinafter “frequency”) analysis and reporting for streamgages in and near Montana following implementation of the Bulletin 17C guidelines. The methods are used to provide estimates of peak-flow quantiles for 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for selected streamgages operated by the U.S. Geological Survey Wyoming-Montana Water Science Center (WY–MT WSC). These annual exceedance probabilities correspond to 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively.Standard procedures specific to the WY–MT WSC for implementing the Bulletin 17C guidelines include (1) the use of the Expected Moments Algorithm analysis for fitting the log-Pearson Type III distribution, incorporating historical information where applicable; (2) the use of weighted skew coefficients (based on weighting at-site station skew coefficients with generalized skew coefficients from the Bulletin 17B national skew map); and (3) the use of the Multiple Grubbs-Beck Test for identifying potentially influential low flows. For some streamgages, the peak-flow records are not well represented by the standard procedures and require user-specified adjustments informed by hydrologic judgement. The specific characteristics of peak-flow records addressed by the informed-user adjustments include (1) regulated peak-flow records, (2) atypical upper-tail peak-flow records, and (3) atypical lower-tail peak-flow records. In all cases, the informed-user adjustments use the Expected Moments Algorithm fit of the log-Pearson Type III distribution using the at-site station skew coefficient, a manual potentially influential low flow threshold, or both.Appropriate methods can be applied to at-site frequency estimates to provide improved representation of long-term hydroclimatic conditions. The methods for improving at-site frequency estimates by weighting with regional regression equations and by Maintenance of Variance Extension Type III record extension are described.Frequency analyses were conducted for 99 example streamgages to indicate various aspects of the frequency-analysis methods described in this report. The frequency analyses and results for the example streamgages are presented in a separate data release associated with this report consisting of tables and graphical plots that are structured to include information concerning the interpretive decisions involved in the frequency analyses. Further, the separate data release includes the input files to the PeakFQ program, version 7.1, including the peak-flow data file and the analysis specification file that were used in the peak-flow frequency analyses. Peak-flow frequencies are also reported in separate data releases for selected streamgages in the Beaverhead River and Clark Fork Basins and also for selected streamgages in the Ruby, Jefferson, and Madison River Basins.
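For readers who want the core computation, fitting the log-Pearson Type III distribution by moments of the log peaks and reading off quantiles can be sketched as follows. This is a method-of-moments sketch with the at-site skew only; the Expected Moments Algorithm, historical information, weighted skew, and the Multiple Grubbs-Beck Test of the full Bulletin 17C procedure are not reproduced.

```python
# Method-of-moments LP3 fit and quantile lookup (assumptions: at-site skew,
# no low-outlier screening or historical-record adjustments).
import numpy as np
from scipy import stats

def lp3_quantiles(peaks_cfs, aeps=(0.5, 0.2, 0.1, 0.04, 0.02, 0.01)):
    logq = np.log10(peaks_cfs)
    skew = stats.skew(logq, bias=False)               # at-site station skew
    dist = stats.pearson3(skew, loc=logq.mean(), scale=logq.std(ddof=1))
    return {p: 10 ** dist.ppf(1 - p) for p in aeps}   # flow for each AEP
```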
NASA Astrophysics Data System (ADS)
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moths in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO) and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of the solutions of the different algorithms is examined in detail. The travelers' behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform the other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating transit O-D matrices.
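A compact sketch of a moth-flame optimizer applied to stop aggregation, using the sum of intra-cluster distances as the fitness, as in the abstract. This follows the standard MFO update (a logarithmic spiral around a shrinking set of flames); the population size, iteration budget, spiral constant, and synthetic stop coordinates are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(centroids, points):
    """Sum of intra-cluster distances: each stop is assigned to its nearest centroid."""
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()

def mfo_cluster(points, k=3, n_moths=30, iters=200, b=1.0):
    lo, hi = points.min(), points.max()
    dim = k * points.shape[1]                      # a moth encodes k centroids
    moths = rng.uniform(lo, hi, (n_moths, dim))
    f = lambda v: fitness(v.reshape(k, -1), points)
    flames = moths.copy()
    flame_fit = np.array([f(m) for m in flames])
    order = np.argsort(flame_fit)
    flames, flame_fit = flames[order], flame_fit[order]
    for it in range(iters):
        # Number of flames shrinks linearly so moths converge on the best solutions
        n_flames = round(n_moths - it * (n_moths - 1) / iters)
        a = -1.0 - it / iters                      # spiral parameter: -1 down to -2
        for i in range(n_moths):
            j = min(i, n_flames - 1)               # surplus moths share the last flame
            t = (a - 1.0) * rng.random(dim) + 1.0
            dist = np.abs(flames[j] - moths[i])
            moths[i] = dist * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j]
        # Merge moths and flames, keep the best n_moths as the new flames
        cand = np.vstack([flames, moths])
        cand_fit = np.concatenate([flame_fit, [f(m) for m in moths]])
        order = np.argsort(cand_fit)[:n_moths]
        flames, flame_fit = cand[order], cand_fit[order]
    return flames[0].reshape(k, -1), flame_fit[0]

stops = rng.normal(size=(200, 2)) + rng.choice([-4, 0, 4], size=(200, 1))
centroids, cost = mfo_cluster(stops)
print(f"sum of intra-cluster distances: {cost:.2f}")
```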
User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.
1988-01-01
Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.
Daily values flow comparison and estimates using program HYCOMP, version 1.0
Sanders, Curtis L.
2002-01-01
A method used by the U.S. Geological Survey for quality control in computing daily value flow records is to compare hydrographs of computed flows at a station under review to hydrographs of computed flows at a selected index station. The hydrographs are placed on top of each other (as hydrograph overlays) on a light table, compared, and missing daily flow data estimated. This method, however, is subjective and can produce inconsistent results, because hydrographers can differ when calculating acceptable limits of deviation between observed and estimated flows. Selection of appropriate index stations also is judgemental, giving no consideration to the mathematical correlation between the review station and the index station(s). To address the limitation of the hydrograph overlay method, a set of software programs, written in the SAS macro language, was developed and designated program HYCOMP. The program automatically selects statistically comparable index stations by correlation and regression, and performs hydrographic comparisons and estimates of missing data by regressing daily mean flows at the review station against flows at one or two index stations lagged from -8 to +8 days, and against day-of-week. Another advantage that HYCOMP has over the graphical method is that estimated flows, the criteria for determining the quality of the data, and the selection of index stations are determined statistically and are reproducible from one user to another. HYCOMP will load the most-correlated index stations into another file containing the "best index stations," but will not overwrite stations already in the file. A knowledgeable user should delete unsuitable index stations from this file based on the standard error of estimate, hydrologic similarity of candidate index stations to the review station, and knowledge of the individual station characteristics. Also, the user can add index stations not selected by HYCOMP, if desired. Once the file of best index stations is created, a user may do hydrographic comparison and data estimates by entering the number of the review station, selecting an index station, and specifying the periods to be used for regression and plotting. For example, the user can restrict the regression to ice-free periods of the year to exclude flows estimated during iced conditions. However, the regression could still be used to estimate flow during iced conditions. HYCOMP produces the standard error of estimate as a measure of the central scatter of the regression and R-square (coefficient of determination) for evaluating the accuracy of the regression. Output from HYCOMP includes plots of percent residuals against (1) time within the regression and plot periods, (2) month and day of the year for evaluating seasonal bias in the regression, and (3) the magnitude of flow. For hydrographic comparisons, it plots 2-month segments of hydrographs over the selected plot period showing the observed flows, the regressed flows, the 95-percent confidence limit flows, flow measurements, and regression limits. If the observed flows at the review station remain outside the 95-percent confidence limits for a prolonged period, there may be some error in the flows at the review station or at the index station(s). In addition, daily minimum and maximum temperatures and daily rainfall are shown on the hydrographs, if available, to help indicate whether an apparent change in flow may result from rainfall or from changes in backwater from melting ice or freezing water.
HYCOMP statistically smooths estimated flows from non-missing flows at the edges of the gaps in data into regressed flows at the center of the gaps using the Kalman smoothing algorithm. Missing flows are automatically estimated by HYCOMP, but the user also can specify that periods of erroneous but nonmissing flows be estimated by the program.
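A minimal sketch of the regression step as described: review-station daily flows regressed on index-station flows lagged -8 to +8 days, with the fit then used to fill gaps. The day-of-week term and the Kalman smoothing at gap edges are omitted here, and all names and the synthetic records are hypothetical:

```python
import numpy as np

def estimate_missing_flows(review, index, max_lag=8):
    """Regress log daily flows at the review station on log index-station flows
    lagged -max_lag..+max_lag days, then fill NaN gaps in the review record."""
    logr, logi = np.log(review), np.log(index)
    n, k = len(logr), max_lag
    X = np.column_stack([logi[k + lag: n - k + lag]
                         for lag in range(-k, k + 1)])
    X = np.column_stack([np.ones(len(X)), X])          # intercept term
    y = logr[k: n - k]
    fit = ~np.isnan(y)                                 # regress on non-missing days only
    beta, *_ = np.linalg.lstsq(X[fit], y[fit], rcond=None)
    pred = X @ beta
    filled = logr.copy()
    gap = np.isnan(y)
    filled[k: n - k][gap] = pred[gap]                  # fill gaps with regressed flows
    return np.exp(filled)

# Synthetic demo: the review station echoes the index station one day later
rng = np.random.default_rng(1)
index = np.exp(rng.normal(2.0, 0.5, 400))
review = np.roll(index, 1) * np.exp(rng.normal(0, 0.05, 400))
review[100:110] = np.nan                               # a 10-day gap to estimate
print(estimate_missing_flows(review, index)[100:110].round(1))
```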
NASA Technical Reports Server (NTRS)
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates on the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently was proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm, which then becomes a completely self-adaptive procedure, were derived. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.
A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.
Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W
2009-03-01
We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.
Assessment of computational prediction of tail buffeting
NASA Technical Reports Server (NTRS)
Edwards, John W.
1990-01-01
Assessments of the viability of computational methods and the computer resource requirements for the prediction of tail buffeting are made. Issues involved in the use of Euler and Navier-Stokes equations in modeling vortex-dominated and buffet flows are discussed and the requirement for sufficient grid density to allow accurate, converged calculations is stressed. Areas in need of basic fluid dynamics research are highlighted: vorticity convection, vortex breakdown, dynamic turbulence modeling for free shear layers, unsteady flow separation for moderately swept, rounded leading-edge wings, vortex flows about wings at high subsonic speeds. An estimate of the computer run time for a buffeting response calculation for a full span F-15 aircraft indicates that an improvement in computer and/or algorithm efficiency of three orders of magnitude is needed to enable routine use of such methods. Attention is also drawn to significant uncertainties in the estimates, in particular with regard to nonlinearities contained within the modeling and the question of the repeatability or randomness of buffeting response.
Full-order optimal compensators for flow control: the multiple inputs case
NASA Astrophysics Data System (ADS)
Semeraro, Onofrio; Pralits, Jan O.
2018-03-01
Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.
Development of Turbulent Diffusion Transfer Algorithms to Estimate Lake Tahoe Water Budget
NASA Astrophysics Data System (ADS)
Sahoo, G. B.; Schladow, S. G.; Reuter, J. E.
2012-12-01
The evaporative loss is a dominant component in the Lake Tahoe hydrologic budget because the watershed area (813 km²) is very small compared to the lake surface area (501 km²). The 5.5 m high dam built at the lake's only outlet, the Truckee River at Tahoe City, can increase the lake's capacity by approximately 0.9185 km³. The lake serves as flood protection for downstream areas and as a source of water supply for downstream cities, irrigation, hydropower, and instream environmental requirements. When the lake water level falls below the natural rim, cessation of flows from the lake causes problems for water supply, irrigation, and fishing. Therefore, it is important to develop algorithms that correctly estimate the lake hydrologic budget. We developed a turbulent diffusion transfer model and coupled it to the dynamic lake model (DLM-WQ). We generated the stream flows and pollutant loadings of the streams using the U.S. Environmental Protection Agency (USEPA) supported watershed model, Loading Simulation Program in C++ (LSPC). The bulk transfer coefficients were calibrated using the correlation coefficient (R²) as the objective function. Sensitivity analysis was conducted for the meteorological inputs and model parameters. The DLM-WQ estimates of lake water level and water temperature were in agreement with measured records, with R² equal to 0.96 and 0.99, respectively, for the period 1994 to 2008. The estimated average evaporation from the lake, stream inflow, precipitation over the lake, groundwater fluxes, and outflow from the lake during 1994 to 2008 were found to be 32.0%, 25.0%, 19.0%, 0.3%, and 11.7%, respectively.
Non-iterative double-frame 2D/3D particle tracking velocimetry
NASA Astrophysics Data System (ADS)
Fuchs, Thomas; Hain, Rainer; Kähler, Christian J.
2017-09-01
In recent years, the detection of individual particle images and their tracking over time to determine the local flow velocity has become quite popular for planar and volumetric measurements. Particle tracking velocimetry has strong advantages compared to the statistical analysis of an ensemble of particle images by means of cross-correlation approaches such as particle image velocimetry. Tracking individual particles does not suffer from spatial averaging, and therefore bias errors can be avoided. Furthermore, the spatial resolution can be increased up to the sub-pixel level for mean fields. Maximizing the spatial resolution for instantaneous measurements requires high seeding concentrations. However, it is still challenging to track particles at high seeding concentrations if no time series is available. Tracking methods used under these conditions are typically very complex iterative algorithms, which require expert knowledge due to the large number of adjustable parameters. To overcome these drawbacks, a new non-iterative tracking approach is introduced in this letter, which automatically analyzes the motion of the neighboring particles without requiring the user to specify any parameters except the displacement limits. This makes the algorithm very user friendly and allows inexperienced users to apply and implement particle tracking. In addition, the algorithm enables measurements of high-speed flows using standard double-pulse equipment and estimates the flow velocity reliably even at large particle image densities.
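A sketch of the double-frame idea reduced to its simplest form: nearest-neighbour matching limited only by a displacement bound. The letter's actual algorithm additionally analyzes the motion of neighbouring particles to disambiguate matches at high seeding densities; this stripped-down version only illustrates the parameter-light interface, and the uniform-shift test case is synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_particles(p1, p2, max_disp):
    """Match each particle in frame 1 to its nearest neighbour in frame 2
    within max_disp; returns positions and displacements of accepted matches."""
    tree = cKDTree(p2)
    dist, idx = tree.query(p1, distance_upper_bound=max_disp)
    ok = np.isfinite(dist)                       # unmatched queries return inf
    return p1[ok], p2[idx[ok]] - p1[ok]

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, (500, 2))
shift = np.array([1.5, 0.3])                     # uniform flow for the synthetic test
frame2 = pts + shift + rng.normal(0, 0.05, pts.shape)
pos, disp = match_particles(pts, frame2, max_disp=3.0)
print(disp.mean(axis=0))                         # ~ [1.5, 0.3]
```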
Inferring Aquifer Transmissivity from River Flow Data
NASA Astrophysics Data System (ADS)
Trichakis, Ioannis; Pistocchi, Alberto
2016-04-01
Daily streamflow data is the measurable result of many different hydrological processes within a basin; therefore, it includes information about all these processes. In this work, recession analysis applied to a pan-European dataset of measured streamflow was used to estimate hydrogeological parameters of the aquifers that contribute to stream flow. Under the assumption that base flow in times of no precipitation is mainly due to groundwater, we estimated parameters of European shallow aquifers connected with the stream network, identified on the basis of the 1:1,500,000 scale Hydrogeological Map of Europe. To this end, master recession curves (MRCs) were constructed based on the RECESS model of the USGS for 1601 stream gauge stations across Europe. The process consists of three stages. Firstly, the model analyses the stream flow time series. Then, it uses regression to calculate the recession index. Finally, it infers characteristics of the aquifer from the recession index. During time-series analysis, the model identifies those segments where the number of successive recession days is above a certain threshold. The reason for this pre-processing lies in the necessity for an adequate number of points when performing regression at a later stage. The recession index derives from the semi-logarithmic plot of stream flow over time, and the post-processing involves the calculation of geometrical parameters of the watershed through a GIS platform. The program scans the full stream flow dataset of all the stations. For each station, it identifies the segments with continuous recession that exceed a predefined number of days. When the algorithm finds all the segments of a certain station, it analyses them and calculates the best linear fit between time and the logarithm of flow. The algorithm repeats this procedure for the full number of segments, thus calculating many different values of the recession index for each station. After the program has found all the recession segments, it performs calculations to determine the expression for the MRC. Further processing of the MRCs can yield estimates of transmissivity or response time representative of the aquifers upstream of the station. These estimates can be useful for large-scale (e.g. continental) groundwater modelling. The above procedure allowed us to calculate values of transmissivity for a large share of European aquifers, ranging from Tmin = 4.13×10⁻⁴ m²/d to Tmax = 8.12×10³ m²/d, with an average value Taverage = 9.65×10¹ m²/d. These results are in line with the literature, indicating that the procedure may provide realistic results for large-scale groundwater modelling. In this contribution we present the results in the perspective of their application for the parameterization of a pan-European two-dimensional shallow groundwater flow model.
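A minimal sketch of the segment-extraction and regression stages described above: find runs of continuously declining flow longer than a threshold, then regress time on log flow so the slope gives a recession index in days per log cycle (the RECESS convention). The MRC construction and GIS post-processing are omitted, and the exponential test recession is synthetic:

```python
import numpy as np

def recession_indices(flow, min_days=10):
    """Extract continuous recession segments of at least min_days and regress
    time on log10(Q); the slope gives the recession index K (days per log cycle)."""
    q = np.asarray(flow, dtype=float)
    ks, start = [], 0
    for i in range(1, len(q) + 1):
        if i == len(q) or q[i] >= q[i - 1]:        # recession run ends here
            if i - start >= min_days:
                t = np.arange(start, i)
                slope = np.polyfit(np.log10(q[start:i]), t, 1)[0]
                ks.append(-slope)                   # days per log cycle of decline
            start = i
        # else: flow is still declining, keep extending the run
    return ks

t = np.arange(120)
print(recession_indices(100 * np.exp(-t / 50)))    # ~ [115.1], i.e. 50*ln(10) days
```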
Reliable two-dimensional phase unwrapping method using region growing and local linear estimation.
Zhou, Kun; Zaitsev, Maxim; Bao, Shanglian
2009-10-01
In MRI, phase maps can provide useful information about parameters such as field inhomogeneity, velocity of blood flow, and the chemical shift between water and fat. As phase is defined in the (−π, π] range, however, phase wraps often occur, which complicates image analysis and interpretation. This work presents a two-dimensional phase unwrapping algorithm that uses quality-guided region growing and local linear estimation. The quality map employs the variance of the second-order partial derivatives of the phase as the quality criterion. Phase information from unwrapped neighboring pixels is used to predict the correct phase of the current pixel using a linear regression method. The algorithm was tested on both simulated and real data, and is shown to successfully unwrap phase images that are corrupted by noise and have rapidly changing phase.
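A simplified sketch of quality-guided region growing for 2-D unwrapping. Here quality is a crude proxy (negative magnitude of second-order phase differences, rather than the paper's variance criterion), and each pixel is predicted from one already-unwrapped neighbour instead of the paper's local linear regression; the ramp test image is synthetic:

```python
import heapq
import numpy as np

def quality_map(phase):
    """Crude quality proxy: negative magnitude of second-order phase differences."""
    q = np.zeros_like(phase)
    q[:, 1:-1] -= np.abs(np.diff(phase, n=2, axis=1))
    q[1:-1, :] -= np.abs(np.diff(phase, n=2, axis=0))
    return q

def neighbours(idx, h, w):
    r, c = idx
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < h and 0 <= c + dc < w]

def unwrap(phase, quality):
    """Grow from the best-quality pixel, always unwrapping the highest-quality
    frontier pixel against its already-unwrapped neighbour."""
    h, w = phase.shape
    out = phase.astype(float).copy()
    done = np.zeros((h, w), bool)
    seed = np.unravel_index(np.argmax(quality), phase.shape)
    done[seed] = True
    heap = [(-quality[r, c], (r, c), seed) for r, c in neighbours(seed, h, w)]
    heapq.heapify(heap)
    while heap:
        _, (r, c), (pr, pc) = heapq.heappop(heap)
        if done[r, c]:
            continue
        # Add the multiple of 2*pi that brings this pixel closest to its neighbour
        out[r, c] += 2 * np.pi * np.round((out[pr, pc] - out[r, c]) / (2 * np.pi))
        done[r, c] = True
        for nb in neighbours((r, c), h, w):
            if not done[nb]:
                heapq.heappush(heap, (-quality[nb], nb, (r, c)))
    return out

phase_true = np.fromfunction(lambda r, c: 0.3 * r + 0.2 * c, (32, 32))
wrapped = np.angle(np.exp(1j * phase_true))
recovered = unwrap(wrapped, quality_map(wrapped))
print(np.allclose(recovered - recovered[0, 0], phase_true))   # True, up to a constant
```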
A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.
Ahn, C B; Cho, Z H
1987-01-01
A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each by the inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all the correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
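A minimal sketch of the two corrections described: the first-order phase slope estimated from the lag-1 spatial autocorrelation of the complex image, and the zero-order offset taken from the peak of the phase histogram of the first-order-corrected image. The axis convention, bin count, and magnitude weighting are assumptions of this sketch:

```python
import numpy as np

def phase_correct(img):
    """First-order correction from the lag-1 autocorrelation phase, then
    zero-order correction from the histogram peak of the residual phase."""
    ny, nx = img.shape
    # First order: mean phase step per pixel along the readout axis
    slope = np.angle(np.sum(img[:, 1:] * np.conj(img[:, :-1])))
    img1 = img * np.exp(-1j * slope * np.arange(nx))
    # Zero order: most populated bin of the magnitude-weighted phase histogram
    hist, edges = np.histogram(np.angle(img1).ravel(), bins=180,
                               range=(-np.pi, np.pi), weights=np.abs(img1).ravel())
    offset = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    return img1 * np.exp(-1j * offset)

# Synthetic test: a real image distorted by a linear phase ramp plus an offset
rng = np.random.default_rng(2)
truth = rng.random((64, 64))
x = np.arange(64)
distorted = truth * np.exp(1j * (0.05 * x + 0.7))
corrected = phase_correct(distorted)
print(np.abs(np.angle(corrected)).max())   # ~0 (within one histogram bin width)
```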
Tuning-free controller to accurately regulate flow rates in a microfluidic network
NASA Astrophysics Data System (ADS)
Heo, Young Jin; Kang, Junsu; Kim, Min Jun; Chung, Wan Kyun
2016-03-01
We describe a control algorithm that can improve the accuracy and stability of flow regulation in a microfluidic network that uses a conventional pressure pump system. The algorithm enables simultaneous and independent control of fluid flows in multiple micro-channels of a microfluidic network, but does not require any model parameters or tuning process. We investigate the robustness and optimality of the proposed control algorithm, and these are verified by simulations and experiments. In addition, the control algorithm is compared with a conventional PID controller to show that the proposed control algorithm resolves critical problems induced by PID control. The capability of the control algorithm can be used not only for high-precision flow regulation in the presence of disturbance, but also for some useful functions for lab-on-a-chip devices such as regulation of volumetric flow rate, interface position control of two laminar flows, valveless flow switching, droplet generation and particle manipulation. We demonstrate those functions and also suggest further potential biological applications which can be accomplished by the proposed control framework.
Sakuno, Yuji; Miño, Esteban R; Nakai, Satoshi; Mutsuda, Hidemi; Okuda, Tetsuji; Nishijima, Wataru; Castro, Rolando; García, Amarillis; Peña, Rosanna; Rodríguez, Marcos; Depratt, G Conrado
2014-07-01
This study examines the distribution of contaminants in rivers that flow into the Caribbean Sea, using chlorophyll-a (Chl-a) and suspended sediment (SS) as markers and ALOS AVNIR-2 satellite sensor data. The Haina River (HN) and the Ozama and Isabela Rivers (OZ-IS), which flow through the city of Santo Domingo, the capital of the Dominican Republic, were chosen. First, in situ spectral reflectance/Chl-a and SS datasets from these rivers were acquired in March 2011 (case A: with no rain influence) and June 2011 (case B: with rain influence), and an estimation algorithm for Chl-a and SS using AVNIR-2 data was developed from the datasets. The developed algorithm was then applied to AVNIR-2 data from November 2010 for case A and August 2010 for case B. Results revealed that for Chl-a and SS estimation under case A and case B conditions, the reflectance ratio of AVNIR-2 band 4 to band 3 (AV4/AV3) and the reflectance of AVNIR-2 band 4 (AV4) were effective. The Chl-a and SS mapping results obtained using AVNIR-2 data corresponded with the field survey results. Finally, an outline of the distribution of contaminants at the mouths of the rivers that flow into the Caribbean Sea was obtained for both rivers in cases A and B.
A second-order accurate parabolized Navier-Stokes algorithm for internal flows
NASA Technical Reports Server (NTRS)
Chitsomboon, T.; Tiwari, S. N.
1984-01-01
A parabolized implicit Navier-Stokes algorithm which is of second-order accuracy in both the cross flow and marching directions is presented. The algorithm is used to analyze three model supersonic flow problems (including the flow over a 10-degree wedge). The results are found to be in good agreement with the results of other techniques available in the literature.
Active heat pulse sensing of 3-D-flow fields in streambeds
NASA Astrophysics Data System (ADS)
Banks, Eddie W.; Shanafield, Margaret A.; Noorduijn, Saskia; McCallum, James; Lewandowski, Jörg; Batelaan, Okke
2018-03-01
Profiles of temperature time series are commonly used to determine hyporheic flow patterns and hydraulic dynamics in streambed sediments. Although hyporheic flows are 3-D, past research has focused on determining the magnitude of the vertical flow component and how this varies spatially. This study used a portable 56-sensor, 3-D temperature array with three heat pulse sources to measure the flow direction and magnitude up to 200 mm below the water-sediment interface. Short, 1 min heat pulses were injected at one of the three heat sources and the temperature response was monitored over a period of 30 min. Breakthrough curves from each of the sensors were analysed using a heat transport equation. Parameter estimation and uncertainty analysis were undertaken using the differential evolution adaptive metropolis (DREAM) algorithm, an adaptation of the Markov chain Monte Carlo method, to estimate the flux and its orientation. Measurements were conducted in the field and in a sand tank under an extensive range of controlled hydraulic conditions to validate the method. The use of short-duration heat pulses provided a rapid, accurate assessment technique for determining dynamic and multi-directional flow patterns in the hyporheic zone and is a basis for improved understanding of biogeochemical processes at the water-streambed interface.
Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition
NASA Astrophysics Data System (ADS)
Kesrarat, Darun; Patanavijit, Vorapoj
2017-02-01
In optical flow estimation, the reliability of the resulting motion vectors (MVs) is an important issue, and noisy conditions can make the output of optical flow algorithms unreliable. We find that many classical optical flow algorithms produce better results under noisy conditions when combined with a modern optimized model. This paper introduces robust optical flow models that apply an adaptive Lorentzian norm influence function to simple spatial-temporal optical flow algorithms. Experiments on the proposed models confirm better noise tolerance in the optical flow's MVs under noisy conditions when the models are applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of our models in experiments on several typical sequences with different foreground and background movement speeds, where the test sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels (dB). The results, measured by the peak signal-to-noise ratio (PSNR) of the reconstructed images, show the high noise tolerance of the proposed models.
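For reference, a sketch of the Lorentzian norm and its influence function, the ingredient such robust models use to down-weight noise-corrupted residuals that a quadratic norm would let dominate; the scale value and residuals below are illustrative:

```python
import numpy as np

def lorentzian_rho(r, sigma):
    """Lorentzian error norm: grows only logarithmically, so outliers are bounded."""
    return np.log1p(0.5 * (r / sigma) ** 2)

def lorentzian_psi(r, sigma):
    """Influence function (d rho / d r): decays toward 0 for large residuals,
    unlike the quadratic norm whose influence grows without bound."""
    return r / (sigma ** 2 * (1.0 + 0.5 * (r / sigma) ** 2))

residuals = np.array([0.1, 0.2, -0.1, 5.0])     # one gross (noise-corrupted) outlier
print(lorentzian_rho(residuals, 1.0).sum())     # outlier contributes ~2.6, not 12.5
```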
Identification of PARMA Models and Their Application to the Modeling of River flows
NASA Astrophysics Data System (ADS)
Tesfaye, Y. G.; Meerschaert, M. M.; Anderson, P. L.
2004-05-01
The generation of synthetic river flow samples that can reproduce the essential statistical features of historical river flows is essential to the planning, design and operation of water resource systems. Most river flow series are periodically stationary; that is, their mean and covariance functions are periodic with respect to time. We employ a periodic ARMA (PARMA) model. The innovations algorithm can be used to obtain parameter estimates for PARMA models with finite fourth moment as well as infinite fourth moment but finite variance. Anderson and Meerschaert (2003) provide a method for model identification when the time series has finite fourth moment. This article, an extension of the previous work by Anderson and Meerschaert, demonstrates the effectiveness of the technique using simulated data. An application to monthly flow data for the Frazier River in British Columbia is also included to illustrate the use of these methods.
Research on Flow Field Perception Based on Artificial Lateral Line Sensor System
Wang, Anyi; Wang, Shirui; Yang, Tingting
2018-01-01
In nature, the lateral line of fish is a peculiar and important organ for sensing the surrounding hydrodynamic environment, preying, escaping from predators and schooling. In this paper, by imitating the mechanism of fish lateral canal neuromasts, we developed an artificial lateral line system composed of micro-pressure sensors. Through hydrodynamic simulations, an optimized sensor structure was obtained and pressure distribution models of the lateral surface were established in uniform flow and turbulent flow. In a corresponding underwater experiment, the validity of the numerical simulation method was verified by comparison between the experimental data and the simulation results. In addition, a variety of effective research methods are proposed and validated for flow velocity estimation and attitude perception in turbulent flow, respectively, and the shape recognition of obstacles is realized by a neural network algorithm.
NASA Astrophysics Data System (ADS)
Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela
2015-10-01
An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water level measurements, and the resulting control loop has been extensively simulated to assess the system performance under different measurement availability scenarios and rain events. All simulations were carried out using a detailed physically based model of a real case-study network as a virtual reality.
Smart algorithms and adaptive methods in computational fluid dynamics
NASA Astrophysics Data System (ADS)
Tinsley Oden, J.
1989-05-01
A review is presented of the use of smart algorithms which employ adaptive methods in processing large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and criteria may be reapplied in order to direct the analysis toward an acceptable end. Intelligent decisions are made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods and their application to complex problems of fluid dynamics are reviewed. The basic components of adaptive methods are: (1) data structures, that is what approaches are available for modifying data structures of an approximation so as to reduce errors; (2) error estimation, that is what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, what algorithms are available which can function in changing meshes. Numerical examples which demonstrate the viability of these approaches are presented.
Ultrasonic technique for imaging tissue vibrations: preliminary results.
Sikdar, Siddhartha; Beach, Kirk W; Vaezy, Shahram; Kim, Yongmin
2005-02-01
We propose an ultrasound (US)-based technique for imaging vibrations in the blood vessel walls and surrounding tissue caused by eddies produced during flow through narrowed or punctured arteries. Our approach is to utilize the clutter signal, normally suppressed in conventional color flow imaging, to detect and characterize local tissue vibrations. We demonstrate the feasibility of visualizing the origin and extent of vibrations relative to the underlying anatomy and blood flow in real-time and their quantitative assessment, including measurements of the amplitude, frequency and spatial distribution. We present two signal-processing algorithms, one based on phase decomposition and the other based on spectral estimation using eigen decomposition for isolating vibrations from clutter, blood flow and noise using an ensemble of US echoes. In simulation studies, the computationally efficient phase-decomposition method achieved 96% sensitivity and 98% specificity for vibration detection and was robust to broadband vibrations. Somewhat higher sensitivity (98%) and specificity (99%) could be achieved using the more computationally intensive eigen decomposition-based algorithm. Vibration amplitudes as low as 1 µm were measured accurately in phantom experiments. Real-time tissue vibration imaging at typical color-flow frame rates was implemented on a software-programmable US system. Vibrations were studied in vivo in a stenosed femoral bypass vein graft in a human subject and in a punctured femoral artery and incised spleen in an animal model.
NASA Astrophysics Data System (ADS)
Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang
2018-05-01
Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data, obtained from Monte Carlo simulation, indicate that the comprehensive performance of the IAGA-SC algorithm exceeds the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It especially shows great advantages in restoring simply connected flow regimes and the shape of the object. In addition, a CT experiment with two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
McGinn, Patrick J; MacQuarrie, Scott P; Choi, Jerome; Tartakovsky, Boris
2017-01-01
In this study, production of the microalga Scenedesmus AMDD in a 300 L continuous flow photobioreactor was maximized using an online flow (dilution rate) control algorithm. To enable online control, biomass concentration was estimated in real time by measuring chlorophyll-related culture fluorescence. A simple microalgae growth model was developed and used to solve the optimization problem aimed at maximizing the photobioreactor productivity. When optimally controlled, the Scenedesmus AMDD culture demonstrated an average volumetric biomass productivity of 0.11 g L⁻¹ d⁻¹ over a 25 day cultivation period, equivalent to a 70% performance improvement compared to the same photobioreactor operated as a turbidostat. The proposed approach for optimizing photobioreactor flow can be adapted to a broad range of microalgae cultivation systems.
Time-derivative preconditioning for viscous flows
NASA Technical Reports Server (NTRS)
Choi, Yunho; Merkle, Charles L.
1991-01-01
A time-derivative preconditioning algorithm that is effective over a wide range of flow conditions, from inviscid to very diffusive flows and from low speed to supersonic flows, was developed. This algorithm uses a viscous set of primary dependent variables to introduce well-conditioned eigenvalues and to avoid having a nonphysical time reversal for viscous flow. The resulting algorithm also provides a mechanism for controlling the inviscid and viscous time step parameters to be of order one for very diffusive flows, thereby ensuring rapid convergence for very viscous flows as well as for inviscid flows. Convergence capabilities are demonstrated through computation of a wide variety of problems.
An Embedded Device for Real-Time Noninvasive Intracranial Pressure Estimation.
Matthews, Jonathan M; Fanelli, Andrea; Heldt, Thomas
2018-01-01
The monitoring of intracranial pressure (ICP) is indicated for diagnosing and guiding therapy in many neurological conditions. Current monitoring methods, however, are highly invasive, limiting their use to the most critically ill patients only. Our goal is to develop and test an embedded device that performs all necessary mathematical operations in real-time for noninvasive ICP (nICP) estimation based on a previously developed model-based approach that uses cerebral blood flow velocity (CBFV) and arterial blood pressure (ABP) waveforms. The nICP estimation algorithm along with the required preprocessing steps were implemented on an NXP LPC4337 microcontroller unit (MCU). A prototype device using the MCU was also developed, complete with display, recording functionality, and peripheral interfaces for ABP and CBFV monitoring hardware. The device produces an estimate of mean ICP once per minute and performs the necessary computations in 410 ms, on average. Real-time nICP estimates differed from the original batch-mode MATLAB implementation of the estimation algorithm by 0.63 mmHg (root-mean-square error). We have demonstrated that real-time nICP estimation is possible on a microprocessor platform, which offers the advantages of low cost, small size, and product modularity over a general-purpose computer. These attributes take a step toward the goal of real-time nICP estimation at the patient's bedside in a variety of clinical settings.
Design and Calibration of the X-33 Flush Airdata Sensing (FADS) System
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Cobleigh, Brent R.; Haering, Edward A.
1998-01-01
This paper presents the design of the X-33 Flush Airdata Sensing (FADS) system. The X-33 FADS uses a matrix of pressure orifices on the vehicle nose to estimate airdata parameters. The system is designed with dual-redundant measurement hardware, which produces two independent measurement paths. Airdata parameters that correspond to the measurement path with the minimum fit error are selected as the output values. This method enables a single sensor failure to occur with minimal degradation of system performance. The paper shows the X-33 FADS architecture, derives the estimating algorithms, and presents a mathematical analysis of the FADS system stability. Preliminary aerodynamic calibrations are also presented here. The calibration parameters, namely the position error coefficient (epsilon) and the flow correction terms for angle of attack (delta alpha) and angle of sideslip (delta beta), are derived from wind tunnel data. Statistical accuracy of the calibration is evaluated by comparing the wind tunnel reference conditions to the estimated airdata parameters. This comparison is accomplished by applying the calibrated FADS algorithm to the sensed wind tunnel pressures. When the resulting accuracy estimates are compared to the accuracy requirements for the X-33 airdata, the FADS system meets these requirements.
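A sketch of the estimation step using the port-pressure model form common in the flush-airdata literature, p_i = qc(cos²θ_i + ε sin²θ_i) + p∞, with the incidence angle θ_i determined by each port's cone and clock angles. The port layout, ε value, and initial guesses here are illustrative assumptions, not X-33 values:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative port geometry: cone angle lam from the nose axis, clock angle phi.
LAM = np.radians([0, 20, 20, 20, 20])
PHI = np.radians([0, 0, 90, 180, 270])

def port_pressures(alpha, beta, qc, pinf, eps):
    """p_i = qc*(cos^2(theta_i) + eps*sin^2(theta_i)) + pinf, where theta_i is
    the local flow-incidence angle at port i for the given alpha and beta."""
    cth = (np.cos(alpha) * np.cos(beta) * np.cos(LAM)
           + np.sin(beta) * np.sin(PHI) * np.sin(LAM)
           + np.sin(alpha) * np.cos(beta) * np.cos(PHI) * np.sin(LAM))
    return qc * (cth ** 2 + eps * (1.0 - cth ** 2)) + pinf

def estimate_airdata(p_meas, eps=0.15):
    """Least-squares estimate of (alpha, beta, qc, pinf) from port pressures."""
    fun = lambda x: port_pressures(x[0], x[1], x[2], x[3], eps) - p_meas
    return least_squares(fun, x0=[0.0, 0.0, 5e3, 70e3], x_scale='jac').x

truth = (np.radians(4.0), np.radians(-1.0), 8e3, 65e3)
p = port_pressures(*truth, eps=0.15)
alpha, beta, qc, pinf = estimate_airdata(p)
print(np.degrees([alpha, beta]), qc, pinf)   # recovers approx. [4, -1], 8e3, 65e3
```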
Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed
NASA Technical Reports Server (NTRS)
Tian, Ye; Song, Qi; Cattafesta, Louis
2005-01-01
This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.
Smart Grid Integrity Attacks: Characterizations and Countermeasures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annarita Giani; Eilyan Bitar; Miles McQueen
2011-10-01
Real power injections at loads and generators, and real power flows on selected lines in a transmission network are monitored, transmitted over a SCADA network to the system operator, and used in state estimation algorithms to make dispatch, re-balance and other energy management system [EMS] decisions. Coordinated cyber attacks on power meter readings can be arranged to be undetectable by any bad data detection algorithm. These unobservable attacks present a serious threat to grid operations. Of particular interest are sparse attacks that involve the compromise of a modest number of meter readings. An efficient algorithm to find all unobservable attacks [under standard DC load flow approximations] involving the compromise of exactly two power injection meters and an arbitrary number of power meters on lines is presented. This requires O(n²m) flops for a power system with n buses and m line meters. If all lines are metered, there exist canonical forms that characterize all 3-, 4-, and 5-sparse unobservable attacks. These can be quickly detected in power systems using standard graph algorithms. Known secure phase measurement units [PMUs] can be used as countermeasures against an arbitrary collection of cyber attacks. Finding the minimum number of necessary PMUs is NP-hard. It is shown that p + 1 PMUs at carefully chosen buses are sufficient to neutralize a collection of p cyber attacks.
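A minimal sketch of the unobservability test implied by the DC model: with measurements z = Hx, an attack vector a evades residual-based bad-data detection exactly when it lies in the column space of H. The 3-measurement matrix below is a toy example, not from the report:

```python
import numpy as np

def is_unobservable(H, a, tol=1e-9):
    """An attack a on measurements z = H*x evades residual-based bad-data
    detection iff a lies in the column space of H (i.e., a = H*c for some c)."""
    c, *_ = np.linalg.lstsq(H, a, rcond=None)
    return np.linalg.norm(H @ c - a) < tol

# Toy 2-state DC example (hypothetical measurement matrix)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])                     # e.g., two injections and one line flow
c = np.array([0.5, -0.2])
print(is_unobservable(H, H @ c))                # True: coordinated, undetectable
print(is_unobservable(H, np.array([1.0, 0.0, 0.0])))   # False: detectable
```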
NASA Astrophysics Data System (ADS)
Kardhana, Hadi; Arya, Doni Khaira; Hadihardaja, Iwan K.; Widyaningtyas; Riawan, Edi; Lubis, Atika
2017-11-01
Small-Scale Hydropower (SHP) has been an important electric energy source in Indonesia. Indonesia is a vast country, consisting of more than 17,000 islands. It has a large freshwater resource, with about 3 m of annual rainfall and 2 m of runoff. Much of its topography is mountainous and remote but abundant in potential energy. Millions of people do not have sufficient access to electricity, and some live in remote places. Recently, SHP development has been encouraged for the energy supply of these places. The development of global hydrology data provides an opportunity to predict the distribution of hydropower potential. In this paper, we demonstrate a run-of-river SHP site prediction tool using SWAT and a river diversion algorithm. The Soil and Water Assessment Tool (SWAT), with input from a 10-year period of CFSR (Climate Forecast System Reanalysis) data, is used to predict the spatially distributed cumulative distribution function (CDF) of flow. A simple algorithm that maximizes the potential head at a location through a river diversion, representing the head race and penstock, is then applied. The firm flow and power of the SHP are estimated from the CDF and this algorithm. The tool was applied to the Upper Citarum River Basin, and three out of four existing hydropower locations were predicted well. The result implies that this tool is able to support the acceleration of SHP development in its early phases.
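A minimal sketch of the firm-flow-to-power step: read a firm flow off the flow CDF (here the flow equaled or exceeded 95% of the time, an assumed criterion) and convert it with P = ρgηQH. The efficiency, head, and synthetic flow record are illustrative values:

```python
import numpy as np

RHO, G = 1000.0, 9.81   # water density [kg/m^3], gravitational acceleration [m/s^2]

def firm_power_kw(daily_flows_m3s, head_m, exceedance=0.95, efficiency=0.8):
    """Firm flow = flow equaled or exceeded `exceedance` of the time (from the
    flow CDF); firm power follows from P = rho*g*eta*Q*H."""
    q_firm = np.quantile(daily_flows_m3s, 1.0 - exceedance)
    return RHO * G * efficiency * q_firm * head_m / 1e3

rng = np.random.default_rng(3)
flows = rng.lognormal(mean=1.0, sigma=0.6, size=3650)   # 10 years of synthetic flows
print(f"{firm_power_kw(flows, head_m=25.0):.0f} kW")
```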
NASA Astrophysics Data System (ADS)
Tuozzolo, S.; Durand, M. T.; Pavelsky, T.; Pentecost, J.
2015-12-01
The upcoming Surface Water and Ocean Topography (SWOT) satellite will provide measurements of river width and water surface elevation and slope along continuous swaths of world rivers. Understanding water surface slope and width dynamics in river reaches is important for both developing and validating discharge algorithms to be used on future SWOT data. We collected water surface elevation and river width data along a 6.5 km stretch of the Olentangy River in Columbus, Ohio from October to December 2014. Continuous measurements of water surface height were supplemented with periodic river width measurements at twenty sites along the study reach. The water surface slope of the entire reach ranged from 41.58 cm/km at baseflow to 45.31 cm/km after a storm event. The study reach was also broken into sub-reaches roughly 1 km in length to study smaller-scale slope dynamics. The furthest upstream sub-reaches are characterized by free-flowing riffle-pool sequences, while the furthest downstream sub-reaches were directly affected by two low-head dams. In the sub-reaches immediately upstream of each dam, the baseflow slope is as low as 2 cm/km, while the furthest upstream free-flowing sub-reach has a baseflow slope of 100 cm/km. During high flow events the backwater effect of the dams was observed to propagate upstream: sub-reaches impounded by the dams had increased water surface slopes, while free-flowing sub-reaches had decreased water surface slopes. During the largest observed flow event, a stage change of 0.40 m affected sub-reach slopes by as much as 30 cm/km. Further analysis will examine height-width relationships within the study reach and relate cross-sectional flow area to river stage. These relationships can be used in conjunction with slope data to estimate discharge using a modified Manning's equation, and are a core component of discharge algorithms being developed for the SWOT mission.
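The modified Manning's equation mentioned above builds on the standard form Q = (1/n) A R^(2/3) S^(1/2). A sketch with illustrative channel numbers (only the baseflow slope comes from the abstract; width, depth, and roughness are assumptions):

```python
def manning_discharge(width_m, depth_m, slope, n=0.035):
    """Manning's equation Q = (1/n) * A * R^(2/3) * S^(1/2) for a rectangular
    channel; SWOT-style discharge algorithms invert a modified form of this."""
    area = width_m * depth_m
    radius = area / (width_m + 2.0 * depth_m)   # hydraulic radius A / wetted perimeter
    return area * radius ** (2.0 / 3.0) * slope ** 0.5 / n

# Baseflow-like conditions on the study reach (illustrative numbers only)
print(f"{manning_discharge(width_m=30.0, depth_m=0.8, slope=41.58e-5):.1f} m^3/s")
```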
Cubarsi, R; Carrió, M M; Villaverde, A
2005-09-01
The in vivo proteolytic digestion of bacterial inclusion bodies (IBs) and the kinetic analysis of the resulting protein fragments is an interesting approach to investigating the molecular organization of these unconventional protein aggregates. In this work, we describe a set of mathematical instruments useful for such analysis and for the interpretation of observed data. These methods combine numerical estimation of the digestion rate and approximation of its high-order derivatives, modelling of fragmentation events from a mixture of Poisson processes associated with differentiated protein species, differential equation techniques for estimating the mixture parameters, an iterative predictor-corrector algorithm for describing the flow diagram along the cascade process, and least squares procedures with minimum variance estimates. The models are formulated, compared with data, and successively refined to better match experimental observations. By applying these procedures, as well as improved versions of previously developed algorithms, it has been possible to model, for two kinds of bacterially produced aggregation-prone recombinant proteins, the cascade digestion process, which has revealed intriguing features of the IB-forming polypeptides.
Pressure algorithm for elliptic flow calculations with the PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.; Pope, S. B.; Mongia, H. C.
1991-01-01
An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.
Discrete bat algorithm for optimal problem of permutation flow shop scheduling.
Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang
2014-01-01
A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many subproblems, and the NEH heuristic is then introduced to solve each subproblem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem.
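A compact sketch of the NEH heuristic that the DBA applies to its subproblems, together with the standard completion-time recursion for permutation flow shop makespan; the problem sizes below are illustrative:

```python
import numpy as np

def makespan(seq, p):
    """Completion-time recursion for a permutation flow shop.
    p[j, m] = processing time of job j on machine m."""
    c = np.zeros(p.shape[1])
    for j in seq:
        c[0] += p[j, 0]
        for m in range(1, p.shape[1]):
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c[-1]

def neh(p):
    """NEH heuristic: order jobs by decreasing total processing time, then
    insert each job at the position that minimizes the partial makespan."""
    order = np.argsort(-p.sum(axis=1))
    seq = [int(order[0])]
    for j in order[1:]:
        seq = min((seq[:pos] + [int(j)] + seq[pos:] for pos in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

rng = np.random.default_rng(4)
times = rng.integers(1, 20, size=(8, 4))      # 8 jobs x 4 machines
seq, cmax = neh(times)
print(seq, cmax)
```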
NASA Astrophysics Data System (ADS)
Burns, W. Robert
Since the early 1970s, research in airborne laser systems has been the subject of continued interest. Airborne laser applications depend on being able to propagate a near diffraction-limited laser beam from an airborne platform. Turbulent air flowing over the aircraft produces density fluctuations through which the beam must propagate. Because the index of refraction of the air is directly related to the density, the turbulent flow imposes aberrations on the beam passing through it. This problem is referred to as Aero-Optics. Aero-Optics is recognized as a major technical issue that needs to be solved before airborne optical systems can become routinely fielded. This dissertation research specifically addresses an approach to mitigating the deleterious effects imposed on an airborne optical system by aero-optics. A promising technology is adaptive optics: a feedback control method that measures optical aberrations and imprints the conjugate aberrations onto an outgoing beam. The challenge is that it is a computationally difficult problem, since aero-optic disturbances are on the order of kilohertz for practical applications. High control loop frequencies and high disturbance frequencies mean that adaptive-optic systems are sensitive to latency in sensors, mirrors, amplifiers, and computation. These latencies build up to result in a dramatic reduction in the system's effective bandwidth. This work presents two variations of an algorithm that uses model reduction and data-driven predictors to estimate the evolution of measured wavefronts over a short temporal horizon and thus compensate for feedback latency. The efficacy of the two methods is compared in this research and evaluated against similar algorithms that have been previously developed. The best version achieved over 75% disturbance rejection in simulation in the most optically active flow region in the wake of a turret, considerably outperforming conventional approaches. The algorithm is shown to be insensitive to changes in flow condition and stable in the presence of small latency uncertainty. Consideration is given to practical implementation of the algorithms as well as computational requirement scaling.
Inference in the brain: Statistics flowing in redundant population codes
Pitkow, Xaq; Angelaki, Dora E
2017-01-01
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors.
Assimilating Eulerian and Lagrangian data in traffic-flow models
NASA Astrophysics Data System (ADS)
Xia, Chao; Cochrane, Courtney; DeGuire, Joseph; Fan, Gaoyang; Holmes, Emma; McGuirl, Melissa; Murphy, Patrick; Palmer, Jenna; Carter, Paul; Slivinski, Laura; Sandstede, Björn
2017-05-01
Data assimilation of traffic flow remains a challenging problem. One difficulty is that data come from different sources ranging from stationary sensors and camera data to GPS and cell phone data from moving cars. Sensors and cameras give information about traffic density, while GPS data provide information about the positions and velocities of individual cars. Previous methods for assimilating Lagrangian data collected from individual cars relied on specific properties of the underlying computational model or its reformulation in Lagrangian coordinates. These approaches make it hard to assimilate both Eulerian density and Lagrangian positional data simultaneously. In this paper, we propose an alternative approach that allows us to assimilate both Eulerian and Lagrangian data. We show that the proposed algorithm is accurate and works well in different traffic scenarios and regardless of whether ensemble Kalman or particle filters are used. We also show that the algorithm is capable of estimating parameters and assimilating real traffic observations and synthetic observations obtained from microscopic models.
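For readers unfamiliar with the filtering step, the following is a minimal sketch of a stochastic ensemble Kalman filter analysis, not the authors' code: it assumes a linear observation operator H whose rows extract both Eulerian cell densities and Lagrangian car positions from an augmented state vector, so that both data types are assimilated in a single update; all names are illustrative.

    import numpy as np

    def enkf_update(ensemble, obs, obs_cov, H):
        """Stochastic EnKF analysis step.
        ensemble: (n_ens, n_state); obs: (n_obs,); H: (n_obs, n_state)."""
        n_ens = ensemble.shape[0]
        X = ensemble - ensemble.mean(axis=0)           # state anomalies
        Y = X @ H.T                                    # observation-space anomalies
        P_yy = Y.T @ Y / (n_ens - 1) + obs_cov         # innovation covariance
        P_xy = X.T @ Y / (n_ens - 1)                   # state-observation covariance
        K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
        noise = np.random.multivariate_normal(np.zeros(len(obs)), obs_cov, n_ens)
        return ensemble + (obs + noise - ensemble @ H.T) @ K.T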
Energy functions for regularization algorithms
NASA Technical Reports Server (NTRS)
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, for tasks such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations and invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that, to avoid the systematic underestimation of curvature in planar curve fitting, circles must be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
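To make the condition concrete, here is a worked contrast using standard curve energies (the notation is ours, not the paper's). The classical second-order stabilizer penalizes curvature itself,

    E_2[C] = \int_C \kappa(s)^2 \, ds,

which is minimized by straight lines and therefore systematically underestimates curvature when fitting planar curves. A stabilizer of the kind the paper calls for instead penalizes the variation of curvature,

    E_3[C] = \int_C \left( \frac{d\kappa}{ds} \right)^{2} ds,

which vanishes exactly on circles (constant κ), making circles the curves of maximum smoothness.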
Time-delayed chameleon: Analysis, synchronization and FPGA implementation
NASA Astrophysics Data System (ADS)
Rajagopal, Karthikeyan; Jafari, Sajad; Laarem, Guessas
2017-12-01
In this paper we report a time-delayed chameleon-like chaotic system which can belong to different families of chaotic attractors depending on the choice of parameters. Such a characteristic of self-excited and hidden chaotic flows in a simple 3D system with time delay has not been reported earlier. The dynamics of the proposed time-delayed systems are analysed in time-delay space and parameter space. A novel adaptive modified functional projective lag synchronization algorithm is derived for synchronizing identical time-delayed chameleon systems with uncertain parameters. The proposed time-delayed systems and the synchronization algorithm, with controllers and parameter estimates, are then implemented in FPGA using hardware-software co-simulation, and the results are presented.
NASA Astrophysics Data System (ADS)
Leitão, J. P.; Carbajal, J. P.; Rieckermann, J.; Simões, N. E.; Sá Marques, A.; de Sousa, L. M.
2018-01-01
The activation of available in-sewer storage volume has been suggested as a low-cost flood and combined sewer overflow mitigation measure. However, it is currently unknown which attributes make an objective function suitable for identifying the best locations for flow control devices, and what impact those attributes have on the results. In this study, we present a novel location model and an efficient algorithm to identify the best location(s) to install flow limiters. The model is a screening tool that does not require hydraulic simulations, considering steady-state rather than simplistic static flow conditions. It also maximises in-sewer storage according to different reward functions that take into account the potential impact of flow control device failure. We demonstrate its usefulness on two real sewer networks, for which an in-sewer storage potential of approximately 2,000 m3 and 500 m3 was estimated with five flow control devices installed.
The Current Status of Unsteady CFD Approaches for Aerodynamic Flow Control
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Singer, Bart A.; Yamaleev, Nail; Vatsa, Veer N.; Viken, Sally A.; Atkins, Harold L.
2002-01-01
An overview of the current status of time dependent algorithms is presented. Special attention is given to algorithms used to predict fluid actuator flows, as well as other active and passive flow control devices. Capabilities for the next decade are predicted, and principal impediments to the progress of time-dependent algorithms are identified.
Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system
NASA Astrophysics Data System (ADS)
Duran, Ahmet; Tuncel, Mehmet
2016-10-01
It is important to have a scalable parallel numerical parameter optimization algorithm for dynamical systems used in financial applications, where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on runs of up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example in the presence of low volatility, high volatility, and a stock market price at a discount/premium to its net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least-squares error and the maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi
2011-08-01
The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault-tolerance property, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space, with low communication costs that are in some cases optimal. Another evaluated parameter, the vulnerability index, is then considered as a criterion for estimating the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-extended Darcy model
NASA Astrophysics Data System (ADS)
Markowich, Peter A.; Titi, Edriss S.; Trabelsi, Saber
2016-04-01
In this paper we introduce and analyze an algorithm for continuous data assimilation for a three-dimensional Brinkman-Forchheimer-extended Darcy (3D BFeD) model of porous media. This model is believed to be accurate when the flow velocity is too large for Darcy’s law to be valid, and additionally the porosity is not too small. The algorithm is inspired by ideas developed for designing finite-parameters feedback control for dissipative systems. It aims to obtain improved estimates of the state of the physical system by incorporating deterministic or noisy measurements and observations. Specifically, the algorithm involves a feedback control that nudges the large scales of the approximate solution toward those of the reference solution associated with the spatial measurements. In the first part of the paper, we present a few results of existence and uniqueness of weak and strong solutions of the 3D BFeD system. The second part is devoted to the convergence analysis of the data assimilation algorithm.
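The feedback control described above has, in the Azouani-Olson-Titi style of such nudging algorithms, the generic form (our paraphrase of the abstract, not the paper's exact equations)

    \frac{dv}{dt} = F(v) - \mu \big( I_h(v) - I_h(u) \big),

where u is the reference solution known only through the interpolant I_h of its spatial measurements, v is the assimilated approximation, and μ > 0 is the nudging parameter that relaxes the large scales of v toward those of u.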
Dynamic Capacity Allocation Algorithms for iNET Link Manager
2014-05-01
… capacity allocation algorithm that can better cope with severe congestion and misbehaving users and traffic flows. We compare the E-LM with the LM baseline algorithm (B-LM) …
Infrared small target detection technology based on OpenCV
NASA Astrophysics Data System (ADS)
Liu, Lei; Huang, Zhijian
2013-05-01
Accurate and fast detection of infrared (IR) dim targets is very important for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described. These algorithms are the traditional two-frame difference method, an improved three-frame difference method, a background-estimate and frame-difference fusion method, and background construction with a neighborhood-mean method. Building on the above work, an infrared target detection software platform developed with OpenCV and MFC is introduced. Three kinds of tracking algorithms are integrated in this software. In order to explain the software clearly, its framework and functions are described in this paper. Finally, experiments are performed on real-life IR images. The whole algorithm implementation process and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. The results prove that the proposed method has satisfying detection effectiveness and robustness, as well as high detection efficiency, and can be used for real-time detection.
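As a concrete illustration, one plausible form of the improved three-frame difference method named above, written against OpenCV's Python bindings (grayscale frames assumed; the threshold value is illustrative):

    import cv2

    def three_frame_difference(f0, f1, f2, thresh=25):
        # AND-ing two successive absolute differences keeps pixels that
        # moved in both intervals, suppressing the "ghost" left behind
        # by the plain two-frame difference.
        d01 = cv2.absdiff(f1, f0)
        d12 = cv2.absdiff(f2, f1)
        moving = cv2.bitwise_and(d01, d12)
        _, mask = cv2.threshold(moving, thresh, 255, cv2.THRESH_BINARY)
        return mask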
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
2006-01-01
The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in the open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: the isentropic-flow equations; the Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction); the Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section); the normal-shock equations; the oblique-shock equations; and the expansion equations.
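As an example of the classical relations such a toolbox solves, the isentropic-flow stagnation ratios for a calorically perfect gas are straightforward to reproduce (a sketch in Python rather than the toolbox's MATLAB; γ = 1.4 for air):

    def isentropic_ratios(mach, gamma=1.4):
        """Stagnation-to-static temperature and pressure ratios."""
        t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach**2   # T0/T
        p_ratio = t_ratio ** (gamma / (gamma - 1.0))    # p0/p
        return t_ratio, p_ratio

    print(isentropic_ratios(2.0))   # T0/T = 1.8, p0/p ~= 7.82 at Mach 2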
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model in mice. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Traffic Flow Management Using Aggregate Flow Models and the Development of Disaggregation Methods
NASA Technical Reports Server (NTRS)
Sun, Dengfeng; Sridhar, Banavar; Grabbe, Shon
2010-01-01
A linear time-varying aggregate traffic flow model can be used to develop Traffic Flow Management (TFM) strategies based on optimization algorithms. However, there are no methods available in the literature to translate these aggregate solutions into actions involving individual aircraft. This paper describes and implements a computationally efficient disaggregation algorithm, which converts an aggregate (flow-based) solution to a flight-specific control action. Numerical results generated by the optimization method and the disaggregation algorithm are presented and illustrated by applying them to generate TFM schedules for a typical day in the U.S. National Airspace System. The results show that the disaggregation algorithm generates control actions for individual flights while keeping the air traffic behavior very close to the optimal solution.
Zhang, Yang; Wang, Yuan; He, Wenbo; Yang, Bin
2014-01-01
A novel Particle Tracking Velocimetry (PTV) algorithm based on the Voronoi Diagram (VD) is proposed, abbreviated VD-PTV. The robustness of VD-PTV for pulsatile flow is verified through a test that includes a widely used artificial flow and a classic reference algorithm. The proposed algorithm is then applied to visualize the flow in an artificial abdominal aortic aneurysm included in a pulsatile circulation system that simulates the aortic blood flow in the human body. Results show that large particles tend to gather at the upstream boundary because of the backflow eddies that follow the pulsation. This qualitative description, together with VD-PTV, lays a foundation for future work that demands high-level quantification.
A Network Flow Approach to the Initial Skills Training Scheduling Problem
2007-12-01
… include (but are not limited to) queuing theory, stochastic analysis and simulation. After the demand schedule has been estimated, it can be … software package has already been purchased and is in use by AFPC, AFPC has requested that the new algorithm be programmed in this language as well … the discussed outputs from those schedules. Required Inputs: A single input file details the students to be scheduled as well as the courses …
Feasibility of a special-purpose computer to solve the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Gritton, E. C.; King, W. S.; Sutherland, I.; Gaines, R. S.; Gazley, C., Jr.; Grosch, C.; Juncosa, M.; Petersen, H.
1978-01-01
Orders-of-magnitude improvements in computer performance can be realized with a parallel array of thousands of fast microprocessors. In this architecture, wiring congestion is minimized by limiting processor communication to nearest neighbors. When certain standard algorithms are applied to a viscous flow problem and existing LSI technology is used, performance estimates of this conceptual design show a dramatic decrease in computational time when compared to the CDC 7600.
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been performed under different types of traffic source.
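The MSTMCF procedure itself is not spelled out in the abstract; the sketch below merely illustrates the two named ingredients with networkx, on a toy topology whose node names and link delays are made up:

    import networkx as nx

    # Toy hybrid access network; edge weights stand in for link delays.
    G = nx.Graph()
    G.add_weighted_edges_from([("olt", "onu1", 2), ("olt", "onu2", 3),
                               ("onu1", "ap1", 1), ("onu2", "ap1", 4),
                               ("onu1", "ap2", 5), ("onu2", "ap2", 1)])

    # Ingredient 1: a minimum spanning tree as the routing backbone.
    mst = nx.minimum_spanning_tree(G)

    # Ingredient 2: route traffic over the backbone as a min-cost flow.
    D = nx.DiGraph()
    for u, v, w in mst.edges(data="weight"):
        D.add_edge(u, v, weight=w, capacity=10)
        D.add_edge(v, u, weight=w, capacity=10)
    flow = nx.max_flow_min_cost(D, "olt", "ap2")
    print(nx.cost_of_flow(D, flow))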
Liu, Ruolin; Dickerson, Julie
2017-11-01
We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracy. In an evaluation on a real data set, the transcript expression estimated by Strawberry has the highest correlation with NanoString probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
Automatic vasculature identification in coronary angiograms by adaptive geometrical tracking.
Xiao, Ruoxiu; Yang, Jian; Goyal, Mahima; Liu, Yue; Wang, Yongtian
2013-01-01
Owing to the uneven distribution of contrast agent and the perspective projection of X-ray imaging, the vasculature in angiographic images has low contrast and is generally superimposed on other organic tissues; it is therefore very difficult to identify the vasculature and quantitatively estimate blood flow directly from angiographic images. In this paper, we propose a fully automatic algorithm named adaptive geometrical vessel tracking (AGVT) for coronary artery identification in X-ray angiograms. Initially, a ridge enhancement (RE) image is obtained utilizing multiscale Hessian information. Then, automatic initialization procedures, including seed point detection and initial direction determination, are performed on the RE image. The extracted ridge points can be adjusted to the geometrical centerline points adaptively through diameter estimation. Bifurcations are identified by discriminating the connecting relationships of the tracked ridge points. Finally, all the tracked centerlines are merged and smoothed by classifying the connecting components on the vascular structures. Synthetic angiographic images and clinical angiograms are used to evaluate the performance of the proposed algorithm. The proposed algorithm is compared with two other vascular tracking techniques in terms of efficiency and accuracy, demonstrating successful application of the proposed segmentation and extraction scheme to vasculature identification.
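The ridge-enhancement step can be approximated with an off-the-shelf multiscale Hessian vesselness filter; the sketch below uses scikit-image's Frangi filter as a stand-in for the paper's RE computation (the scale range and normalization are assumptions, not the authors' settings):

    from skimage.filters import frangi

    def ridge_enhance(angiogram, scales=range(1, 6)):
        # Vessels appear dark on a bright background in X-ray
        # angiograms, hence black_ridges=True.
        img = angiogram.astype(float) / angiogram.max()
        return frangi(img, sigmas=scales, black_ridges=True)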
NASA Astrophysics Data System (ADS)
Harrison, Benjamin; Sandiford, Mike; McLaren, Sandra
2016-04-01
Supervised machine learning algorithms attempt to build a predictive model from empirical data. Their aim is to take a known set of input data along with known responses to the data, and adaptively train a model to generate predictions for new data inputs. A key attraction of their use is the ability to act as function approximators where the definition of an explicit relationship between variables is infeasible. We present a novel means of estimating thermal conductivity using a supervised self-organising map algorithm, trained on about 150 thermal conductivity measurements and using a suite of five electric logs common to 14 boreholes. A key motivation of the study was to supplement the small number of direct measurements of thermal conductivity with the decades of borehole data acquired in the Gippsland Basin, to produce more confident calculations of surface heat flow. A previous attempt to generate estimates from well-log data in the Gippsland Basin using classic petrophysical log interpretation methods was able to produce reasonable synthetic thermal conductivity logs for only four boreholes; the current study has extended this to a further ten boreholes. Interesting outcomes from the study are: the method appears stable at very low sample sizes (< ~100); the SOM permits quantitative analysis of essentially qualitative, uncalibrated well-log data; and the method achieves moderate success at prediction with minimal effort spent tuning the algorithm's parameters.
Chong, Ka Chun; Zee, Benny Chung Ying; Wang, Maggie Haitian
2018-04-10
In an influenza pandemic, arrival times of cases are a proxy for the epidemic size and disease transmissibility. Because surveillance of travelers from infected countries is intense, detection is more rapid and complete than with local surveillance, so travel information can provide a more reliable estimation of transmission parameters. We developed an Approximate Bayesian Computation algorithm to estimate the basic reproduction number (R0), together with the reporting rate and the unobserved epidemic start time, utilizing travel and routine surveillance data in an influenza pandemic. A simulation was conducted to assess the sampling uncertainty. The estimation approach was further applied to the 2009 influenza A/H1N1 pandemic in Mexico as a case study. In the simulations, we showed that the estimation approach was valid and reliable in different simulation settings. We also found estimates of R0 and the reporting rate to be 1.37 (95% Credible Interval [CI]: 1.26-1.42) and 4.9% (95% CI: 0.1%-18%), respectively, in the 2009 influenza pandemic in Mexico, which were robust to variations in the fixed parameters. The estimated R0 was consistent with that in the literature. This method is useful for officials to obtain reliable estimates of disease transmissibility for strategic planning. We suggest that improvements to the flow of reporting for confirmed cases among patients arriving at different countries are required.
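The abstract does not give the algorithm's internals; the following is a generic rejection-ABC sketch for R0 under a uniform prior, with a user-supplied (hypothetical) epidemic simulator and illustrative summary statistics:

    import numpy as np

    def abc_posterior_r0(obs_arrivals, simulate, n_draws=50_000, tol=5.0):
        """Rejection ABC: draw R0 from a Uniform(1, 2.5) prior, simulate
        traveler arrival times with `simulate` (hypothetical), and keep
        draws whose summaries (case count, mean arrival day) are close
        to the observed data."""
        obs_summary = np.array([len(obs_arrivals), np.mean(obs_arrivals)])
        kept = []
        for _ in range(n_draws):
            r0 = np.random.uniform(1.0, 2.5)
            sim = simulate(r0)              # arrival days of exported cases
            if len(sim) == 0:
                continue
            summary = np.array([len(sim), np.mean(sim)])
            if np.linalg.norm(summary - obs_summary) < tol:
                kept.append(r0)
        return np.array(kept)               # approximate posterior samples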
A new algorithm for grid-based hydrologic analysis by incorporating stormwater infrastructure
NASA Astrophysics Data System (ADS)
Choi, Yosoon; Yi, Huiuk; Park, Hyeong-Dong
2011-08-01
We developed a new algorithm, the Adaptive Stormwater Infrastructure (ASI) algorithm, to incorporate ancillary data sets related to stormwater infrastructure into the grid-based hydrologic analysis. The algorithm simultaneously considers the effects of the surface stormwater collector network (e.g., diversions, roadside ditches, and canals) and underground stormwater conveyance systems (e.g., waterway tunnels, collector pipes, and culverts). The surface drainage flows controlled by the surface runoff collector network are superimposed onto the flow directions derived from a DEM. After examining the connections between inlets and outfalls in the underground stormwater conveyance system, the flow accumulation and delineation of watersheds are calculated based on recursive computations. Application of the algorithm to the Sangdong tailings dam in Korea revealed superior performance to that of a conventional D8 single-flow algorithm in terms of providing reasonable hydrologic information on watersheds with stormwater infrastructure.
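For context, the conventional D8 single-flow step that ASI builds on assigns each cell the direction of steepest descent; a minimal numpy version follows (ESRI direction codes; in the ASI setting, infrastructure links such as inlet-outfall pairs would then overwrite the derived directions):

    import numpy as np

    # Neighbor offsets with ESRI D8 codes: E=1, SE=2, S=4, SW=8,
    # W=16, NW=32, N=64, NE=128 (row index increases southward).
    OFFSETS = [(0, 1, 1), (1, 1, 2), (1, 0, 4), (1, -1, 8),
               (0, -1, 16), (-1, -1, 32), (-1, 0, 64), (-1, 1, 128)]

    def d8_directions(dem):
        rows, cols = dem.shape
        fdir = np.zeros((rows, cols), dtype=int)
        for i in range(rows):
            for j in range(cols):
                best, code = 0.0, 0
                for di, dj, c in OFFSETS:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        # Distance-weighted drop to each neighbor.
                        drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                        if drop > best:
                            best, code = drop, c
                fdir[i, j] = code
        return fdir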
NASA Astrophysics Data System (ADS)
Moeys, J.; Larsbo, M.; Bergström, L.; Brown, C. D.; Coquet, Y.; Jarvis, N. J.
2012-07-01
Estimating pesticide leaching risks at the regional scale requires the ability to completely parameterise a pesticide fate model using only survey data, such as soil and land-use maps. Such parameterisations usually rely on a set of lookup tables and (pedo)transfer functions, relating elementary soil and site properties to model parameters. The aim of this paper is to describe and test a complete set of parameter estimation algorithms developed for the pesticide fate model MACRO, which accounts for preferential flow in soil macropores. We used tracer monitoring data from 16 lysimeter studies, carried out in three European countries, to evaluate the ability of MACRO and this "blind parameterisation" scheme to reproduce measured solute leaching at the base of each lysimeter. We focused on the prediction of early tracer breakthrough due to preferential flow, because this is critical for pesticide leaching. We then calibrated a selected number of parameters in order to assess to what extent the prediction of water and solute leaching could be improved. Our results show that water flow was generally reasonably well predicted (median model efficiency, ME, of 0.42). Although the general pattern of solute leaching was reproduced well by the model, the overall model efficiency was low (median ME = -0.26) due to errors in the timing and magnitude of some peaks. Preferential solute leaching at early pore volumes was also systematically underestimated. Nonetheless, the ranking of soils according to solute loads at early pore volumes was reasonably well estimated (concordance correlation coefficient, CCC, between 0.54 and 0.72). We also found that ignoring macropore flow leads to a significant deterioration in the ability of the model to reproduce the observed leaching pattern, and especially the early breakthrough in some soils. Finally, the calibration procedure showed that improving the estimation of solute transport parameters is probably more important than the estimation of water flow parameters. Overall, the results are encouraging for the use of this modelling set-up to estimate pesticide leaching risks at the regional scale, especially where the objective is to identify vulnerable soils and "source" areas of contamination.
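The skill scores quoted above are standard; for reference, a minimal implementation of the Nash-Sutcliffe model efficiency (ME) and Lin's concordance correlation coefficient (CCC):

    import numpy as np

    def model_efficiency(obs, sim):
        """Nash-Sutcliffe ME: 1 is perfect; < 0 means the model is
        worse than simply predicting the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def concordance_cc(x, y):
        """Lin's CCC: agreement with the 1:1 line, not just correlation."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)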
Algorithm for calculating turbine cooling flow and the resulting decrease in turbine efficiency
NASA Technical Reports Server (NTRS)
Gauntner, J. W.
1980-01-01
An algorithm is presented for calculating both the quantity of compressor bleed flow required to cool the turbine and the decrease in turbine efficiency caused by the injection of cooling air into the gas stream. The algorithm, which is intended for an axial-flow, air-breathing gas turbine engine, takes the form of a subroutine that can be called from a properly written thermodynamic cycle code. Ten different cooling configurations are available for each row of cooled airfoils in the turbine. Results from the algorithm are substantiated by comparison with flows predicted by major engine manufacturers for given bulk metal temperatures and given cooling configurations. A list of definitions for the terms in the subroutine is presented.
Karvounis, E C; Tsakanikas, V D; Fotiou, E; Fotiadis, D I
2010-01-01
The paper proposes a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and reuse of models of blood flow, mass transport and plaque formation exported by ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in easy-to-handle 3D representations. The platform incorporates efficient algorithms which are able to perform blood flow simulation. In addition, atherosclerotic plaque development is estimated taking into account morphological, flow and genetic factors. ART-ML provides an XML format that enables the representation and management of embedded models within the ARTool platform and the storage and interchange of well-defined information. This approach facilitates model creation, model exchange, model reuse and result evaluation.
ECOUL: an interactive computer tool to study hydraulic behavior of swelling and rigid soils
NASA Astrophysics Data System (ADS)
Perrier, Edith; Garnier, Patricia; Leclerc, Christian
2002-11-01
ECOUL is an interactive, didactic software package which simulates vertical water flow in unsaturated soils. End-users are given an easy-to-use tool to predict the evolution of the soil water profile, with a large range of possible boundary conditions, through a classical numerical solution scheme for the Richards equation. Soils must be characterized by water retention curves and hydraulic conductivity curves, the form of which can be chosen among different analytical expressions from the literature. When the parameters are unknown, an inverse method is provided to estimate them from available experimental flow data. A significant original feature of the software is the inclusion of recent algorithms extending the water flow model to deal with deforming porous media: widespread swelling soils, whose volume varies as a function of water content, must be described by a third hydraulic characteristic property, the deformation curve. Again, estimation of the parameters by means of inverse procedures and visualization facilities enables exploration, understanding and then prediction of soil hydraulic behavior under various experimental conditions.
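For reference, the one-dimensional Richards equation solved by such codes, in mixed form (z positive upward; θ volumetric water content, h pressure head, K(h) unsaturated hydraulic conductivity; the sink term S is our addition for generality, not stated in the abstract):

    \frac{\partial \theta(h)}{\partial t} = \frac{\partial}{\partial z} \left[ K(h) \left( \frac{\partial h}{\partial z} + 1 \right) \right] - S(z, t).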
An effective non-rigid registration approach for ultrasound image based on "demons" algorithm.
Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong; Tian, Jiawei
2013-06-01
Medical image registration is an important component of computer-aided diagnosis systems in diagnostics, therapy planning, and guidance of surgery. Because of its low signal-to-noise ratio (SNR), ultrasound (US) image registration is a difficult task. In this paper, a fully automatic non-rigid image registration algorithm based on the demons algorithm is proposed for registration of ultrasound images. In the proposed method, an "inertia force" derived from the local motion trend of pixels in a Moore neighborhood system is produced and integrated into the optical flow equation to estimate the demons force, which helps handle the speckle noise and preserve the geometric continuity of US images. In the experiments, a series of US images and several similarity metrics are used to evaluate the performance. The experimental results demonstrate that the proposed method can register ultrasound images efficiently, quickly, and automatically, and is robust to noise.
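For orientation, the classical demons force that the optical-flow equation yields (Thirion's formulation; the paper's contribution is the additional inertia force, which is not reproduced here) displaces each pixel by

    \mathbf{u} = \frac{(m - f)\,\nabla f}{\|\nabla f\|^{2} + (m - f)^{2}},

where f is the static image, m the moving image, and ∇f the static-image gradient; the (m − f)² term in the denominator stabilizes the update where the gradient vanishes.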
Automated contact angle estimation for three-dimensional X-ray microtomography data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Katherine A.; Moriarty, Dylan; Yoon, Hongkyu
2015-11-10
Multiphase flow in capillary regimes is a fundamental process in a number of geoscience applications. The ability to accurately define wetting characteristics of porous media can have a large impact on numerical models. In this paper, a newly developed automated three-dimensional contact angle algorithm is described and applied to high-resolution X-ray microtomography data from multiphase bead pack experiments with varying wettability characteristics. The algorithm calculates the contact angle by finding the angle between planes fit to each solid/fluid and fluid/fluid interface in the region surrounding each solid/fluid/fluid contact point. Results show that the algorithm is able to reliably compute contact angles using the experimental data. The in situ contact angles are typically larger than flat-surface laboratory measurements using the same material. Furthermore, wetting characteristics in mixed-wet systems also change significantly after displacement cycles.
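A minimal sketch of the geometric core, fitting least-squares planes by SVD and taking the angle between their normals (orienting the normals consistently into the wetting phase, which the paper's algorithm must do to distinguish angles above and below 90 degrees, is omitted here):

    import numpy as np

    def fit_plane_normal(points):
        # The right singular vector with the smallest singular value of
        # the centered cloud is the least-squares plane normal.
        centered = points - points.mean(axis=0)
        return np.linalg.svd(centered)[2][-1]

    def contact_angle(solid_fluid_pts, fluid_fluid_pts):
        n1 = fit_plane_normal(solid_fluid_pts)
        n2 = fit_plane_normal(fluid_fluid_pts)
        c = np.clip(abs(np.dot(n1, n2)), -1.0, 1.0)
        return np.degrees(np.arccos(c))   # folded into [0, 90] degrees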
Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F
2009-03-01
Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.
Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.
Pathak, Biswajit; Boruah, Bosanta R
2017-12-01
Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014)] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
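A dense least-squares version of Southwell-geometry zonal reconstruction, for illustration only (production sensors use sparse or iterative solvers; averaging slopes between neighbouring sample points is the Southwell convention):

    import numpy as np

    def reconstruct_wavefront(sx, sy, spacing=1.0):
        """Least-squares zonal estimate of phase from x/y slope maps."""
        n, m = sx.shape
        rows, cols, vals, b = [], [], [], []
        eq = 0

        def add(i1, j1, i2, j2, rhs):
            nonlocal eq
            rows.extend([eq, eq])
            cols.extend([i1 * m + j1, i2 * m + j2])
            vals.extend([-1.0, 1.0])
            b.append(rhs)
            eq += 1

        for i in range(n):                  # phi[i, j+1] - phi[i, j]
            for j in range(m - 1):
                add(i, j, i, j + 1, spacing * 0.5 * (sx[i, j] + sx[i, j + 1]))
        for i in range(n - 1):              # phi[i+1, j] - phi[i, j]
            for j in range(m):
                add(i, j, i + 1, j, spacing * 0.5 * (sy[i, j] + sy[i + 1, j]))

        A = np.zeros((eq, n * m))
        A[rows, cols] = vals
        phi, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
        return phi.reshape(n, m) - phi.mean()   # remove arbitrary piston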
Predictability of the Lagrangian Motion in the Upper Ocean
NASA Astrophysics Data System (ADS)
Piterbarg, L. I.; Griffa, A.; Griffa, A.; Mariano, A. J.; Ozgokmen, T. M.; Ryan, E. H.
2001-12-01
The complex non-linear dynamics of the upper ocean leads to chaotic behavior of drifter trajectories in the ocean. Our study is focused on estimating the predictability limit for the position of an individual Lagrangian particle or a particle cluster based on the knowledge of mean currents and observations of nearby particles (predictors). The Lagrangian prediction problem, besides being a fundamental scientific problem, is also of great importance for practical applications such as search and rescue operations and for modeling the spread of fish larvae. A stochastic multi-particle model for the Lagrangian motion has been rigorously formulated and is a generalization of the well known "random flight" model for a single particle. Our model is mathematically consistent and includes a few easily interpreted parameters, such as the Lagrangian velocity decorrelation time scale, the turbulent velocity variance, and the velocity decorrelation radius, that can be estimated from data. The top Lyapunov exponent for an isotropic version of the model is explicitly expressed as a function of these parameters enabling us to approximate the predictability limit to first order. Lagrangian prediction errors for two new prediction algorithms are evaluated against simple algorithms and each other and are used to test the predictability limits of the stochastic model for isotropic turbulence. The first algorithm is based on a Kalman filter and uses the developed stochastic model. Its implementation for drifter clusters in both the Tropical Pacific and Adriatic Sea, showed good prediction skill over a period of 1-2 weeks. The prediction error is primarily a function of the data density, defined as the number of predictors within a velocity decorrelation spatial scale from the particle to be predicted. The second algorithm is model independent and is based on spatial regression considerations. Preliminary results, based on simulated, as well as, real data, indicate that it performs better than the Kalman-based algorithm in strong shear flows. An important component of our research is the optimal predictor location problem; Where should floats be launched in order to minimize the Lagrangian prediction error? Preliminary Lagrangian sampling results for different flow scenarios will be presented.
Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeo, U. J.; Supple, J. R.; Franich, R. D.
2013-10-15
Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established. Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, “A novel methodology for 3D deformable dosimetry,” Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations. Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7.5 mm across algorithms for scenarios I to III, respectively. The greatest accuracy was exhibited by the original Horn and Schunck optical flow algorithm. In this case, for scenario III (erased FMs not contributing to driving the DIR calculation), the mean error was half that of the modified demons algorithm (which exhibited the greatest error), across all deformations. Some algorithms failed to reproduce the geometry at all, while others accurately deformed high contrast features but not low-contrast regions, indicating poor interpolation between landmarks. Conclusions: The accuracy of DIR algorithms was quantitatively evaluated using a tissue-equivalent, mass- and density-conserving DEFGEL phantom. For the model studied, optical flow algorithms performed better than demons algorithms, with the original Horn and Schunck performing best. The degree of error is influenced more by the magnitude of displacement than the geometric complexity of the deformation. As might be expected, deformation is estimated less accurately for low-contrast regions than for high-contrast features, and the method presented here allows quantitative analysis of the differences. The evaluation of registration accuracy through observation of the same high contrast features that drive the DIR calculation is shown to be circular and hence misleading.
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…
NASA Astrophysics Data System (ADS)
Assari, Amin; Mohammadi, Zargham
2017-09-01
Karst systems show high spatial variability of hydraulic parameters over small distances, and this makes their modeling a difficult task with several uncertainties. Interconnections of fractures play a major role in the transport of groundwater, but many of the stochastic methods in use do not have the capability to reproduce these complex structures. A methodology is presented for the quantification of tortuosity using the single normal equation simulation (SNESIM) algorithm and a groundwater flow model. A training image was produced based on the statistical parameters of the fractures and then used in the simulation process. The SNESIM algorithm was used to generate 75 realizations of the four classes of fractures in a karst aquifer in Iran. The results from six dye tracing tests were used to assign hydraulic conductivity values to each class of fractures. In the next step, the MODFLOW-CFP and MODPATH codes were consecutively implemented to compute the groundwater flow paths. The 9,000 flow paths obtained from the MODPATH code were further analyzed to calculate the tortuosity factor. Finally, the hydraulic conductivity values calculated from the dye tracing experiments were refined using the actual flow paths of groundwater. The key outcomes of this research are: (1) a methodology for the quantification of tortuosity; (2) hydraulic conductivities that are incorrectly estimated (biased low) by empirical equations that assume Darcian (laminar) flow with parallel rather than tortuous streamlines; and (3) an understanding of the scale-dependence and non-normal distributions of tortuosity.
Modeling chemical gradients in sediments under losing and gaining flow conditions: The GRADIENT code
NASA Astrophysics Data System (ADS)
Boano, Fulvio; De Falco, Natalie; Arnon, Shai
2018-02-01
Interfaces between sediments and water bodies often represent biochemical hotspots for nutrient reactions and are characterized by steep concentration gradients of different reactive solutes. Vertical profiles of these concentrations are routinely collected to obtain information on nutrient dynamics, and simple codes have been developed to analyze these profiles and determine the magnitude and distribution of reaction rates within sediments. However, existing publicly available codes do not consider the potential contribution of water flow in the sediments to nutrient transport, and their application to field sites with significant water-borne nutrient fluxes may lead to large errors in the estimated reaction rates. To fill this gap, this work presents GRADIENT, a novel algorithm for evaluating distributions of reaction rates from observed concentration profiles. GRADIENT is a Matlab code that extends a previously published framework to include the role of nutrient advection, and provides robust estimates of reaction rates in sediments with significant water flow. This work discusses the theoretical basis of the method and shows its performance by comparing the results to a series of synthetic data and to laboratory experiments. The results clearly show that in systems with losing or gaining fluxes, the inclusion of such fluxes is critical for estimating local and overall reaction rates in sediments.
Application of fault factor method to fault detection and diagnosis for space shuttle main engine
NASA Astrophysics Data System (ADS)
Cha, Jihyoung; Ha, Chulsu; Ko, Sangho; Koo, Jaye
2016-09-01
This paper deals with an application of the multiple linear regression algorithm to fault detection and diagnosis for the space shuttle main engine (SSME) during steady state. In order to develop the algorithm, energy balance equations, which balance the relations among pressure, mass flow rate and power at various locations within the SSME, are obtained. Then, using measurement data for some important parameters of the engine, fault factors which reflect the deviation of each equation from the normal state are estimated. The probable location of each fault and the level of severity can be obtained from the estimated fault factors. This process is numerically demonstrated for the SSME at 104% Rated Propulsion Level (RPL) using simulated measurement data from mathematical models of the engine. The result of the current study is particularly important considering that recently developed reusable Liquid Rocket Engines (LREs) have staged-combustion cycles similar to that of the SSME.
Augmented Topological Descriptors of Pore Networks for Material Science.
Ushizima, D; Morozov, D; Weber, G H; Bianchi, A G C; Sethian, J A; Bethel, E W
2012-12-01
One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO2 in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity and permeability estimates, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating the permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect the physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction, making novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodisperse material.
Local multiplicative Schwarz algorithms for convection-diffusion equations
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Sarkis, Marcus
1995-01-01
We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.
NASA Astrophysics Data System (ADS)
Kavka, P.; Jeřábek, J.; Strouhal, L.
2016-12-01
The contribution presents SMODERP, a numerical model used for the calculation and prediction of surface runoff and soil erosion from agricultural land. The physically based model includes the processes of infiltration (Philip equation), surface runoff routing (kinematic-wave-based equation), surface retention, surface roughness and vegetation impact on runoff. The model is being developed at the Department of Irrigation, Drainage and Landscape Engineering, Civil Engineering Faculty, CTU in Prague. A 2D version of the model was introduced in recent years. The script uses ArcGIS system tools for data preparation, and the physical relations are implemented through Python scripts; the main computing part is stand-alone, operating on numpy arrays. Flow direction is calculated with the steepest-descent algorithm and with a multiple-flow algorithm. Sheet flow is described by a modified kinematic wave equation. Parameters for five different soil textures were calibrated on a set of one hundred measurements performed with laboratory and field rainfall simulators. Spatially distributed models make it possible to estimate not only surface runoff but also flow in the rills. Development of the rills is based on critical shear stress and critical velocity. For modelling the rills a specific sub-model was created; this sub-model uses the Manning formula for flow estimation. Flow in ditches and streams is also computed. Numerical stability of the model is controlled by the Courant criterion. The spatial scale is fixed, while the time step is dynamic and depends on the actual discharge. The model is used in the framework of the project "Variability of Short-term Precipitation and Runoff in Small Czech Drainage Basins and its Influence on Water Resources Management". The main goal of the project is to elaborate a methodology and an online utility for deriving short-term design precipitation series, which could be utilized by a broad community of scientists, state administration, as well as design planners. The methodology will account for the choice of the simulation model: several representatives of practically oriented models (SMODERP is one of them) will be tested for the sensitivity of their outputs to the selected precipitation scenario, compared to the variability connected with other input uncertainties. The research was supported by grant QJ1520265 of the Czech Ministry of Agriculture.
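The rill sub-model's discharge relation is the standard Manning formula; a one-liner for reference (SI units; the example numbers are illustrative, not calibrated SMODERP values):

    def manning_discharge(area, hydraulic_radius, slope, n):
        """Q = (1/n) * A * R^(2/3) * S^(1/2) (SI units)."""
        return area / n * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

    # e.g. a small rill: A = 0.002 m^2, R = 0.01 m, S = 0.05, n = 0.03
    print(manning_discharge(0.002, 0.01, 0.05, 0.03))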
Swirling Flow Computation at the Trailing Edge of Radial-Axial Hydraulic Turbines
NASA Astrophysics Data System (ADS)
Susan-Resiga, Romeo; Muntean, Sebastian; Popescu, Constantin
2016-11-01
Modern hydraulic turbines require runners optimized over a range of operating points with respect to minimum weighted-average draft tube losses and/or flow instabilities. Tractable optimization methodologies must include realistic estimations of the swirling flow exiting the runner and further ingested by the draft tube, prior to runner design. The paper presents a new mathematical model and the associated numerical algorithm for computing the swirling flow at the trailing edge of a Francis turbine runner operated at arbitrary discharge. The general turbomachinery throughflow theory is particularized for an arbitrary hub-to-shroud line in the meridian half-plane, and the resulting boundary value problem is solved with the finite element method. The results obtained with the present model are validated against full 3D runner flow computations over a range of discharge values. The mathematical model incorporates the full information on the relative flow direction, as well as the curvatures of the hub-to-shroud line and of the meridian streamlines, respectively. It is shown that the flow direction can be frozen within a range of operating points in the neighborhood of the best-efficiency regime.
NASA Technical Reports Server (NTRS)
Bentley, P. B.
1975-01-01
The measurement of the volume flow-rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques that are analogous to the measurement of moving objects by use of a radar. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured and no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods by application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrative of the equipment setup are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated; overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can equally be applied to any other hydrological problem.
NASA Astrophysics Data System (ADS)
Kuntoro, Hadiyan Yusuf; Hudaya, Akhmad Zidni; Dinaryanto, Okto; Majid, Akmal Irfan; Deendarlianto
2016-06-01
Due to the importance of two-phase flow research for industrial safety analysis, many researchers have developed various methods and techniques to study two-phase flow phenomena in industrial cases, such as in the chemical, petroleum and nuclear industries. One of the developing methods and techniques is the image processing technique. This technique is widely used in two-phase flow research due to its non-intrusive capability to process large amounts of visualization data containing many complexities. Moreover, this technique allows direct visual information about the flow to be captured that is difficult to obtain with other methods and techniques. The main objective of this paper is to present an improved image processing algorithm, building on a preceding algorithm, for stratified flow cases. The present algorithm can measure the film thickness (hL) of stratified flow as well as the geometrical properties of the interfacial waves with lower processing time and random-access memory (RAM) usage than the preceding algorithm. The measurement results are also intended to develop a high-quality database of stratified flow, which is currently scanty. In the present work, the measurement results showed satisfactory agreement with previous works.
McVoy, Christopher; Park, Winifred A.; Obeysekera, Jayantha
1996-01-01
Preservation and restoration of the remaining Everglades ecosystem is focused on two aspects: improving upstream water quality and improving 'hydropatterns', the timing, depth, and flow of surface water. Restoration of hydropatterns requires knowledge of the original pre-canal drainage conditions as well as an understanding of the soil, topographic, and vegetation changes that have taken place since canal drainage began in the 1880s. The Natural System Model (NSM), developed by the South Florida Water Management District (SFWMD) and Everglades National Park, uses estimates of pre-drainage vegetation and topography to estimate the pre-drainage hydrologic response of the Everglades. Sources of model uncertainty include: (1) the algorithms, (2) the parameters (particularly those relating to vegetation roughness and evapotranspiration), and (3) errors in the assumed pre-drainage vegetation distribution and pre-drainage topography. Other studies are concentrating on algorithmic and parameter sources of uncertainty. In this study we focus on the NSM output, the predicted hydropattern, and evaluate it by comparison with all available direct and indirect information on pre-drainage hydropatterns. The unpublished and published literature is being searched exhaustively for observations of water depth, flow direction, flow velocity, and hydroperiod during the period prior to and just after drainage (1840-1920). Additionally, a comprehensive map of soils in the Everglades region, prepared in the 1940s by personnel from the University of Florida Agricultural Experiment Station, the U.S. Soil Conservation Service, the U.S. Geological Survey, and the Everglades Drainage District, is being used to identify wetland soils and to infer the spatial distribution of pre-drainage hydrologic conditions. Detailed study of this map and other early soil and vegetation maps, in light of the history of drainage activities, will reveal patterns of change and possible errors in the input to the NSM. Changes in the wetland soils are important because of their effects on topography (soil subsidence) and their role as indicators of hydropattern.
Outlier detection for particle image velocimetry data using a locally estimated noise variance
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, ZhouPing
2017-03-01
This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. The method is an iterative procedure in which each iteration comprises a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), with the weights determined in the outlier detection step by a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces outlier labels for the field. The technical contribution is that, for the first time, the spatially variable threshold is embedded in the modified outlier detector with a locally estimated noise variance within an iterative framework. A spatially variable threshold turns out to be preferable to a single spatially constant threshold in complicated flows such as vortex or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are used to evaluate the performance of the proposed method in comparison with popular validation approaches. The method also proves beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection and over-detection counts. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in the supplementary materials.
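A much-simplified, single-pass sketch of a spatially variable threshold test with a locally estimated noise scale, in the spirit of (but not identical to) the iterative method described above; the window sizes and the threshold k are illustrative choices.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_outliers(u, k=2.0):
    """Flag PIV vectors whose residual to a local median reference
    exceeds a locally estimated noise scale (simplified variant)."""
    u_ref = median_filter(u, size=3)            # local reference field
    resid = np.abs(u - u_ref)
    # local noise scale: median absolute residual in a 5x5 neighborhood
    noise = median_filter(resid, size=5) + 1e-6
    return resid / noise > k                    # spatially variable threshold
```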
A Backward-Lagrangian-Stochastic Footprint Model for the Urban Environment
NASA Astrophysics Data System (ADS)
Wang, Chenghao; Wang, Zhi-Hua; Yang, Jiachuan; Li, Qi
2018-02-01
Built terrains, with their complexity in morphology, high heterogeneity, and anthropogenic impact, impose substantial challenges in Earth-system modelling. In particular, estimation of the source areas and footprints of atmospheric measurements in cities requires realistic representation of the landscape characteristics and flow physics in urban areas, but has hitherto been heavily reliant on large-eddy simulations. In this study, we developed physical parametrization schemes for estimating urban footprints based on the backward-Lagrangian-stochastic algorithm, with the built environment represented by street canyons. The vertical profile of mean streamwise velocity is parametrized for the urban canopy and boundary layer. Flux footprints estimated by the proposed model show reasonable agreement with analytical predictions over flat surfaces without roughness elements, and with experimental observations over sparse plant canopies. Furthermore, comparisons of canyon flow and turbulence profiles and the subsequent footprints were made between the proposed model and large-eddy simulation data. The results suggest that the parametrized canyon wind and turbulence statistics, based on the simple similarity theory used, need to be further improved to yield more realistic urban footprint modelling.
Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin
NASA Astrophysics Data System (ADS)
Otiefy, R. A. H.; Negm, H. M.
2010-12-01
The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress transonic wing-box flutter, which is a flow-structure interaction phenomenon. The unsteady general-frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing-box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on first-order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are taken from previous work. Three different control strategies, a linear quadratic Gaussian (LQG) controller, which combines the linear quadratic regulator (LQR) with the Kalman filter estimator (KFE); an optimal static output feedback (SOF) controller; and a classic feedback controller (CFC), are studied and compared. The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and the norm of Kalman filter estimator gains (NKFEG), respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.
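As a small worked example of the LQR ingredient of the LQG strategy, a sketch using SciPy's Riccati solver; the two-state system matrices below are invented toy numbers, not the wing-box model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: u = -K x minimizes the quadratic cost."""
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
    return np.linalg.solve(R, B.T @ P)     # K = R^{-1} B^T P

# toy flutter-like mode: lightly negatively damped oscillator
A = np.array([[0.0, 1.0], [-4.0, 0.05]])   # illustrative numbers only
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, np.eye(2), np.eye(1))   # stabilizing state-feedback gain
```

In an LQG design, the Kalman filter estimator supplies the state estimate that this gain acts on when only sensor outputs are available.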
Modeling long-term suspended-sediment export from an undisturbed forest catchment
NASA Astrophysics Data System (ADS)
Zimmermann, Alexander; Francke, Till; Elsenbeer, Helmut
2013-04-01
Most estimates of suspended sediment yields from humid, undisturbed, and geologically stable forest environments fall within a range of 5-30 t km-2 a-1. These low natural erosion rates in small headwater catchments (≤ 1 km2) support the common impression that a well-developed forest cover prevents surface erosion. Interestingly, those estimates originate exclusively from areas with prevailing vertical hydrological flow paths. Forest environments dominated by (near-)surface flow paths (overland flow, pipe flow, and return flow) and a fast response to rainfall are, however, not an exceptional phenomenon, yet only very few sediment yields have been estimated for these areas. Not surprisingly, even fewer long-term (≥ 10 years) records exist. In this contribution we present our latest research, which aims at quantifying long-term suspended-sediment export from an undisturbed rainforest catchment prone to frequent overland flow. A key aspect of our approach is the application of machine-learning techniques (Random Forest, Quantile Regression Forest), which allow not only the handling of non-Gaussian data, non-linear relations between predictors and response, and correlations between predictors, but also the assessment of prediction uncertainty. For the current study we provided the machine-learning algorithms exclusively with information from a high-resolution rainfall time series to reconstruct discharge and suspended-sediment dynamics for a 21-year period. The significance of our results is threefold. First, our estimates clearly show that forest cover does not necessarily prevent erosion if wet antecedent conditions and large rainfalls coincide. During these situations, overland flow is widespread and sediment fluxes increase in a non-linear fashion due to the mobilization of new sediment sources. Second, our estimates indicate that annual suspended-sediment yields of the undisturbed forest catchment show large fluctuations. Depending on the frequency of large events, annual suspended-sediment yield varies between 74 and 416 t km-2 a-1. Third, the estimated sediment yields exceed former benchmark values by an order of magnitude and provide evidence that the erosion footprint of undisturbed, forested catchments can be indistinguishable from that of sustainably managed, but hydrologically less responsive, areas. Because of this susceptibility to soil loss we argue that any land use should be avoided in natural erosion hotspots.
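A hedged sketch of the general workflow, driving a Random Forest with rainfall-derived predictors on synthetic data; the feature construction and all numbers are illustrative assumptions (the study's actual predictors, and its Quantile Regression Forest uncertainty estimates, are not reproduced here).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rainfall_features(rain, window=24):
    """Stack simple predictors per time step: current rainfall and an
    antecedent rolling sum (hypothetical feature set)."""
    cum = np.cumsum(rain)
    ante = cum - np.concatenate([np.zeros(window), cum[:-window]])
    return np.column_stack([rain, ante])

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 5.0, size=5000)           # synthetic rainfall series
X = rainfall_features(rain)
ssc = 0.5 * X[:, 1] + rng.normal(0, 5, 5000)    # synthetic sediment response

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ssc)
```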
Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram
2015-01-01
This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using radio-frequency (RF) ultrasound signals. The approach was evaluated on ultrasound RF data acquired with a prototype ultrasound system from an artery-mimicking flow phantom. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall-tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in error margin compared to existing approaches in tracking the arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.
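The abstract does not spell out the GMM formulation, so the following is only a loose illustration of one way a mixture model can localize two wall echoes along an RF scan line; the envelope-weighted sampling scheme is our assumption, not the paper's method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def wall_depths(envelope, depth_axis, n_walls=2, n_samples=2000, rng=None):
    """Fit a GMM to depth samples drawn with probability proportional to
    the (nonnegative) RF envelope; the component means approximate the
    anterior and posterior wall echo depths."""
    rng = rng or np.random.default_rng(0)
    p = envelope / envelope.sum()
    samples = rng.choice(depth_axis, size=n_samples, p=p)
    gmm = GaussianMixture(n_components=n_walls, random_state=0)
    gmm.fit(samples.reshape(-1, 1))
    return np.sort(gmm.means_.ravel())
```

Tracking the two component means across frames would then yield a distension-like waveform as the difference of the wall positions.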
Learning partial differential equations via data discovery and sparse optimization
NASA Astrophysics Data System (ADS)
Schaeffer, Hayden
2017-01-01
We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.
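A minimal sketch of the approach as described, assuming a 1-D field u(t, x) and a small hand-picked candidate library; the actual method constructs a richer data-driven library and a tailored sparse solver rather than plain Lasso.

```python
import numpy as np
from sklearn.linear_model import Lasso

def learn_pde(u, dx, dt, alpha=1e-3):
    """Recover coefficients of u_t = c1*u + c2*u_x + c3*u_xx + c4*u*u_x
    from space-time data u[t, x] by sparse regression over a candidate
    library built from spatial derivatives of the data."""
    u_t = np.gradient(u, dt, axis=0)
    u_x = np.gradient(u, dx, axis=1)
    u_xx = np.gradient(u_x, dx, axis=1)
    # candidate features: nonlinear algebraic combinations of derivatives
    library = np.column_stack([f.ravel() for f in (u, u_x, u_xx, u * u_x)])
    model = Lasso(alpha=alpha, fit_intercept=False).fit(library, u_t.ravel())
    return model.coef_   # nonzero entries identify the active PDE terms
```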
A solution algorithm for fluid–particle flows across all flow regimes
Kong, Bo; Fox, Rodney O.
2017-05-12
Many fluid–particle flows occurring in nature and in technological applications exhibit large variations in the local particle volume fraction. For example, in circulating fluidized beds there are regions where the particles are close-packed as well as very dilute regions where particle–particle collisions are rare. Thus, in order to simulate such fluid–particle systems, it is necessary to design a flow solver that can accurately treat all flow regimes occurring simultaneously in the same flow domain. In this work, a solution algorithm is proposed for this purpose. The algorithm is based on splitting the free-transport flux solver dynamically and locally in the flow. In close-packed to moderately dense regions, a hydrodynamic solver is employed, while in dilute to very dilute regions a kinetic-based finite-volume solver is used in conjunction with quadrature-based moment methods. To illustrate the accuracy and robustness of the proposed solution algorithm, it is implemented in OpenFOAM for particle velocity moments up to second order, and applied to simulate gravity-driven, gas–particle flows exhibiting cluster-induced turbulence. By varying the average particle volume fraction in the flow domain, it is demonstrated that the flow solver can handle seamlessly all flow regimes present in fluid–particle flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Im, Piljae; Munk, Jeffrey D; Gehl, Anthony C
2015-06-01
A research project “Evaluation of Variable Refrigerant Flow (VRF) Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory’s (ORNL’s) Flexible Research Platform” was performed to (1) install and validate the performance of Samsung VRF systems compared with the baseline rooftop unit (RTU) variable-air-volume (VAV) system and (2) evaluate the enhanced control algorithm for the VRF system on the two-story flexible research platform (FRP) in Oak Ridge, Tennessee. Based on the VRF system designed by Samsung and ORNL, the system was installed from February 18 through April 15, 2014. The final commissioning and system optimization were completed on June 2, 2014, and the initial test for system operation was started the following day, June 3, 2014. In addition, the enhanced control algorithm was implemented and updated on June 18. After a series of additional commissioning actions, the energy performance data from the RTU and the VRF system were monitored from July 7, 2014, through February 28, 2015. Data monitoring and analysis were performed for the cooling season and heating season separately, and the calibrated simulation model was developed and used to estimate the energy performance of the RTU and VRF systems. This final report includes discussion of the design and installation of the VRF system, the data monitoring and analysis plan, the cooling season and heating season data analysis, and the building energy modeling study.
NIMBUS-7 ERB MATGEN Science Document
NASA Technical Reports Server (NTRS)
Soule, H. V.
1983-01-01
The ERB algorithms and computer software data flow used to convert sensor data into equivalent radiometric data are described in detail. The NIMBUS satellite location, orientation, and sensor orientation algorithms are given. The computer housekeeping, data flow, and sensor/data status algorithms are also given.
Multiscale computations with a wavelet-adaptive algorithm
NASA Astrophysics Data System (ADS)
Rastigejev, Yevgenii Anatolyevich
A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on the mathematically well established wavelet theory. This allows us to provide error estimates of the solution which are used in conjunction with an appropriate threshold criteria to adapt the collocation grid. The efficient data structures for grid representation as well as related computational algorithms to support grid rearrangement procedure are developed. The algorithm is applied to the simulation of phenomena described by Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of comparable accuracy as the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.
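A toy sketch of the core idea, thresholding wavelet coefficients of a sharp-fronted field to decide where the collocation grid needs refinement; the wavelet family, level, and threshold are illustrative, and the PyWavelets package stands in for the author's implementation.

```python
import numpy as np
import pywt

def significant_coefficients(u, wavelet="db4", level=4, eps=1e-3):
    """Decompose a 1-D field and mark coefficients above a threshold;
    those locations are where the adaptive grid must be refined."""
    coeffs = pywt.wavedec(u, wavelet, level=level)
    masks = [np.abs(c) > eps * np.max(np.abs(u)) for c in coeffs]
    kept = sum(int(m.sum()) for m in masks)
    total = sum(c.size for c in coeffs)
    return masks, kept / total      # retained fraction ~ compression

x = np.linspace(0, 1, 1024)
u = np.tanh(200 * (x - 0.5))        # sharp front needs local refinement
masks, frac = significant_coefficients(u)   # frac is small: few active scales
```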
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Majumdar, Alok
2012-01-01
This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher-fidelity solutions are needed that are not within the capacity of system-level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow, and flow in a driven cavity.
An algorithm to estimate unsteady and quasi-steady pressure fields from velocity field measurements.
Dabiri, John O; Bose, Sanjeeb; Gemmell, Brad J; Colin, Sean P; Costello, John H
2014-02-01
We describe and characterize a method for estimating the pressure field corresponding to velocity field measurements such as those obtained by using particle image velocimetry. The pressure gradient is estimated from a time series of velocity fields for unsteady calculations or from a single velocity field for quasi-steady calculations. The corresponding pressure field is determined based on median polling of several integration paths through the pressure gradient field in order to reduce the effect of measurement errors that accumulate along individual integration paths. Integration paths are restricted to the nodes of the measured velocity field, thereby eliminating the need for measurement interpolation during this step and significantly reducing the computational cost of the algorithm relative to previous approaches. The method is validated by using numerically simulated flow past a stationary, two-dimensional bluff body and a computational model of a three-dimensional, self-propelled anguilliform swimmer to study the effects of spatial and temporal resolution, domain size, signal-to-noise ratio and out-of-plane effects. Particle image velocimetry measurements of a freely swimming jellyfish medusa and a freely swimming lamprey are analyzed using the method to demonstrate the efficacy of the approach when applied to empirical data.
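A compact sketch of the path-polling idea with just two grid-aligned integration path families median-combined at each node; the published method polls many more node-restricted paths, so this is a structural illustration only.

```python
import numpy as np

def _cum(f, axis, h):
    """Cumulative trapezoid-free (rectangle-rule) integral along an axis,
    shifted so integration starts at zero at the first node."""
    c = np.cumsum(f, axis=axis) * h
    return c - np.take(c, [0], axis=axis)

def pressure_from_gradient(px, py, dx, dy):
    """Combine two integration path families over a measured pressure
    gradient (px, py) on a regular grid and take the pointwise median."""
    # family 1: along row 0 in x, then down each column in y
    p1 = _cum(px[0:1, :], 1, dx) + _cum(py, 0, dy)
    # family 2: along column 0 in y, then across each row in x
    p2 = _cum(py[:, 0:1], 0, dy) + _cum(px, 1, dx)
    return np.median(np.stack([p1, p2]), axis=0)
```

Polling many paths and taking the median suppresses the accumulation of measurement error along any single integration path, which is the point of the approach described above.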
Magnitude of flood flows for selected annual exceedance probabilities for streams in Massachusetts
Zarriello, Phillip J.
2017-05-11
The U.S. Geological Survey, in cooperation with the Massachusetts Department of Transportation, determined the magnitude of flood flows at selected annual exceedance probabilities (AEPs) at streamgages in Massachusetts and from these data developed equations for estimating flood flows at ungaged locations in the State. Flood magnitudes were determined for the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEPs at 220 streamgages, 125 of which are in Massachusetts and 95 are in the adjacent States of Connecticut, New Hampshire, New York, Rhode Island, and Vermont. AEP flood flows were computed for streamgages using the expected moments algorithm weighted with a recently computed regional skewness coefficient for New England.

Regional regression equations were developed to estimate the magnitude of floods for selected AEP flows at ungaged sites from 199 selected streamgages and for 60 potential explanatory basin characteristics. AEP flows for 21 of the 125 streamgages in Massachusetts were not used in the final regional regression analysis, primarily because of regulation or redundancy. The final regression equations used generalized least squares methods to account for streamgage record length and correlation. Drainage area, mean basin elevation, and basin storage explained 86 to 93 percent of the variance in flood magnitude from the 50- to 0.2-percent AEPs, respectively. The estimates of AEP flows at streamgages can be improved by using a weighted estimate that is based on the magnitude of the flood and associated uncertainty from the at-site analysis and the regional regression equations. Weighting procedures for estimating AEP flows at an ungaged site on a gaged stream also are provided that improve estimates of flood flows at the ungaged site when hydrologic characteristics do not abruptly change.

Urbanization expressed as the percentage of imperviousness provided some explanatory power in the regional regression; however, it was not statistically significant at the 95-percent confidence level for any of the AEPs examined. The effect of urbanization on flood flows indicates a complex interaction with other basin characteristics. Another complicating factor is the assumption of stationarity, that is, the assumption that annual peak flows exhibit no significant trend over time. The results of the analysis show that stationarity does not prevail at all of the streamgages. About 27 percent of streamgages in Massachusetts and about 42 percent of streamgages in adjacent States with 20 or more years of systematic record used in the study show a significant positive trend at the 95-percent confidence level. The remaining streamgages had both positive and negative trends, but the trends were not statistically significant. Trends were shown to vary over time. In particular, during the past decade (2004–2013), peak flows were persistently above normal, which may give the impression of positive trends. Only continued monitoring will provide the information needed to determine whether recent increases in annual peak flows are a normal oscillation or a true trend.

The analysis used 37 years of additional data obtained since the last comprehensive study of flood flows in Massachusetts. In addition, new methods for computing flood flows at streamgages and regionalization improved estimates of flood magnitudes at gaged and ungaged locations and better defined the uncertainty of the estimates of AEP floods.
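Schematically, the regional equations have the log-linear form fitted below; ordinary least squares stands in for the report's generalized least squares weighting, and the variable names are ours.

```python
import numpy as np

def fit_regional_equation(drainage_area, elevation, storage, q_aep):
    """Log-space regression
        log10(Q) = b0 + b1*log10(DA) + b2*log10(EL) + b3*log10(ST + 1)
    relating an AEP flood flow Q to basin characteristics; OLS is used
    here in place of the report's GLS record-length weighting."""
    X = np.column_stack([
        np.ones_like(drainage_area),
        np.log10(drainage_area),
        np.log10(elevation),
        np.log10(storage + 1.0),   # +1 accommodates zero-storage basins
    ])
    b, *_ = np.linalg.lstsq(X, np.log10(q_aep), rcond=None)
    return b
```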
Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter.
Fanjiang, Yong-Yi; Lu, Shih-Wei
2017-04-10
This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow rate signal and efficiently converts the analog signal, improving the execution performance of the microcontroller unit (MCU). Furthermore, it reduces interference from abnormal noise. It is extremely steady and independent of fluctuations in the flow measurement. Moreover, it calculates the current flow rate signal (in m/s) immediately. The DSC algorithm can be applied to current general MCU firmware platforms without using DSP (digital signal processing) or a high-speed, high-end MCU platform, and signal amplification by hardware reduces the demand for ADC accuracy, which reduces cost.
Novel angle estimation for bistatic MIMO radar using an improved MUSIC
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Xiaofei; Chen, Han
2014-09-01
In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
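For context, a standard numpy implementation of the 1-D MUSIC pseudospectrum that such improved variants start from (uniform linear array with half-wavelength spacing assumed); the bistatic DOD/DOA pairing logic of the paper is not reproduced here.

```python
import numpy as np

def music_spectrum(X, n_src, grid):
    """1-D MUSIC pseudospectrum for a uniform linear array.

    X    : (n_ant, n_snap) complex array snapshots
    grid : candidate angles in radians
    """
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    w, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, : X.shape[0] - n_src]           # noise subspace
    n = np.arange(X.shape[0])[:, None]
    A = np.exp(1j * np.pi * n * np.sin(grid)[None, :])  # steering vectors
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / denom                        # peaks at the source angles
```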
Wave front sensing for next generation earth observation telescope
NASA Astrophysics Data System (ADS)
Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.
2017-09-01
High resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance allows setting the Nyquist frequency closer to the cut-off frequency or, equivalently, minimizing the pupil diameter for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based upon ground estimation and upload of a refocusing command. For instance, the defocus of the Pleiades satellites is assessed from star acquisitions, and refocusing is done by thermal actuation of the M2 mirror. Next-generation systems under study at CNES should include active optics in order to handle evolving aberrations not limited to defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard wave front sensor (WFS). One option is to use a Shack-Hartmann sensor, which can operate on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the control loop's wave-front error measure. In the worst-case scenario, this measure must be computed before each image acquisition. A robust and fast shift estimation algorithm between Shack-Hartmann images is therefore needed to fulfill this requirement. A fast gradient-based algorithm using optical flow with the Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the wave front error (WFE), the landscape frequency content, the number of aberrations searched for, the a priori knowledge of high-order aberrations, and the characteristics of the sensor. CNES has carried out a full-scale sensitivity analysis over the whole parameter set with its internally developed algorithm.
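A minimal sketch of a gradient-based (Lucas-Kanade-style) global shift estimate between two subaperture images, the kind of kernel such a WFS algorithm runs per lenslet; real implementations iterate and typically work coarse-to-fine.

```python
import numpy as np

def lk_shift(ref, img):
    """One Gauss-Newton step of a Lucas-Kanade estimate of the global
    (dx, dy) shift between two images, from the normal equations of
    gx*dx + gy*dy = -gt accumulated over all pixels."""
    gy, gx = np.gradient(ref.astype(float))     # spatial gradients
    gt = img.astype(float) - ref.astype(float)  # temporal difference
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    return np.linalg.solve(A, b)                # (dx, dy) in pixels
```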
Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.
2012-01-01
In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimation methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.
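The per-block kernel at the heart of such a full-search BMA is simple, which is precisely what makes it GPU-friendly; a plain numpy reference version (integer grid only, no interpolation) might look like this.

```python
import numpy as np

def full_search_sad(block, ref, top, left, radius):
    """Exhaustive SAD search for `block` inside reference frame `ref`
    around position (top, left); the GPU version evaluates this kernel
    massively in parallel over all blocks and displacements."""
    block = block.astype(np.int32)              # avoid uint8 wraparound
    ref = ref.astype(np.int32)
    h, w = block.shape
    best_sad, best_dp = np.inf, (0, 0)
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            i, j = top + di, left + dj
            if i < 0 or j < 0 or i + h > ref.shape[0] or j + w > ref.shape[1]:
                continue                        # window outside the frame
            sad = np.abs(ref[i:i + h, j:j + w] - block).sum()
            if sad < best_sad:
                best_sad, best_dp = sad, (di, dj)
    return best_dp, best_sad
```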
High-speed cell recognition algorithm for ultrafast flow cytometer imaging system
NASA Astrophysics Data System (ADS)
Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang
2018-04-01
An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is, therefore, highly demanded to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform.
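A hedged OpenCV sketch of the second detection stage as described (distance transform plus watershed to split touching cells); the thresholds and marker bookkeeping are illustrative choices, not the paper's tuned values.

```python
import cv2
import numpy as np

def separate_cells(mask, img_bgr):
    """Split clustered cells in a binary mask (uint8, 0/255).

    img_bgr is the corresponding 8-bit 3-channel image required by
    cv2.watershed; returns a label map with -1 on watershed boundaries.
    """
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(mask, sure_fg)        # ambiguous border zone
    n, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                        # background becomes 1
    markers[unknown > 0] = 0                     # 0 = to be flooded
    return cv2.watershed(img_bgr, markers)
```

The resulting per-cell regions would then feed the GMM classification stage described in the abstract.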
On the development of efficient algorithms for three dimensional fluid flow
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1988-01-01
The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
A new method for ultrasound detection of interfacial position in gas-liquid two-phase flow.
Coutinho, Fábio Rizental; Ofuchi, César Yutaka; de Arruda, Lúcia Valéria Ramos; Neves, Flávio; Morales, Rigoberto E M
2014-05-22
Ultrasonic measurement techniques for velocity estimation are currently widely used in fluid flow studies and applications. An accurate determination of interfacial position in gas-liquid two-phase flows is still an open problem. The quality of this information directly affects the accuracy of void fraction measurement, and it provides a means of discriminating velocity information of the two phases. The algorithm known as Velocity Matched Spectrum (VM Spectrum) is a velocity estimator that stands out from other methods by returning a spectrum of velocities for each interrogated volume sample. Free-rising bubbles in quiescent liquid present difficulties for interface detection due to abrupt changes in interface inclination. In this work a method based on the velocity spectrum curve shape is used to generate a spatial-temporal mapping, which, after spatial filtering, yields an accurate contour of the air-water interface. It is shown that the proposed technique yields an RMS error between 1.71 and 3.39 and a probability of detection failure and false detection between 0.89% and 11.9% in determining the spatial-temporal gas-liquid interface position in the flow of free-rising bubbles in stagnant liquid. This result holds both for a free path and with the transducer emitting through a metallic plate or a Plexiglas pipe.
Automatic Emboli Detection System for the Artificial Heart
NASA Astrophysics Data System (ADS)
Steifer, T.; Lewandowski, M.; Karwat, P.; Gawlikowski, M.
In spite of the progress in material engineering and ventricular assist device construction, thromboembolism remains the most crucial problem in mechanical heart support systems. Therefore, the ability to monitor the patient's blood for clot formation should be considered an important factor in the development of heart support systems. The well-known methods for automatic embolus detection are based on monitoring the ultrasound Doppler signal. A working system utilizing ultrasound Doppler is being developed for the purpose of flow estimation and emboli detection in the clinical artificial heart ReligaHeart EXT. The system will be based on the existing dual-channel multi-gate Doppler device with RF digital processing. A specially developed clamp-on cannula probe, equipped with 2-4 MHz piezoceramic transducers, enables easy system setup. We present the issues related to the development of automatic emboli detection via Doppler measurements. We consider several algorithms for flow estimation and emboli detection, discuss their efficiency, and confront them with the requirements of our experimental setup. Theoretical considerations are then met with preliminary experimental findings from (a) flow studies with blood-mimicking fluid and (b) in-vitro flow studies with animal blood. Finally, we discuss some more methodological issues: we consider several possible approaches to the problem of verifying the accuracy of the detection system.
Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns
NASA Astrophysics Data System (ADS)
von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet
2010-09-01
A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.
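One common way to write such a brightness-compensated constraint is to relax classical brightness constancy with a linear brightness-change model; the paper's exact parametrization may differ from this generic form:

```latex
% 3-D optical flow constraint with a linear brightness-change model:
% the term mI accounts for multiplicative changes (e.g., photobleaching),
% c for additive offsets; m = c = 0 recovers classical brightness constancy.
I_x\,u + I_y\,v + I_z\,w + I_t \;=\; m\,I + c
```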
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
2006-01-01
This report provides a user guide for the Compressible Flow Toolbox, a collection of algorithms that solve almost 300 linear and nonlinear classical compressible flow relations. The algorithms, implemented in the popular MATLAB programming language, are useful for analysis of one-dimensional steady flow with constant entropy, friction, heat transfer, or shock discontinuities. The solutions do not include any gas dissociative effects. The toolbox also contains functions for comparing and validating the equation-solving algorithms against solutions previously published in the open literature. The classical equations solved by the Compressible Flow Toolbox are: isentropic-flow equations, Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section.), normal-shock equations, oblique-shock equations, and Prandtl-Meyer expansion equations. At the time this report was published, the Compressible Flow Toolbox was available without cost from the NASA Software Repository.
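As an example of the relations involved, the isentropic stagnation ratios take the closed form below; this is a generic Python rendering for illustration, not code from the MATLAB toolbox.

```python
def isentropic_ratios(M, gamma=1.4):
    """Classical isentropic-flow ratios: stagnation-to-static temperature
    and pressure as functions of Mach number M for an ideal gas."""
    T0_T = 1.0 + 0.5 * (gamma - 1.0) * M ** 2
    p0_p = T0_T ** (gamma / (gamma - 1.0))
    return T0_T, p0_p

T0_T, p0_p = isentropic_ratios(2.0)   # at M = 2: T0/T = 1.8, p0/p ~ 7.82
```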
Algorithms for detection of objects in image sequences captured from an airborne imaging system
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak
1995-01-01
This research was initiated as part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical phases of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model-based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified, and possible solutions to build a practical working system are investigated.
A comparison of kinematic algorithms to estimate gait events during overground running.
Smith, Laura; Preece, Stephen; Mason, Duncan; Bramah, Christopher
2015-01-01
The gait cycle is frequently divided into two distinct phases, stance and swing, which can be accurately determined from ground reaction force data. In the absence of such data, kinematic algorithms can be used to estimate footstrike and toe-off. The performance of previously published algorithms is not consistent between studies. Furthermore, previous algorithms have not been tested at higher running speeds nor used to estimate ground contact times. Therefore the purpose of this study was to both develop a new, custom-designed, event detection algorithm and compare its performance with four previously tested algorithms at higher running speeds. Kinematic and force data were collected on twenty runners during overground running at 5.6m/s. The five algorithms were then implemented and estimated times for footstrike, toe-off and contact time were compared to ground reaction force data. There were large differences in the performance of each algorithm. The custom-designed algorithm provided the most accurate estimation of footstrike (True Error 1.2 ± 17.1 ms) and contact time (True Error 3.5 ± 18.2 ms). Compared to the other tested algorithms, the custom-designed algorithm provided an accurate estimation of footstrike and toe-off across different footstrike patterns. The custom-designed algorithm provides a simple but effective method to accurately estimate footstrike, toe-off and contact time from kinematic data.
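For flavor, a generic kinematic heuristic of the sort such algorithms refine, detecting footstrike near minima of a foot-marker height trajectory and toe-off from the subsequent rapid rise; the thresholds are illustrative and this is not the paper's custom algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def gait_events(foot_z, fs, min_flight=0.1, rise_thresh=0.5):
    """Estimate footstrike and toe-off indices from a foot-marker
    vertical trajectory (meters) sampled at fs Hz."""
    vz = np.gradient(foot_z) * fs               # vertical velocity, m/s
    # footstrike: local minima of foot height, at least min_flight apart
    strikes, _ = find_peaks(-foot_z, distance=int(min_flight * fs))
    toe_offs = []
    for s in strikes:
        after = np.nonzero(vz[s:] > rise_thresh)[0]   # foot starts rising
        if after.size:
            toe_offs.append(s + after[0])
    return strikes, np.array(toe_offs)
```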
NASA Astrophysics Data System (ADS)
Sauchyn, David; Ilich, Nesa
2017-11-01
We combined the methods and advantages of stochastic hydrology and paleohydrology to estimate 900 years of weekly flows for the North and South Saskatchewan Rivers at Edmonton and Medicine Hat, Alberta, respectively. Regression models of water-year streamflow were constructed using historical naturalized flow data and a pool of 196 tree-ring (earlywood, latewood, and annual) ring-width chronologies from 76 sites. The tree-ring models accounted for up to 80% of the interannual variability in historical naturalized flows. We developed a new algorithm for generating stochastic time series of weekly flows constrained by the statistical properties of both the historical record and proxy streamflow data, and by the necessary condition that weekly flows correlate between the end of a year and the start of the next. A second innovation, enabled by the density of our tree-ring network, is to derive the paleohydrology from an ensemble of 100 statistically significant reconstructions at each gauge. Using paleoclimatic data to generate long series of weekly flow estimates augments the short historical record with an expanded range of hydrologic variability, including sequences of wet and dry years of greater length and severity. This unique hydrometric time series will enable evaluation of the reliability of current water supply and management systems given the range of hydroclimatic variability and extremes contained in the stochastic paleohydrology. It also could inform evaluation of the uncertainty in climate model projections, given that internal hydroclimatic variability is the dominant source of uncertainty.
Flow measurements in sewers based on image analysis: automatic flow velocity algorithm.
Jeanbourquin, D; Sage, D; Nguyen, L; Schaeli, B; Kayal, S; Barry, D A; Rossi, L
2011-01-01
Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and the particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques. The image-based water velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and of the image-processing algorithms in order to automatically calculate flow velocity online. Results of investigations conducted in a CSO are presented. The system was found to measure water velocities reliably, thereby providing the means to understand particular hydraulic behaviors.
Orientation estimation algorithm applied to high-spin projectiles
NASA Astrophysics Data System (ADS)
Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.
2014-06-01
High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectile control system. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specifically for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros, and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
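A lightweight way to see why magnetometer feedback can observe roll is sketched below: for a spinning body, the two transverse magnetometer channels trace a near-sinusoid whose phase is the roll angle, and its time derivative is the roll rate. The axis conventions and the unwrap/differentiation scheme are simplifying assumptions, not the paper's integrated filter.

```python
# Sketch: roll angle and roll rate of a spinning body from a body-fixed
# magnetometer (transverse channels only). Assumes the spin axis is not
# aligned with the local magnetic field.
import numpy as np

def roll_from_mag(m_y, m_z, dt):
    phi = np.unwrap(np.arctan2(m_y, m_z))   # roll angle history (rad)
    p = np.gradient(phi, dt)                # roll rate (rad/s)
    return phi, p

dt = 1e-3                                   # 1 kHz magnetometer samples
t = np.arange(0.0, 0.1, dt)
spin = 2 * np.pi * 150.0                    # 150 rev/s projectile
m_y, m_z = np.sin(spin * t), np.cos(spin * t)
print(roll_from_mag(m_y, m_z, dt)[1].mean())   # ~942 rad/s
```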
Approximate labeling via graph cuts based on linear programming.
Komodakis, Nikos; Tziritas, Georgios
2007-08-01
A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds on all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.
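The building block underneath α-expansion and its primal-dual generalizations is an exact s-t min-cut on a binary subproblem. The sketch below solves a small binary Potts MRF with SciPy's max-flow solver; it is the textbook construction for orientation, not the paper's linear-programming algorithm.

```python
# Sketch: exact binary MRF labeling via s-t min-cut (Potts pairwise
# term), using SciPy's maximum_flow. Capacities must be integers.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow, breadth_first_order

def binary_mrf_mincut(unary0, unary1, pairwise_w):
    """unary0/unary1: (H, W) integer costs of labels 0/1.
    pairwise_w: integer penalty for unequal 4-neighbors."""
    H, W = unary0.shape
    n = H * W
    s, t = n, n + 1
    idx = np.arange(n).reshape(H, W)
    rows, cols, caps = [], [], []
    def add(u, v, c):
        rows.append(u); cols.append(v); caps.append(int(c))
    for p in range(n):
        add(s, p, unary1.flat[p])   # cutting s->p pays the label-1 cost
        add(p, t, unary0.flat[p])   # cutting p->t pays the label-0 cost
    for i in range(H):
        for j in range(W):
            for di, dj in ((0, 1), (1, 0)):
                if i + di < H and j + dj < W:
                    p, q = idx[i, j], idx[i + di, j + dj]
                    add(p, q, pairwise_w); add(q, p, pairwise_w)
    g = csr_matrix((caps, (rows, cols)), shape=(n + 2, n + 2),
                   dtype=np.int32)
    flow = maximum_flow(g, s, t).flow
    # Pixels still reachable from s in the residual graph take label 0.
    reach = breadth_first_order((g - flow) > 0, s,
                                return_predecessors=False)
    labels = np.ones(n + 2, dtype=int)
    labels[reach] = 0
    return labels[:n].reshape(H, W)

u0 = np.array([[5, 5, 1], [5, 1, 1]])   # cost of label 0 per pixel
u1 = np.array([[1, 1, 5], [1, 5, 5]])   # cost of label 1 per pixel
print(binary_mrf_mincut(u0, u1, pairwise_w=2))
```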
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery
2016-10-01
This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The suggested algorithm estimates the orientation of a specific known 3D object based on the object's 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, a set of training images is gathered by capturing the 3D model from viewpoints evenly distributed on a sphere. The sphere point distribution follows the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
Along-the-net reconstruction of hydropower potential with consideration of anthropic alterations
NASA Astrophysics Data System (ADS)
Masoero, A.; Claps, P.; Gallo, E.; Ganora, D.; Laio, F.
2014-09-01
Even in regions with mature hydropower development, requirements for stable renewable power sources suggest revising plans for the exploitation of water resources, while taking care of environmental regulations. Mean Annual Flow (MAF) is a key parameter when trying to represent water availability for hydropower purposes. MAF is usually determined in ungauged basins by means of regional statistical analysis. For this study, a regional estimation method consistent along the river network has been developed for MAF estimation; the method uses a multi-regressive approach based on geomorphoclimatic descriptors, and it is applied to 100 gauged basins located in NW Italy. The method has been designed to keep the estimates of mean annual flow congruent at confluences, by considering only raster-summable explanatory variables. Also, the influence of human alterations on the regional analysis of MAF has been studied: the impact due to the presence of existing hydropower plants has been taken into account, restoring the "natural" value of runoff through analytical corrections. To exemplify the representation of the assessment of residual hydropower potential, the model has been applied extensively to two specific mountain watersheds by mapping the estimated mean flow for the basins draining into each pixel of the DEM-derived river network. Spatial algorithms were developed using the open-source software GRASS GIS and PostgreSQL/PostGIS. Spatial representation of the hydropower potential was obtained using different mean flow vs. hydraulic-head relations for each pixel. Final potential indices have been represented and mapped through the Google Earth platform, providing a complete and interactive picture of the available potential, useful for planning and regulation purposes.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1996-01-01
A computational algorithm has been developed which can be employed to determine the flow properties of an arbitrary real (virial) gas in a wind tunnel. A multiple-coefficient virial gas equation of state and the assumption of isentropic flow are used to model the gas and to compute flow properties throughout the wind tunnel. This algorithm has been used to calculate flow properties for the wind tunnels of the Aerothermodynamics Facilities Complex at the NASA Langley Research Center, in which air, CF4, He, and N2 are employed as test gases. The algorithm is detailed in this paper and sample results are presented for each of the Aerothermodynamics Facilities Complex wind tunnels.
Design of a fuzzy differential evolution algorithm to predict non-deposition sediment transport
NASA Astrophysics Data System (ADS)
Ebtehaj, Isa; Bonakdari, Hossein
2017-12-01
Since the flow entering a sewer contains solid matter, deposition at the bottom of the channel is inevitable. It is difficult to understand the complex, three-dimensional mechanism of sediment transport in sewer pipelines. Therefore, a method to estimate the limiting velocity is necessary for optimal designs. Due to the inability of gradient-based algorithms to train Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for non-deposition sediment transport prediction, a new hybrid ANFIS method based on a differential evolutionary algorithm (ANFIS-DE) is developed. The training and testing performance of ANFIS-DE is evaluated using a wide range of dimensionless parameters gathered from the literature. The input combination used to estimate the densimetric Froude number (Fr) includes the volumetric sediment concentration (C_V), the ratio of median particle diameter to hydraulic radius (d/R), the ratio of median particle diameter to pipe diameter (d/D) and the overall friction factor of sediment (λ_s). The testing results are compared with the ANFIS model and regression-based equation results. The ANFIS-DE technique predicted sediment transport at the limit of deposition with lower root mean square error (RMSE = 0.323) and mean absolute percentage error (MAPE = 0.065) and higher accuracy (R² = 0.965) than the ANFIS model and regression-based equations.
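For readers who want to experiment with the evolutionary half of the method, the sketch below uses SciPy's differential evolution to fit a simple power-law estimator of Fr from the same four inputs. The synthetic data and the power-law form are stand-ins; the paper evolves ANFIS parameters rather than regression coefficients.

```python
# Sketch: differential evolution fitting
# Fr = a * C_V^b * (d/R)^c * (d/D)^e * lam_s^f on synthetic data.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(0.01, 1.0, size=(200, 4))       # [C_V, d/R, d/D, lam_s]
true = np.array([4.0, 0.16, -0.10, 0.07, -0.30])
Fr = true[0] * np.prod(X ** true[1:], axis=1) * rng.lognormal(0, 0.05, 200)

def rmse(theta):
    pred = theta[0] * np.prod(X ** theta[1:], axis=1)
    return np.sqrt(np.mean((pred - Fr) ** 2))

bounds = [(0.1, 10.0)] + [(-1.0, 1.0)] * 4
res = differential_evolution(rmse, bounds, seed=1, tol=1e-8)
print(res.x, res.fun)    # recovered coefficients and residual RMSE
```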
Wen, Ying; Hou, Lili; He, Lianghua; Peterson, Bradley S; Xu, Dongrong
2015-05-01
Spatial normalization plays a key role in voxel-based analyses of brain images. We propose a highly accurate algorithm for high-dimensional spatial normalization of brain images based on the technique of symmetric optical flow. We first construct a three-dimensional optical flow model under assumptions of constancy of intensity and constancy of the intensity gradient, with a constraint of discontinuity-preserving spatio-temporal smoothness. Then, an efficient inverse-consistent optical flow is proposed with the aim of higher registration accuracy, where the flow is naturally symmetric. By employing a hierarchical strategy ranging from coarse to fine scales of resolution and Euler-Lagrange numerical analysis, our algorithm is capable of registering brain image data. Experiments using both simulated and real datasets demonstrated that the accuracy of our algorithm is not only better than that of traditional optical flow algorithms, but also comparable to that of other registration methods used extensively in the medical imaging community. Moreover, our registration algorithm is fully automated, requiring a very limited number of parameters and no manual intervention. Copyright © 2015 Elsevier Inc. All rights reserved.
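As a point of reference for the variational machinery involved, the sketch below implements the classical 2-D Horn-Schunck iteration: intensity constancy plus a smoothness term, solved by fixed-point iteration of the Euler-Lagrange equations. The paper's method extends this idea with gradient constancy, inverse consistency, 3-D volumes, and a coarse-to-fine hierarchy, none of which is reproduced here.

```python
# Sketch: classical Horn-Schunck optical flow in 2-D (numpy/scipy).
import numpy as np
from scipy.ndimage import correlate

def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
    I1, I2 = I1.astype(float), I2.astype(float)
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    kt = 0.25 * np.ones((2, 2))
    Ix = correlate(I1, kx) + correlate(I2, kx)   # spatial derivatives
    Iy = correlate(I1, ky) + correlate(I2, ky)
    It = correlate(I2, kt) - correlate(I1, kt)   # temporal derivative
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):                 # fixed-point iteration of
        u_bar = correlate(u, avg)           # the Euler-Lagrange system
        v_bar = correlate(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```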
An algorithmic approach to the brain biopsy--part I.
Kleinschmidt-DeMasters, B K; Prayson, Richard A
2006-11-01
The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they have narrowed their diagnostic choices to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part I, we assist the resident in learning how to handle brain biopsies in general and how to distinguish nonneoplastic lesions that mimic tumors from true neoplasms.
An algorithmic approach to the brain biopsy--part II.
Prayson, Richard A; Kleinschmidt-DeMasters, B K
2006-11-01
The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they have narrowed their diagnostic choices to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part II, we assist the resident in arriving at the correct diagnosis for neuropathologic lesions containing granulomatous inflammation, macrophages, or abnormal blood vessels.
Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H
2007-11-01
Establishing a comparison standard in neuropsychological assessment is crucial to determining change in function. There is no available method to estimate premorbid intellectual functioning for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV provided normative data for both American and Canadian children aged 6 to 16 years old. This study developed regression algorithms as a proposed method to estimate full-scale intelligence quotient (FSIQ) for the Canadian WISC-IV. Participants were the Canadian WISC-IV standardization sample (n = 1,100). The sample was randomly divided into two groups (development and validation groups). The development group was used to generate regression algorithms; 1 algorithm only included demographics, and 11 combined demographic variables with WISC-IV subtest raw scores. The algorithms accounted for 18% to 70% of the variance in FSIQ (standard error of estimate, SEE = 8.6 to 14.2). Estimated FSIQ significantly correlated with actual FSIQ (r = .30 to .80), and the majority of individual FSIQ estimates were within +/-10 points of actual FSIQ. The demographic-only algorithm was less accurate than algorithms combining demographic variables with subtest raw scores. The current algorithms yielded accurate estimates of current FSIQ for Canadian individuals aged 6-16 years old. The potential application of the algorithms to estimate premorbid FSIQ is reviewed. While promising, clinical validation of the algorithms in a sample of children and/or adolescents with known neurological dysfunction is needed to establish these algorithms as a premorbid estimation procedure.
NASA Astrophysics Data System (ADS)
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
The flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having 'g' operations is performed on 'g' operation centres (stages), with each stage having only one machine. If any stage contains more than one machine to provide alternate processing facilities, then the problem becomes a flexible flow shop problem (FFSP). FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for the study, because these are not only recent meta-heuristics but also do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped at local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by the genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
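The sketch below shows the two ingredients named above in their simplest form: the parameter-free JAYA move over a population of random-key vectors, and a greedy decoder that turns each key vector into an FFSP schedule (jobs claim the earliest-free parallel machine at each stage). The instance data are random, and the paper's local search and mutation refinements are omitted.

```python
# Sketch: JAYA over random-key vectors for a small flexible flow shop.
# argsort decodes keys into a job order; each stage assigns jobs to the
# earliest-free parallel machine.
import numpy as np

rng = np.random.default_rng(3)
n_jobs, stages = 8, [2, 3, 2]                  # machines per stage
proc = rng.integers(1, 10, size=(n_jobs, len(stages)))

def makespan(keys):
    order = np.argsort(keys)                   # random-key decoding
    ready = np.zeros(n_jobs)                   # job completion times
    for s, m_count in enumerate(stages):
        free = np.zeros(m_count)               # machine-free times
        for j in order:
            m = int(np.argmin(free))
            start = max(free[m], ready[j])
            ready[j] = free[m] = start + proc[j, s]
        order = np.argsort(ready)              # first-ready enters next stage
    return ready.max()

pop = rng.random((20, n_jobs))
cost = np.array([makespan(x) for x in pop])
for _ in range(200):
    best, worst = pop[cost.argmin()], pop[cost.argmax()]
    r1, r2 = rng.random((2,) + pop.shape)
    # JAYA move: toward the best solution and away from the worst.
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    c_cost = np.array([makespan(x) for x in cand])
    better = c_cost < cost
    pop[better], cost[better] = cand[better], c_cost[better]
print(cost.min())
```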
Dynamic airspace configuration algorithms for next generation air transportation system
NASA Astrophysics Data System (ADS)
Wei, Jian
The National Airspace System (NAS) is under great pressure to safely and efficiently handle today's record-high air traffic volume, and will face an even greater challenge to keep pace with the steady increase of future air travel demand, since demand is projected to increase to two to three times the current level by 2025. The inefficiency of traffic flow management initiatives causes severe airspace congestion and frequent flight delays, which cause billions of dollars in economic losses every year. To address the increasingly severe airspace congestion and delays, the Next Generation Air Transportation System (NextGen) is proposed to transform the current static and rigid radar-based system into a dynamic and flexible satellite-based system. New operational concepts such as Dynamic Airspace Configuration (DAC) have been under development to provide the flexibility required to mitigate the demand-capacity imbalances and increase the throughput of the entire NAS. In this dissertation, we address the DAC problem in the en route and terminal airspace under the framework of NextGen. We develop a series of algorithms to facilitate the implementation of innovative concepts relevant to DAC in both the en route and terminal airspace. We also develop a performance evaluation framework for comprehensive benefit analyses of different aspects of future sector design algorithms. First, we develop a graph-based sectorization algorithm for DAC in the en route airspace, which models the underlying air route network with a weighted graph, converts the sectorization problem into the graph partition problem, partitions the weighted graph with an iterative spectral bipartition method, and constructs the sectors from the partitioned graph. The algorithm uses a graph model to accurately capture the complex traffic patterns of real flights, and generates sectors with high efficiency while evenly distributing the workload among the generated sectors. We further improve the robustness and efficiency of the graph-based DAC algorithm by incorporating the Multilevel Graph Partitioning (MGP) method into the graph model, and develop an MGP-based sectorization algorithm for DAC in the en route airspace. In a comprehensive benefit analysis, the performance of the proposed algorithms is tested in numerical simulations with Enhanced Traffic Management System (ETMS) data. Simulation results demonstrate that the algorithmically generated sectorizations outperform the current sectorizations in different sectors for different time periods. Secondly, based on our experience with DAC in the en route airspace, we further study the sectorization problem for DAC in the terminal airspace. The differences between the en route and terminal airspace are identified, and their influence on terminal sectorization is analyzed. After adjusting the graph model to better capture the unique characteristics of the terminal airspace and the requirements of terminal sectorization, we develop a graph-based geometric sectorization algorithm for DAC in the terminal airspace. Moreover, the graph-based model is combined with the region-based sector design method to better handle the complicated geometric and operational constraints in the terminal sectorization problem. In the benefit analysis, we identify the contributing factors to terminal controller workload, define evaluation metrics, and develop a benefit analysis framework for terminal sectorization evaluation.
With the evaluation framework developed, we demonstrate the improvements over the current sectorizations with real traffic data collected from several major international airports in the U.S., and conduct a detailed analysis of the potential benefits of dynamic reconfiguration in the terminal airspace. Finally, in addition to the research on the macroscopic behavior of a large number of aircraft, we also study the dynamical behavior of individual aircraft from the perspective of traffic flow management. We formulate the mode-confusion problem as a hybrid estimation problem, and develop a state estimation algorithm for the linear hybrid system with continuous-state-dependent transitions based on sparse observations. We also develop an estimated time of arrival prediction algorithm based on the state-dependent transition hybrid estimation algorithm, whose performance is demonstrated with simulations of the landing procedure following the Continuous Descent Approach (CDA) profile.
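The core operation of the iterative spectral bipartition mentioned above is a single Fiedler-vector split of the weighted traffic graph, sketched here on a random stand-in adjacency matrix; the dissertation's workload weights and multilevel coarsening are not reproduced.

```python
# Sketch: one spectral bisection step using the Fiedler vector of the
# graph Laplacian; a median split keeps the two sides balanced.
import numpy as np

rng = np.random.default_rng(7)
n = 12
W = rng.random((n, n))
W = (W + W.T) / 2.0              # symmetric traffic-coupling weights
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)      # symmetric eigenproblem
fiedler = eigvecs[:, 1]                   # second-smallest eigenvector
side_a = np.where(fiedler >= np.median(fiedler))[0]
side_b = np.where(fiedler < np.median(fiedler))[0]
print(side_a, side_b)
```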
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
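The estimation step described above is, at its core, a nonlinear weighted least-squares solve. The sketch below shows a generic Gauss-Newton WLS iteration with a finite-difference Jacobian; the actual surface-pressure model and table-lookup atmosphere are not reproduced, so h(x) is a caller-supplied stand-in.

```python
# Sketch: generic Gauss-Newton weighted least squares. h maps the state
# to predicted port pressures, z holds measured pressures, R their
# noise covariance.
import numpy as np

def gauss_newton_wls(h, x0, z, R, n_iter=10, eps=1e-6):
    x = np.asarray(x0, dtype=float).copy()
    W = np.linalg.inv(R)
    for _ in range(n_iter):
        r = z - h(x)                        # measurement residual
        J = np.zeros((len(z), len(x)))      # central-difference Jacobian
        for i in range(len(x)):
            dx = np.zeros_like(x)
            dx[i] = eps
            J[:, i] = (h(x + dx) - h(x - dx)) / (2.0 * eps)
        x += np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x

# Toy usage: recover two "atmospheric" parameters from three pressures.
h = lambda x: np.array([x[0] + x[1], 2.0 * x[0], x[1] ** 2])
z = h(np.array([1.0, 2.0])) + 1e-3
print(gauss_newton_wls(h, np.array([0.5, 1.5]), z, np.eye(3) * 1e-4))
```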
Estimation of River Bathymetry from ATI-SAR Data
NASA Astrophysics Data System (ADS)
Almeida, T. G.; Walker, D. T.; Farquharson, G.
2013-12-01
A framework for estimation of river bathymetry from surface velocity observation data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs) including bottom friction. We start with a cost function defined by the error between observed and estimated surface velocities, and introduce the SWEs as a constraint on the velocity field. The constrained minimization problem is converted to an unconstrained minimization through the use of Lagrange multipliers, and an adjoint SWE model is developed. The adjoint model solution is used to calculate the gradient of the cost function with respect to river bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best fit to the observational data. In applying the algorithm, the 2D depth-averaged flow is computed assuming a known, constant discharge rate and a known, uniform bottom-friction coefficient; a correlation relating surface velocity and depth-averaged velocity is also used. Observation data were collected using a dual-beam squinted along-track interferometric synthetic aperture radar (ATI-SAR) system, which provides two independent components of the surface velocity, oriented roughly 30 degrees fore and aft of broadside, offering high-resolution bank-to-bank velocity vector coverage of the river. Data and bathymetry estimation results are presented for two rivers, the Snohomish River near Everett, WA and the upper Sacramento River, north of Colusa, CA. The algorithm results are compared to available measured bathymetry data, with favorable results. General trends show that the water-depth estimates are most accurate in shallow regions, and performance is sensitive to the accuracy of the specified discharge rate and bottom friction coefficient. The results also indicate that, for a given reach, the estimated water depth reaches a maximum that is smaller than the true depth; this apparent maximum depth scales with the true river depth and discharge rate, so that the deepest parts of the river show the largest bathymetry errors.
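A deliberately tiny analogue of this inversion illustrates the descent mechanics: with constant discharge q per unit width, depth-averaged continuity gives u = q/h, so the misfit J(h) has an analytic gradient and steepest descent recovers depth. The 2D SWE constraint, bottom friction, and the adjoint solve are all omitted here.

```python
# Sketch: steepest-descent inversion of a 1-D toy, u(h) = q / h.
import numpy as np

q = 2.0                                     # discharge per unit width
h_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, 50))
u_obs = q / h_true                          # "observed" surface velocity

h = np.full(50, 1.0)                        # flat first-guess bathymetry
for _ in range(500):
    u = q / h
    grad = 2.0 * (u - u_obs) * (-q / h ** 2)    # dJ/dh, elementwise
    h -= 0.05 * grad                            # descent update
print(float(np.max(np.abs(h - h_true))))    # small residual depth error
```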
Manifold absolute pressure estimation using neural network with hybrid training algorithm
Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli
2017-01-01
In a modern small gasoline engine fuel injection system, the load of the engine is estimated from the measurement of the manifold absolute pressure (MAP) sensor located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network trained by combining the Levenberg-Marquardt (LM) algorithm, the Bayesian Regularization (BR) algorithm and the Particle Swarm Optimization (PSO) algorithm. Based on the results of 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value. PMID:29190779
Analysis of Cellular DNA Content by Flow Cytometry.
Darzynkiewicz, Zbigniew; Huang, Xuan; Zhao, Hong
2017-10-02
Cellular DNA content can be measured by flow cytometry with the aim of: (1) revealing cell distribution within the major phases of the cell cycle, (2) estimating the frequency of apoptotic cells with fractional DNA content, and/or (3) disclosing DNA ploidy of the measured cell population. In this unit, simple and universally applicable methods for staining fixed cells are presented, as are methods that utilize detergents and/or proteolytic treatment to permeabilize cells and make DNA accessible to the fluorochrome. Additionally, supravital cell staining with Hoechst 33342, which is primarily used for sorting live cells based on DNA-content differences for their subsequent culturing, is described. Also presented are methods for staining cell nuclei isolated from paraffin-embedded tissues. Available algorithms are listed for deconvolution of DNA-content-frequency histograms to estimate the percentage of cells in the major phases of the cell cycle and the frequency of apoptotic cells with fractional DNA content. © 2017 by John Wiley & Sons, Inc.
OLYMPEX Data Workshop: GPM View
NASA Technical Reports Server (NTRS)
Petersen, W.
2017-01-01
OLYMPEX Primary Objectives: Datasets to enable: (1) Direct validation over complex terrain at multiple scales, liquid and frozen precip types, (a) Do we capture terrain and synoptic regime transitions, orographic enhancements/structure, full range of precipitation intensity (e.g., very light to heavy) and types, spatial variability? (b) How well can we estimate space/time-accumulated precipitation over terrain (liquid + frozen)? (2) Physical validation of algorithms in mid-latitude cold season frontal systems over ocean and complex terrain, (a) What are the column properties of frozen, melting, liquid hydrometeors-their relative contributions to estimated surface precipitation, transition under the influence of terrain gradients, and systematic variability as a function of synoptic regime? (3) Integrated hydrologic validation in complex terrain, (a) Can satellite estimates be combined with modeling over complex topography to drive improved products (assimilation, downscaling) [Level IV products] (b) What are capabilities and limitations for use of satellite-based precipitation estimates in stream/river flow forecasting?
Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation
NASA Astrophysics Data System (ADS)
Nyabeze, W. R.
A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS) to integrate the definition, measurement, and calculation of parameter values for spatial features presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.
NASA Astrophysics Data System (ADS)
Beyer, W. K. G.
The estimation accuracy of the group delay measured in a single video frequency band was analyzed as a function of the system bandwidth and the signal-to-noise ratio. Very long baseline interferometry (VLBI) measurements from geodetic experiments were used to check the geodetic applicability of the Mark 2 evaluation system. The geodetic observation quantities and the correlation geometry are introduced. The data flow in the VLBI experiment, the correlation analysis, the analysis and evaluation in the MK2 system, and the delay estimation procedure following the least squares method are presented. It is shown that the MK2 system is no longer up to date for geodetic applications. The superiority of the developed estimation method with respect to the interpolation algorithm is demonstrated. The numerical investigations show the deleterious influence of distorting bit-shift effects.
The silent base flow and the sound sources in a laminar jet.
Sinayoko, Samuel; Agarwal, Anurag
2012-03-01
An algorithm to compute the silent base flow sources of sound in a jet is introduced. The algorithm is based on spatiotemporal filtering of the flow field and is applicable to multifrequency sources. It is applied to an axisymmetric laminar jet and the resulting sources are validated successfully. The sources are compared to those obtained from two classical acoustic analogies, based on quiescent and time-averaged base flows. The comparison demonstrates how the silent base flow sources shed light on the sound generation process. It is shown that the dominant source mechanism in the axisymmetric laminar jet is "shear-noise," which is a linear mechanism. The algorithm presented here could be applied to fully turbulent flows to understand the aerodynamic noise-generation mechanism. © 2012 Acoustical Society of America
Calculating Shocks In Flows At Chemical Equilibrium
NASA Technical Reports Server (NTRS)
Eberhardt, Scott; Palmer, Grant
1988-01-01
Boundary conditions prove critical. This conference paper describes an algorithm for the calculation of shocks in hypersonic flows of gases at chemical equilibrium. Although the algorithm represents an intermediate stage in the development of a reliable, accurate computer code for two-dimensional flow, the research leading up to it contributes to understanding of what is needed to complete the task.
Parabolized Navier-Stokes Code for Computing Magneto-Hydrodynamic Flowfields
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B. (Technical Monitor); Tannehill, J. C.
2003-01-01
This report consists of two published papers, 'Computation of Magnetohydrodynamic Flows Using an Iterative PNS Algorithm' and 'Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm'.
A novel downlink scheduling strategy for traffic communication system based on TD-LTE technology.
Chen, Ting; Zhao, Xiangmo; Gao, Tao; Zhang, Licheng
2016-01-01
Many existing classical scheduling algorithms can obtain good system throughput and user fairness; however, they are not designed for the traffic transportation environment and do not consider whether the transmission performance of various information flows meets the combined requirements of traffic safety and delay tolerance. This paper proposes a novel downlink scheduling strategy for a traffic communication system based on TD-LTE technology, which performs two classification mappings for the various information flows in the eNodeB: first, every information flow packet is associated with a traffic safety importance weight according to its relevance to traffic safety; second, every traffic information flow is associated with a service type importance weight according to its quality of service (QoS) requirements. Once the connection is established, at every scheduling moment the scheduler decides the scheduling order of all buffers' head-of-line packets according to the instantaneous value of a scheduling importance weight function calculated by the proposed algorithm. Simulations of different scenarios verify that the proposed algorithm provides superior differentiated transmission service and reliable QoS guarantees to information flows with different traffic safety levels and service types, making it better suited to the traffic transportation environment than the popular proportional fair (PF) algorithm. With limited wireless resources, information flows closely related to traffic safety always obtain priority scheduling in a timely manner, which helps make passengers' journeys safer. Moreover, the proposed algorithm not only obtains flow throughput and user fairness almost equal to those of the PF algorithm, without significant differences, but also provides a better real-time transmission guarantee to real-time information flows.
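A minimal sketch of the two-level weighting idea follows: each flow's head-of-line packet carries a safety-class weight and a service-type weight, and the scheduler serves the flow with the largest combined importance. The weight values, class names, and combining rule are illustrative assumptions, not the paper's exact function.

```python
# Sketch: pick the next flow to serve by combined importance weight.
SAFETY_W = {"collision_warning": 10.0, "signal_phase": 5.0,
            "traffic_info": 2.0, "infotainment": 1.0}
SERVICE_W = {"realtime": 3.0, "streaming": 2.0, "background": 1.0}

def importance(flow, now):
    waited = now - flow["hol_arrival"]      # head-of-line waiting time
    return SAFETY_W[flow["safety_class"]] * SERVICE_W[flow["service"]] \
        * (1.0 + waited / flow["delay_budget"])

def pick_next(flows, now):
    return max(flows, key=lambda f: importance(f, now))

flows = [
    {"safety_class": "collision_warning", "service": "realtime",
     "hol_arrival": 9.8, "delay_budget": 0.05},
    {"safety_class": "infotainment", "service": "streaming",
     "hol_arrival": 9.0, "delay_budget": 2.0},
]
print(pick_next(flows, now=10.0)["safety_class"])
```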
Implicit, nonswitching, vector-oriented algorithm for steady transonic flow
NASA Technical Reports Server (NTRS)
Lottati, I.
1983-01-01
Rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The employment of low-cost vector array processors makes such calculations economically feasible. However, to fully utilize the new hardware, the developed algorithms must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.
Numerical simulation of steady supersonic flow. [spatial marching
NASA Technical Reports Server (NTRS)
Schiff, L. B.; Steger, J. L.
1981-01-01
A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.
Luján, Manel; Sogo, Ana; Pomares, Xavier; Monsó, Eduard; Sales, Bernat; Blanch, Lluís
2013-05-01
New home ventilators are able to provide clinicians data of interest through built-in software. Monitoring of tidal volume (VT) is a key point in the assessment of the efficacy of home mechanical ventilation. To assess the reliability of the VT provided by 5 ventilators in a bench test. Five commercial ventilators from 4 different manufacturers were tested in pressure support mode with the help of a breathing simulator under different conditions of mechanical respiratory pattern, inflation pressure, and intentional leakage. Values provided by the built-in software of each ventilator were compared breath to breath with the VT monitored through an external pneumotachograph. Ten breaths for each condition were compared for every tested situation. All tested ventilators underestimated VT (ranges of -21.7 mL to -83.5 mL, which corresponded to -3.6% to -14.7% of the externally measured VT). A direct relationship between leak and underestimation was found in 4 ventilators, with higher underestimations of the VT when the leakage increased, ranging between -2.27% and -5.42% for each 10 L/min increase in the leakage. A ventilator that included an algorithm that computes the pressure loss through the tube as a function of the flow exiting the ventilator had the minimal effect of leaks on the estimation of VT (0.3%). In 3 ventilators the underestimation was also influenced by mechanical pattern (lower underestimation with restrictive, and higher with obstructive). The inclusion of algorithms that calculate the pressure loss as a function of the flow exiting the ventilator in commercial models may increase the reliability of VT estimation.
Optical Flow Experiments for Small-Body Navigation
NASA Astrophysics Data System (ADS)
Schmidt, A.; Kueppers, M.
2012-09-01
Optical flow algorithms [1, 2] have been used successfully and implemented robustly in many application domains, from motion estimation to video compression. We argue that they also show potential for autonomous spacecraft payload operation around small solar system bodies, such as comets or asteroids. Operating spacecraft at close distance to small bodies poses numerous challenges, many of which are related to uncertainties in spacecraft position and velocity relative to the body. To make the best use of usually scarce resources, it is desirable to grant a certain amount of autonomy to a spacecraft, for example to make time-critical decisions about when to operate the payload. The optical flow is the field of apparent velocities of common, usually brightness-related, features across at least two images. From it, one can make estimates about the spacecraft velocity and direction relative to the last manoeuvre or known state. The authors have conducted experiments with readily available optical imagery using the relatively robust and well-known Lucas-Kanade method [3]; it was found to be applicable in a large number of cases. Since one of the assumptions is that the brightness of corresponding points in subsequent images does not change greatly, it is important that imagery is acquired at sensible intervals, during which illumination conditions can be assumed constant and the spacecraft does not move too far, so that there is significant overlap. Full-frame optical flow can be computationally more expensive than image compression and usually focuses on movements of regions with significant brightness gradients. However, given that missions which explore small bodies move at low relative velocities, computation time is not expected to be a limiting resource. Since several missions have now flown to small bodies, or are planned to visit small bodies and stay there for some time, it is worth exploring how instrument operations can benefit from the additional knowledge gained by analysing readily available data on board. Optical flow algorithms show the maturity necessary to be considered for safety-critical systems; their use can be complemented with shape models, pattern matching, housekeeping data and navigation techniques to obtain even more accurate information.
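For reference, the readily available pyramidal Lucas-Kanade implementation in OpenCV reduces the kind of experiment described above to a few calls; the file names below are placeholders, and the median feature displacement is only a crude proxy for apparent motion between exposures.

```python
# Sketch: sparse pyramidal Lucas-Kanade between two consecutive frames.
import numpy as np
import cv2

prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                             qualityLevel=0.01, minDistance=8)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(21, 21), maxLevel=3)
good = status.ravel() == 1
displacements = (p1[good] - p0[good]).reshape(-1, 2)
print("median flow (px):", np.median(displacements, axis=0))
```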
Scalable clustering algorithms for continuous environmental flow cytometry.
Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill
2016-02-01
Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
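At its core the approach is Gaussian mixture classification of per-particle measurements. The sketch below fits a mixture on a single in-memory sample with scikit-learn rather than the authors' Hadoop pipeline; the channels and population count are assumptions.

```python
# Sketch: Gaussian mixture classification of cytometer events.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for (forward scatter, red fluorescence) per particle, logged.
events = np.vstack([rng.normal([2.0, 1.0], 0.15, (5000, 2)),
                    rng.normal([3.0, 2.2], 0.20, (3000, 2)),
                    rng.normal([1.2, 2.8], 0.10, (2000, 2))])

gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(events)
labels = gmm.predict(events)               # population assignment
print(np.bincount(labels) / len(events))   # relative abundances
```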
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.
NASA Astrophysics Data System (ADS)
Ruan, Zhixing; Guo, Huadong; Liu, Guang; Yan, Shiyong
2014-01-01
Glacier movement is closely related to changes in climatic, hydrological, and geological factors. However, detecting glacier surface flow velocity with conventional ground surveys is challenging. Remote sensing techniques, especially synthetic aperture radar (SAR), provide regular observations covering larger-scale glacier regions. Glacier surface flow velocity in the West Kunlun Mountains is estimated using modified offset-tracking techniques based on ALOS/PALSAR images. Three maps of glacier flow velocity for the period 2007 to 2010 are derived from procedures of offset detection using cross correlation in the Fourier domain and global offset elimination with thin plate smoothing splines. Our results indicate that, on average, winter glacier motion on the North Slope is 1 cm/day faster than on the South Slope, a result which corresponds well with the local topography. The performance of our method as regards the reliability of the extracted displacements and the robustness of the algorithm is discussed. SAR-based offset tracking proves to be reliable and robust, making it possible to investigate comprehensive glacier movement and its response mechanism to environmental change.
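The kernel of offset tracking is estimating the shift of one image patch relative to another by cross-correlation in the Fourier domain. The sketch below recovers an integer-pixel offset by phase correlation on synthetic patches; applied over a grid of patches from two SAR acquisitions, the correlation peaks would form the displacement field (the global offset elimination step is not shown).

```python
# Sketch: integer-pixel patch offset via phase correlation (FFT-based
# normalized cross-correlation).
import numpy as np

def phase_correlation_offset(a, b):
    """Shift d = (dy, dx) such that b(y, x) ~ a(y - dy, x - dx)."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), float)
    for i in (0, 1):                 # wrap large offsets to negatives
        if peak[i] > a.shape[i] / 2:
            peak[i] -= a.shape[i]
    return peak

rng = np.random.default_rng(1)
scene = rng.random((96, 96))
a = scene[16:80, 16:80]
b = scene[13:77, 18:82]              # the same patch shifted by (3, -2)
print(phase_correlation_offset(a, b))    # -> approximately [ 3. -2.]
```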
Multisensor data fusion across time and space
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.
2014-06-01
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming as the sample data grids between sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new vector flow estimates using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex object motion observed by multiple sensors mounted to the same base. The sensor suite ranged from a high-speed visible camera to a coarser-resolution LWIR camera.
A New Multifunctional Sensor for Measuring Oil/Water Two-phase State in Pipelines
NASA Astrophysics Data System (ADS)
Sun, Jinwei; Shida, Katsunori
2001-03-01
This paper presents a non-contact, U-form multifunctional sensor for oil pipeline flow measurement. In total, four thin, narrow copper plates are wound on both sides of the sensor, from which two variables (capacitance and self-inductance) are examined as the two functional outputs of the sensor. From these, the liquid concentrations (oil and water) and the temperature are evaluated. The flow velocity inside the pipeline can also be estimated by computing the cross-correlation of the capacitance pair. To restrain the effects of parasitic parameters and improve the dynamic response of the sensor, a proper shielding strategy is considered. A suitable algorithm for data reconstruction is also presented in the system design.
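The velocity estimate rests on transit-time cross-correlation: the lag that maximizes the correlation between the upstream and downstream capacitance signals is the travel time between the two electrode stations. A sketch with made-up spacing, sampling rate, and signals:

```python
# Sketch: transit-time velocity from cross-correlation of the two
# capacitance signals.
import numpy as np

fs = 1000.0                # sampling rate, Hz
spacing = 0.10             # separation of the electrode pairs, m
rng = np.random.default_rng(0)

sig = rng.normal(size=4000)                     # upstream disturbance
delay = 25                                      # true lag in samples
up = sig
down = np.r_[np.zeros(delay), sig[:-delay]]     # delayed downstream copy

corr = np.correlate(down - down.mean(), up - up.mean(), mode="full")
lag = int(corr.argmax()) - (len(up) - 1)        # samples of delay
print(lag, spacing / (lag / fs))                # -> 25, 4.0 m/s
```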
A Numerical Study of Mesh Adaptivity in Multiphase Flows with Non-Newtonian Fluids
NASA Astrophysics Data System (ADS)
Percival, James; Pavlidis, Dimitrios; Xie, Zhihua; Alberini, Federico; Simmons, Mark; Pain, Christopher; Matar, Omar
2014-11-01
We present an investigation into the computational efficiency benefits of dynamic mesh adaptivity in the numerical simulation of transient multiphase fluid flow problems involving non-Newtonian fluids. Such fluids appear in a range of industrial applications, from printing inks to toothpastes, and introduce new challenges for mesh adaptivity due to the additional "memory" of viscoelastic fluids. Nevertheless, the multiscale nature of these flows implies huge potential benefits for a successful implementation. The study is performed using the open source package Fluidity, which couples an unstructured mesh control volume finite element solver for the multiphase Navier-Stokes equations to a dynamic anisotropic mesh adaptivity algorithm, based on estimated solution interpolation error criteria, and a conservative mesh-to-mesh interpolation routine. The code is applied to problems involving rheologies ranging from simple Newtonian to shear-thinning to viscoelastic materials and verified against experimental data for various industrial and microfluidic flows. This work was undertaken as part of the EPSRC MEMPHIS programme grant EP/K003976/1.
An engineering study of hybrid adaptation of wind tunnel walls for three-dimensional testing
NASA Technical Reports Server (NTRS)
Brown, Clinton; Kalumuck, Kenneth; Waxman, David
1987-01-01
Solid-wall tunnels having only upper and lower walls flexing are described. An algorithm for selecting the wall contours for both 2- and 3-dimensional wall flexure is presented, and numerical experiments are used to validate its applicability to the general test case of 3-dimensional lifting aircraft models in rectangular cross section wind tunnels. The method requires an initial approximate representation of the model flow field at a given lift with walls absent. The numerical methods utilized are derived by use of Green's source solutions obtained using the method of images; first-order linearized flow theory is employed with Prandtl-Glauert compressibility transformations. Equations are derived for the flexed shape of a simple constant-thickness plate wall under the influence of a finite number of jacks in an axial row along the plate centerline. The Green's source methods are developed to provide estimates of residual flow distortion (interference) with measured wall pressures and wall flow inclinations as inputs.
A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2017-01-01
We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.
The Guderley problem revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsey, Scott D; Kamm, James R; Bolstad, John H
2009-01-01
The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation with determination coefficient larger than 0.95 and 0.99, respectively.
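A compact way to reproduce the processing idea is to fit the dilution curve with an LDRW-type model and apply the Stewart-Hamilton relation (flow = injected dose / area under the concentration curve). The parameterization below is one common form of the LDRW curve, the sample data are synthetic, and ordinary nonlinear least squares stands in for the paper's multiple-linear-regression fitting scheme.

```python
# Sketch: LDRW dilution-curve fit plus Stewart-Hamilton flow estimate.
import numpy as np
from scipy.optimize import curve_fit

def ldrw(t, A, mu, lam):
    """One common LDRW parameterization: mu ~ mean transit time,
    lam ~ skewness parameter."""
    t = np.maximum(t, 1e-9)
    return A * np.sqrt(lam / (2 * np.pi * mu * t)) * \
        np.exp(-0.5 * lam * (np.sqrt(t / mu) - np.sqrt(mu / t)) ** 2)

t = np.linspace(0.5, 40.0, 200)
videodensity = ldrw(t, 30.0, 12.0, 4.0) \
    + np.random.default_rng(2).normal(0.0, 0.05, t.size)

popt, _ = curve_fit(ldrw, t, videodensity, p0=(10.0, 8.0, 2.0))
c = ldrw(t, *popt)
area = np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2.0   # trapezoid rule
dose = 5.0                       # injected indicator dose (arbitrary)
print("flow ~ dose / area =", dose / area)
```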
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
The transport-phenomena-based heat transfer and fluid flow calculations in a weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, whose values are rarely known and are difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated against measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
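The sketch below shows the identification half of such a bi-directional scheme: a real-number-coded genetic algorithm searching for (arc efficiency, effective conductivity factor, effective viscosity factor) so that a forward model reproduces measured weld pool dimensions. The forward model here is a made-up analytic stand-in for the 3-D heat transfer and fluid flow solver, and with more parameters than measured outputs the recovered set is generally non-unique.

```python
# Sketch: real-coded GA for uncertain model-input identification.
import numpy as np

rng = np.random.default_rng(5)
bounds = np.array([[0.6, 0.95], [1.0, 30.0], [1.0, 30.0]])

def forward(p):
    eta, fk, fv = p
    # Stand-in "solver": pool half-width and depth vs. the three inputs.
    return np.array([3.0 * eta * (1.0 + 0.02 * fv),
                     1.5 * eta / np.sqrt(fk)])

measured = forward(np.array([0.8, 9.0, 12.0]))   # synthetic experiment

def fitness(p):
    return -np.sum((forward(p) - measured) ** 2)

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))
for _ in range(150):
    fit = np.array([fitness(p) for p in pop])
    best = pop[fit.argmax()].copy()              # elitism
    a, b = rng.integers(0, 40, (2, 40))          # tournament selection
    parents = np.where((fit[a] >= fit[b])[:, None], pop[a], pop[b])
    mates = parents[rng.permutation(40)]
    u = rng.uniform(-0.25, 1.25, (40, 1))        # blend crossover
    children = u * parents + (1.0 - u) * mates
    jump = rng.random((40, 3)) < 0.1             # Gaussian mutation
    children += jump * rng.normal(0.0, 0.05, (40, 3)) \
        * (bounds[:, 1] - bounds[:, 0])
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])
    pop[0] = best
print(pop[0], fitness(pop[0]))
```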
The high performance parallel algorithm for Unified Gas-Kinetic Scheme
NASA Astrophysics Data System (ADS)
Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu
2016-11-01
A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for the sum reduction to moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with results from existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) processor counts. The tested speed-up ratio is nearly linear and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.
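In mpi4py terms, the communicator layout described above can be set up as sketched below: a 2-D Cartesian topology whose first dimension spans physical-space blocks and whose second spans velocity-space blocks, split with Sub() into one communicator for physical-space data exchange and one for the sum reduction that assembles moment integrals. The process-grid shape is an assumption.

```python
# Sketch: two-level communicator layout for a physical x velocity
# domain decomposition (run with a process count divisible by 4 here).
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
n_phys, n_vel = 4, world.size // 4           # assumed grid shape
assert n_phys * n_vel == world.size
cart = world.Create_cart(dims=[n_phys, n_vel], periods=[False, False])

# Same velocity block, neighbors in physical space (halo exchange).
phys_comm = cart.Sub([True, False])
# Same physical block, spanning velocity space (moment reduction).
vel_comm = cart.Sub([False, True])

# Each rank integrates its slice of velocity space; reducing over
# vel_comm completes a moment integral (e.g., a density contribution).
partial_moment = np.array([1.0])
moment = np.empty(1)
vel_comm.Allreduce(partial_moment, moment, op=MPI.SUM)
```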
Wood, Molly S.; Fosness, Ryan L.; Skinner, Kenneth D.; Veilleux, Andrea G.
2016-06-27
The U.S. Geological Survey, in cooperation with the Idaho Transportation Department, updated regional regression equations to estimate peak-flow statistics at ungaged sites on Idaho streams using recent streamflow (flow) data and new statistical techniques. Peak-flow statistics with 80-, 67-, 50-, 43-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities (1.25-, 1.50-, 2.00-, 2.33-, 5.00-, 10.0-, 25.0-, 50.0-, 100-, 200-, and 500-year recurrence intervals, respectively) were estimated for 192 streamgages in Idaho and bordering States with at least 10 years of annual peak-flow record through water year 2013. The streamgages were selected from drainage basins with little or no flow diversion or regulation. The peak-flow statistics were estimated by fitting a log-Pearson type III distribution to records of annual peak flows and applying two additional statistical methods: (1) the Expected Moments Algorithm to help describe uncertainty in annual peak flows and to better represent missing and historical record; and (2) the generalized Multiple Grubbs Beck Test to screen out potentially influential low outliers and to better fit the upper end of the peak-flow distribution. Additionally, a new regional skew was estimated for the Pacific Northwest and used to weight at-station skew at most streamgages. The streamgages were grouped into six regions (numbered 1_2, 3, 4, 5, 6_8, and 7, to maintain consistency in region numbering with a previous study), and the estimated peak-flow statistics were related to basin and climatic characteristics to develop regional regression equations using a generalized least squares procedure. Four of 24 evaluated basin and climatic characteristics were selected for use in the final regional peak-flow regression equations. Overall, the standard error of prediction for the regional peak-flow regression equations ranged from 22 to 132 percent. Among all regions, regression model fit was best for region 4 in west-central Idaho (average standard error of prediction = 46.4 percent; pseudo-R2 > 92 percent) and region 5 in central Idaho (average standard error of prediction = 30.3 percent; pseudo-R2 > 95 percent). Regression model fit was poor for region 7 in southern Idaho (average standard error of prediction = 103 percent; pseudo-R2 < 78 percent) compared to other regions, because few streamgages in region 7 met the criteria for inclusion in the study and the region's semi-arid climate and associated variability in precipitation patterns cause substantial variability in peak flows. A drainage-area ratio-adjustment method, using ratio exponents estimated by generalized least-squares regression, is presented as an alternative to the regional regression equations when peak-flow estimates are desired at an ungaged site close to a streamgage selected for inclusion in this study. The drainage-area ratio-adjustment method is appropriate when the drainage-area ratio between the ungaged and gaged sites is between 0.5 and 1.5. The updated regional peak-flow regression equations had lower total error (standard error of prediction) than all regression equations presented in a 1982 study and than those in four of six regions presented in 2002 and 2003 studies in Idaho. A more extensive streamgage screening process resulted in fewer streamgages being used in the current study than in the 1982, 2002, and 2003 studies.
The use of fewer streamgages and the selection of different explanatory variables likely caused increased error in some regions compared to previous studies, but overall, regional peak-flow regression model fit was generally improved for Idaho. The revised statistical procedures and increased streamgage screening applied in the current study most likely produced a more accurate representation of natural peak-flow conditions. The updated regional peak-flow regression equations will be integrated into the U.S. Geological Survey StreamStats program to allow users to estimate basin and climatic characteristics and peak-flow statistics at ungaged locations of interest. StreamStats estimates peak-flow statistics with quantifiable certainty only when used at sites with basin and climatic characteristics within the range of input variables used to develop the regional regression equations. Both the regional regression equations and StreamStats should be used to estimate peak-flow statistics only in naturally flowing, relatively unregulated streams without substantial local influences on flow, such as large seeps, springs, or other groundwater-surface water interactions that are not widespread or characteristic of the respective region.
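As a rough illustration of the drainage-area ratio adjustment described above, the sketch below transfers a peak-flow statistic from a gaged site to a nearby ungaged site. The exponent value is hypothetical (the study estimates exponents by generalized least-squares regression), and the function enforces the 0.5-1.5 ratio guideline:

```python
def adjust_peak_flow(q_gaged, area_gaged, area_ungaged, exponent=0.8):
    """Drainage-area ratio adjustment (sketch; exponent is illustrative)."""
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.5:
        raise ValueError("ratio outside the 0.5-1.5 range recommended for this method")
    return q_gaged * ratio ** exponent

# Example: a 1-percent AEP peak flow of 120 m^3/s at a gage draining 250 km^2,
# transferred to an ungaged site draining 300 km^2.
print(adjust_peak_flow(120.0, 250.0, 300.0))
```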
Efficient solutions to the Euler equations for supersonic flow with embedded subsonic regions
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Dwoyer, Douglas L.
1987-01-01
A line Gauss-Seidel (LGS) relaxation algorithm, used in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions, is described. Convergence of the basic algorithm to the steady state is quadratic for fully supersonic flows and linear for other flows. This is in contrast to block alternating direction implicit methods (either centrally or upwind differenced) and upwind-biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented herein is easily coupled with methods that detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, which yields a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing oblique and normal shock waves, confirming the efficiency of the iteration strategy.
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model with good structural performance, which yields LES results very close to the filtered DNS. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation that controls both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LES for a turbulent plane jet flow configuration very far from the training database, although it over-estimates the mixing process in that case.
NASA Astrophysics Data System (ADS)
Wei, Jun; Jiang, Guo-Qing; Liu, Xin
2017-09-01
This study proposed three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Unlike traditional data assimilation approaches, which provide prescribed initial/boundary conditions, the proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and the ocean at future times. Two of the algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative machine-learning (ML-based) technique. The algorithms were implemented in a Weather Research and Forecasting model for typhoon simulations to assess their effectiveness, and the results show significant improvement in simulated storm intensities when ocean cooling feedback is included. TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus obtains a synoptic and relatively smooth sea surface cooling. TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network, consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure in terms of its amplitude and position. Sensitivity analysis indicated that typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron counts, the ML-based algorithm appears more efficient at forecasting typhoon-induced ocean cooling and predicting typhoon intensity than the algorithms based on linear regression methods.
Representing pump-capacity relations in groundwater simulation models
Konikow, Leonard F.
2010-01-01
The yield (or discharge) of constant-speed pumps varies with the total dynamic head (or lift) against which the pump is discharging. The variation in yield over the operating range of the pump may be substantial. In groundwater simulations that are used for management evaluations or other purposes, where predictive accuracy depends on the reliability of future discharge estimates, model reliability may be enhanced by including the effects of head-capacity (or pump-capacity) relations on the discharge from the well. A relatively simple algorithm has been incorporated into the widely used MODFLOW groundwater flow model that allows a model user to specify head-capacity curves. The algorithm causes the model to automatically adjust the pumping rate each time step to account for the effect of drawdown in the cell and changing lift, and will shut the pump off if lift exceeds a critical value. The algorithm is available as part of a new multinode well package (MNW2) for MODFLOW.
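A minimal sketch (not the MNW2 source code) of how a head-capacity curve can drive the per-time-step discharge adjustment the abstract describes; the curve points and critical lift are hypothetical:

```python
import numpy as np

# Hypothetical head-capacity curve: discharge (m^3/d) vs. total dynamic lift (m).
lift_pts = np.array([10.0, 20.0, 30.0, 40.0])
q_pts    = np.array([5000., 3500., 1500., 0.])
LIFT_CRITICAL = 38.0   # pump shuts off above this lift (illustrative)

def pump_rate(water_level, discharge_elevation):
    """Discharge for the current time step, given the simulated cell head."""
    lift = discharge_elevation - water_level   # total dynamic head the pump works against
    if lift >= LIFT_CRITICAL:
        return 0.0                             # shut the pump off
    return float(np.interp(lift, lift_pts, q_pts))

# As drawdown lowers the head in the well's cell, the lift grows and the yield falls.
for head in (95.0, 90.0, 85.0):
    print(head, pump_rate(head, discharge_elevation=120.0))
```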
Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.
Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue
2018-05-25
A novel multi-sensor fusion indoor localization algorithm based on the ArUco marker is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online using the Grubbs criterion and K-means clustering, which avoids the map distortion caused by a lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is used to synthesize multi-source information from markers, optical flow, ultrasonic and inertial sensors, which yields a continuous localization result and effectively reduces the position drift caused by the long-term loss of markers in pure marker localization. The proposed algorithm can be easily implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland), yielding a small-size, low-cost marker-based localization system. The experimental results show that the speed estimation of the proposed system is better than that of PX4Flow, and that it achieves centimeter-level mapping and positioning accuracy. The presented system not only gives satisfying localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra-wideband (UWB) beacons and lidar) to further improve localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.
NASA Technical Reports Server (NTRS)
Van Dalsem, W. R.; Steger, J. L.
1985-01-01
A simple and computationally efficient algorithm for solving the unsteady three-dimensional boundary-layer equations in the time-accurate or relaxation mode is presented. Results of the new algorithm are shown to be in quantitative agreement with detailed experimental data for flow over a swept infinite wing. The separated flow over a 6:1 ellipsoid at angle of attack, and the transonic flow over a finite-wing with shock-induced 'mushroom' separation are also computed and compared with available experimental data. It is concluded that complex, separated, three-dimensional viscous layers can be economically and routinely computed using a time-relaxation boundary-layer algorithm.
Sensory prediction on a whiskered robot: a tactile analogy to “optical flow”
Schroeder, Christopher L.; Hartmann, Mitra J. Z.
2012-01-01
When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the “optical flow” equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that “flows” over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip. PMID:23097641
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
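The core of NSGA II is non-dominated sorting of candidate allocations. The sketch below (a generic implementation, not the authors' code) extracts the first Pareto front for a two-objective minimization such as (average length of stay, resource waste cost):

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and better in one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_pareto_front(objectives):
    """Return indices of non-dominated solutions."""
    return [i for i, a in enumerate(objectives)
            if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i)]

# Hypothetical (length-of-stay hours, waste cost) pairs for candidate allocations.
objs = [(4.2, 900.0), (3.8, 1200.0), (4.5, 850.0), (4.0, 1000.0), (4.6, 1300.0)]
print(first_pareto_front(objs))   # (4.6, 1300.0) is dominated and excluded
```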
Vennin, Samuel; Mayer, Alexia; Li, Ye; Fok, Henry; Clapp, Brian; Alastruey, Jordi
2015-01-01
Estimation of aortic and left ventricular (LV) pressure usually requires measurements that are difficult to acquire during the imaging required to obtain concurrent LV dimensions essential for determination of LV mechanical properties. We describe a novel method for deriving aortic pressure from the aortic flow velocity. The target pressure waveform is divided into an early systolic upstroke, determined by the water hammer equation, and a diastolic decay equal to that in the peripheral arterial tree, interposed by a late systolic portion described by a second-order polynomial constrained by conditions of continuity and conservation of mean arterial pressure. Pulse wave velocity (PWV, which can be obtained through imaging), mean arterial pressure, diastolic pressure, and diastolic decay are required inputs for the algorithm. The algorithm was tested using 1) pressure data derived theoretically from prespecified flow waveforms and properties of the arterial tree using a single-tube 1-D model of the arterial tree, and 2) experimental data acquired from a pressure/Doppler flow velocity transducer placed in the ascending aorta in 18 patients (mean ± SD: age 63 ± 11 yr, aortic BP 136 ± 23/73 ± 13 mmHg) at the time of cardiac catheterization. For experimental data, PWV was calculated from measured pressures/flows, and mean and diastolic pressures and diastolic decay were taken from measured pressure (i.e., were assumed to be known). Pressure reconstructed from measured flow agreed well with theoretical pressure: mean ± SD root mean square (RMS) error 0.7 ± 0.1 mmHg. Similarly, for experimental data, pressure reconstructed from measured flow agreed well with measured pressure (mean RMS error 2.4 ± 1.0 mmHg). First systolic shoulder and systolic peak pressures were also accurately rendered (mean ± SD difference 1.4 ± 2.0 mmHg for peak systolic pressure). This is the first noninvasive derivation of aortic pressure based on fluid dynamics (flow and wave speed) in the aorta itself. PMID:26163442
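The early systolic upstroke in the algorithm follows the water hammer relation dP = rho * c * dU, which links the pressure rise to the flow-velocity rise through the wave speed. A minimal sketch with illustrative values (not the authors' implementation):

```python
RHO = 1060.0   # blood density, kg/m^3
PWV = 8.0      # pulse wave velocity c, m/s (obtainable from imaging)
MMHG_PER_PA = 1.0 / 133.322

def upstroke_pressure(p_diastolic_mmhg, velocity_m_s):
    """Early systolic pressure from the water hammer equation P = P_d + rho*c*U."""
    return [p_diastolic_mmhg + RHO * PWV * u * MMHG_PER_PA for u in velocity_m_s]

# Hypothetical aortic velocity samples during early systole (m/s).
u = [0.0, 0.2, 0.5, 0.8, 1.0]
print(upstroke_pressure(73.0, u))   # pressure rises from diastolic with flow velocity
```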
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. This requires estimating the score function from samples of the observed signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on Gaussian-mixture-based kernel density estimation. Experimental results on speech-music separation, compared with a separating algorithm based on the Minimum Mean Square Error estimator, indicate better performance and shorter processing time.
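For context, one common form of the mutual-information-minimizing natural gradient update is W ← W + μ(I − φ(y)yᵀ)W, where φ is the score function estimated from the data. The sketch below uses a fixed tanh score as a stand-in for the paper's Gaussian-mixture kernel density estimate:

```python
import numpy as np

def natural_gradient_bss(x, mu=0.01, n_iter=300):
    """Unmix x (shape: n_sources x n_samples) by mutual information minimization."""
    n, T = x.shape
    W = np.eye(n)                        # unmixing matrix estimate
    for _ in range(n_iter):
        y = W @ x
        phi = np.tanh(y)                 # stand-in score function (assumption)
        # Natural gradient step: W <- W + mu * (I - E[phi(y) y^T]) W
        W += mu * (np.eye(n) - (phi @ y.T) / T) @ W
    return W @ x

# Toy demo: two independent super-Gaussian sources mixed linearly.
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 5000))
A = np.array([[0.6, 0.4], [0.3, 0.7]])
y = natural_gradient_bss(A @ s)
```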
Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.
2015-12-01
A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical, given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES so that it correctly matches early stages of the 2012-13 Tolbachik flow, Kamchatka, Russia, at the 80% level. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
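The two posterior metrics reduce to simple counts once observed and simulated inundation are stored as boolean grids; a minimal sketch with hypothetical grids (not MOLASSES output):

```python
import numpy as np

def predictive_values(simulated, observed):
    """P(A|B): fraction of simulated-inundated cells that were truly inundated.
    P(notA|notB): fraction of simulated-dry cells that stayed dry."""
    sim = simulated.astype(bool)
    obs = observed.astype(bool)
    p_pos = (sim & obs).sum() / sim.sum()
    p_neg = (~sim & ~obs).sum() / (~sim).sum()
    return p_pos, p_neg

rng = np.random.default_rng(0)
observed = rng.random((100, 100)) < 0.3                   # hypothetical true inundation
simulated = observed ^ (rng.random((100, 100)) < 0.05)    # simulation with 5% disagreement
print(predictive_values(simulated, observed))
```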
Huber, Christoph H; Tozzi, Piergiorgio; Hurni, Michel; von Segesser, Ludwig K
2004-06-01
The new magnetically suspended axial pump is free of seals, bearings, mechanical friction and wear. In the absence of a drive shaft or flow meter, pump flow is assessed with an algorithm based on the currents required for impeller rotation and stabilization. The aim of this study is to validate pump performance, algorithm-based flow and effective flow. A series of bovine experiments was performed over 6 h after instrumentation with pressure transducers, a continuous-cardiac-output catheter and intracardiac ultrasound (AcuNav). Pump implantation was through a median sternotomy (LV-->VAD-->calibrated transonic flow probe-->aorta). A Transonic HT311 flow probe was fixed onto the outflow cannula for flow comparison. Animals were electively sacrificed, and systematic pump inspection and renal embolus scoring were performed at necropsy. The observation period was 340+/-62.4 min. The axial pump generated a mean arterial pressure of 58.8+/-14.3 mmHg (max 117 mmHg) running at a speed of 6591.3+/-1395.4 rev./min (min 5000/max 8500 rev./min) and generating 2.5+/-1.0 l/min (min 1.4/max 6.0 l/min) of flow. Correlation between the results of the pump flow algorithm and measured pump flow was linear (y=1.0339x, R2=0.9357). VAD explants were free of macroscopic thrombi. The renal embolus score was 0+/-0. The magnetically suspended axial flow pump provides excellent left ventricular support. The pump flow algorithm used is accurate and reliable. Therefore, there is no need for direct flow measurement.
Algorithms and a short description of the D1_Flow program for numerical modeling of one-dimensional steady-state flow in horizontally heterogeneous aquifers with uneven sloping bases are presented. The algorithms are based on the Dupuit-Forchheimer approximations. The program per...
Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.
2014-01-01
There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
Dynamic graph cuts for efficient inference in Markov Random Fields.
Kohli, Pushmeet; Torr, Philip H S
2007-12-01
In this paper we present a fast, fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. Its running time is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. The application of our algorithm is not limited to this problem; the algorithm is generic and can be used to yield similar improvements in many other cases that involve dynamic change.
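For reference, the static st-mincut/max-flow computation that the dynamic algorithm accelerates can be reproduced with off-the-shelf tools. The sketch below uses networkx (a generic solver, not the authors' dynamic implementation) on a tiny graph:

```python
import networkx as nx

G = nx.DiGraph()
# Tiny hypothetical MRF-style graph: 's'/'t' are terminals, 'a' and 'b' pixels.
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=1.0)
G.add_edge("a", "b", capacity=1.0)   # pairwise smoothness term
G.add_edge("a", "t", capacity=1.0)
G.add_edge("b", "t", capacity=3.0)

flow_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print(flow_value)                    # max-flow value = min-cut capacity
print(source_side, sink_side)        # the cut induces the MAP segmentation labels
```

In the dynamic setting, only the edges whose weights changed between frames trigger recomputation, which is why the cost scales with the amount of change.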
Study on polarized optical flow algorithm for imaging bionic polarization navigation micro sensor
NASA Astrophysics Data System (ADS)
Guan, Le; Liu, Sheng; Li, Shi-qi; Lin, Wei; Zhai, Li-yuan; Chu, Jin-kui
2018-05-01
At present, both point-source and imaging polarization navigation devices can only output angle information, which means that the velocity of the carrier cannot be extracted from the polarization field pattern directly. Optical flow is an image-based method for calculating the velocity of pixel movement in an image. For ordinary optical flow, however, pixel-value differences shrink in weak light, which reduces the calculation accuracy. Polarization imaging can improve both the detection accuracy and the recognition probability of a target because it acquires extra multi-dimensional polarization information from target radiation or reflection. In this paper, combining the polarization imaging technique with the traditional optical flow algorithm, a polarized optical flow algorithm is proposed, and it is verified that the polarized optical flow algorithm adapts well to weak light and can extend the application range of polarization navigation sensors. This research lays the foundation for future day-and-night, all-weather polarization navigation applications.
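As a sketch of the idea under stated assumptions, the snippet below runs a standard dense optical flow (OpenCV's Farneback implementation, standing in for the paper's algorithm) on angle-of-polarization maps rather than intensity images; the synthetic AoP frames are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical angle-of-polarization (AoP) maps scaled to 8-bit grayscale; in weak
# light these keep more contrast than raw intensity images.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, (480, 640)).astype(np.uint8)
aop_prev = cv2.GaussianBlur(base, (0, 0), 3)
aop_next = np.roll(aop_prev, shift=3, axis=1)   # simulate a 3-pixel shift to the right

flow = cv2.calcOpticalFlowFarneback(aop_prev, aop_next, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# The mean motion vector approximates the carrier's image-plane velocity.
print(flow[..., 0].mean(), flow[..., 1].mean())   # roughly (3, 0) for this shift
```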
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
Channel estimation is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter controlling three aspects: algorithm stability, estimation performance and computational cost. However, traditional methods tend to lose estimation performance because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS and sparse VSS-NLMS algorithms is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results are shown to demonstrate that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286
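The flavor of a sparse, variable step-size NLMS update can be seen in the sketch below, which combines a normalized LMS step with a zero-attracting (l1) penalty and a simple error-driven step-size rule; the adaptation constants are illustrative, not the paper's:

```python
import numpy as np

def sparse_vss_nlms(x, d, n_taps=16, rho=1e-4, eps=1e-8,
                    mu_min=0.05, mu_max=1.0, alpha=0.97, delta=1e-2):
    """Identify a sparse channel from input x and desired output d (sketch)."""
    w = np.zeros(n_taps)
    p = 0.0                                    # smoothed error power
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # regressor, most recent sample first
        e = d[n] - w @ u                       # a priori error
        p = alpha * p + (1 - alpha) * e**2
        # Variable step size: large while the error is big, small near convergence.
        mu = mu_min + (mu_max - mu_min) * p / (p + delta)
        w += mu * e * u / (u @ u + eps)        # normalized LMS update
        w -= rho * np.sign(w)                  # zero-attracting sparse penalty
    return w

# Toy demo: a 16-tap channel with only 3 nonzero taps.
rng = np.random.default_rng(0)
h = np.zeros(16); h[[2, 7, 12]] = [1.0, -0.5, 0.3]
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(sparse_vss_nlms(x, d), 2))
```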
Estimation of hectare-scale soil-moisture characteristics from aquifer-test data
Moench, A.F.
2003-01-01
Analysis of a 72-h, constant-rate aquifer test conducted in a coarse-grained and highly permeable, glacial outwash deposit on Cape Cod, Massachusetts revealed that drawdowns measured in 20 piezometers located at various depths below the water table and distances from the pumped well were significantly influenced by effects of drainage from the vadose zone. The influence was greatest in piezometers located close to the water table and diminished with increasing depth. The influence of the vadose zone was evident from a gap, in the intermediate-time zone, between measured drawdowns and drawdowns computed under the assumption that drainage from the vadose zone occurred instantaneously in response to a decline in the elevation of the water table. By means of an analytical model that was designed to account for time-varying drainage, simulated drawdowns could be closely fitted to measured drawdowns regardless of the piezometer locations. Because of the exceptional quality and quantity of the data and the relatively small aquifer heterogeneity, it was possible by inverse modeling to estimate all relevant aquifer parameters and a set of three empirical constants used in the upper-boundary condition to account for the dynamic drainage process. The empirical constants were used to define a one-dimensional (1D) drainage versus time curve that is assumed to be representative of the bulk material overlying the water table. The curve was inverted with a parameter estimation algorithm and a 1D numerical model for variably saturated flow to obtain soil-moisture retention curves and unsaturated hydraulic conductivity relationships defined by the Brooks and Corey equations. Direct analysis of the aquifer-test data using a parameter estimation algorithm and a two-dimensional, axisymmetric numerical model for variably saturated flow yielded similar soil-moisture characteristics. Results suggest that hectare-scale soil-moisture characteristics are different from core-scale predictions and even relatively small amounts of fine-grained material and heterogeneity can dominate the large-scale soil-moisture characteristics and aquifer response.
GREEN + IDMaps: A practical solution for ensuring fairness in a biased internet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapadia, A. C.; Thulasidasan, S.; Feng, W. C.
2002-01-01
GREEN is a proactive queue-management (PQM) algorithm that removes TCP's bias against connections with longer round-trip times, while maintaining high link utilization and low packet loss. GREEN applies knowledge of the steady-state behavior of TCP connections to proactively drop packets, thus preventing congestion from ever occurring. As a result, GREEN ensures much higher fairness between flows than other active queue management schemes like Flow Random Early Drop (FRED) and Stochastic Fair Blue (SFB), which suffer in topologies where a large number of flows have widely varying round-trip times. GREEN's performance relies on its ability to gauge a flow's round-trip time (RTT). In previous work, we presented results for an ideal GREEN router that has accurate RTT information for each flow. In this paper, we present a practical solution based on IDMaps, an Internet distance-estimation service, and compare its performance to an ideal GREEN router. We show that a solution based on IDMaps is practical and maintains high fairness and link utilization, and low packet-loss rates.
River flow simulation using a multilayer perceptron-firefly algorithm model
NASA Astrophysics Data System (ADS)
Darbandi, Sabereh; Pourhosseini, Fatemeh Akhoni
2018-06-01
River flow estimation from records of past time series is important in water resources engineering and management and is required in hydrologic studies. In the past two decades, approaches based on artificial neural networks (ANNs) have been developed. River flow modeling is a non-linear process and is highly affected by the model inputs. In this study, the best input combination for the models was identified using the Gamma test; an MLP-ANN and a hybrid multilayer perceptron-firefly algorithm (MLP-FFA) model were then used to forecast monthly river flow over a set of time intervals using observed data. Measurements from three gauges in the Ajichay watershed, East Azerbaijan, were used to train and test the models for the period from January 2004 to July 2016. Calibration and validation of the MLP-ANN and MLP-FFA models were performed within the same period after preparation of the required data. Two statistics, the root mean square error and the coefficient of determination, were used to compare the outputs of the MLP-ANN and MLP-FFA models. The results show that the MLP-FFA model is satisfactory for monthly river flow simulation in the study area.
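In the hybrid MLP-FFA, the firefly algorithm tunes the network parameters; its core move shifts a dimmer firefly toward a brighter one with an attractiveness that decays with distance, plus a random perturbation. A generic sketch of that move (parameter values illustrative):

```python
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2,
                 rng=np.random.default_rng(0)):
    """Move firefly i (worse objective) toward firefly j (better objective)."""
    r2 = np.sum((x_i - x_j) ** 2)           # squared distance between fireflies
    beta = beta0 * np.exp(-gamma * r2)      # attractiveness decays with distance
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.size) - 0.5)

# Two hypothetical candidate weight vectors for the MLP (4 parameters each).
x_worse = np.array([0.9, -1.2, 0.3, 2.0])
x_better = np.array([0.5, -0.8, 0.1, 1.5])
print(firefly_move(x_worse, x_better))
```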
A Method for the Interpretation of Flow Cytometry Data Using Genetic Algorithms.
Angeletti, Cesar
2018-01-01
Flow cytometry analysis is the method of choice for the differential diagnosis of hematologic disorders. It is typically performed by a trained hematopathologist through visual examination of bidimensional plots, making the analysis time-consuming and sometimes too subjective. Here, a pilot study applying genetic algorithms to flow cytometry data from normal and acute myeloid leukemia subjects is described. Initially, Flow Cytometry Standard files from 316 normal and 43 acute myeloid leukemia subjects were transformed into multidimensional FITS image metafiles. Training was performed by introducing FITS metafiles from 4 normal and 4 acute myeloid leukemia subjects into the artificial intelligence system. Two mathematical algorithms, termed 018330 and 025886, were generated. When tested against a cohort of 312 normal and 39 acute myeloid leukemia subjects, the two algorithms combined showed high discriminatory power, with an area under the receiver operating characteristic (ROC) curve of 0.912. The present results suggest that machine learning systems hold great promise for the interpretation of hematological flow cytometry data.
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems in which information is exchanged among connected neighbors, which greatly improves system fault tolerance: a task can be completed even in the presence of partial agent failures. Through problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems solved in sequence or in parallel, so the computational complexity is greatly reduced. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the bandwidth limitations of multicast. Distributed algorithms have been applied to a variety of real-world problems, and our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body, improving estimation performance in terms of accuracy and convergence speed. In the second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for minimizing travel time on a highway network. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces the total travel time on the test highway network.
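The evacuation-path component builds on the Bellman-Ford recursion, which relaxes every edge repeatedly and therefore lends itself to distributed, per-node updates. A minimal centralized sketch over a hypothetical corridor graph:

```python
def bellman_ford(edges, n_nodes, source):
    """Shortest path costs from source; edges = [(u, v, cost), ...]."""
    dist = [float("inf")] * n_nodes
    dist[source] = 0.0
    for _ in range(n_nodes - 1):           # at most n-1 relaxation rounds
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Hypothetical corridor graph: node 0 is a room, node 3 an exit;
# edge costs could encode congestion-adjusted travel times.
edges = [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0), (1, 3, 6.0), (2, 3, 1.5)]
print(bellman_ford(edges, 4, source=0))    # [0.0, 2.0, 3.0, 4.5]
```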
Autonomous Vision-Based Tethered-Assisted Rover Docking
NASA Technical Reports Server (NTRS)
Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri
2013-01-01
Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils, starting from up to 6 m away at up to 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process, enabling its consideration in future Martian and lunar missions.
A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data
NASA Astrophysics Data System (ADS)
Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.
2016-09-01
Knowledge of 3D, three component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.
An Experimental Study of Energy Consumption in Buildings Providing Ancillary Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yashen; Afshari, Sina; Wolfe, John
Heating, ventilation, and air conditioning (HVAC) systems in commercial buildings can provide ancillary services (AS) to the power grid, but providing AS may increase their energy consumption. This inefficiency is evaluated using the round-trip efficiency (RTE), defined as the ratio between the decrease and the increase in the HVAC system's energy consumption, relative to baseline consumption, that result from providing AS. This paper evaluates the RTE of a 30,000 m2 commercial building providing AS. We propose two methods to estimate the HVAC system's settling time after an AS event, based on temperature and air flow measurements from the building. Experimental data gathered over a 4-month period are used to calculate the RTE for AS signals of various waveforms, magnitudes, durations, and polarities. The results indicate that the settling-time estimation algorithm based on air flow measurements obtains more accurate results than the temperature-based algorithm. Further, we study the impact of the AS signal shape parameters on the RTE and discuss the practical implications of our findings.
Motion estimation of magnetic resonance cardiac images using the Wigner-Ville and Hough transforms
NASA Astrophysics Data System (ADS)
Carranza, N.; Cristóbal, G.; Bayerl, P.; Neumann, H.
2007-12-01
Myocardial motion analysis and quantification is of utmost importance for analyzing contractile heart abnormalities and it can be a symptom of a coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation of the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach. More specifically it relies on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method that is highly robust against incomplete data and noise. The rationale of using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results in the case of synthetic sequences are compared with an implementation of the variational technique for local and global motion estimation, where it is shown that the results are accurate and robust to noise degradations. Results obtained with real cardiac magnetic resonance images are presented.
NASA Astrophysics Data System (ADS)
Carranza, N.; Cristóbal, G.; Sroubek, F.; Ledesma-Carbayo, M. J.; Santos, A.
2006-08-01
Myocardial motion analysis and quantification is of utmost importance for analyzing contractile heart abnormalities, which can be a symptom of coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation to the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach, more specifically on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method that is highly robust against incomplete data and noise. The rationale for using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented to improve the accuracy of the motion detection. Experimental results on synthetic sequences are compared against an implementation of the variational technique for local and global motion estimation, showing that the results obtained here are accurate and robust to noise degradations. The method has also been tested and evaluated on real cardiac magnetic resonance images.
A hybrid approach to estimate the complex motions of clouds in sky images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Zhenzhou; Yu, Dantong; Huang, Dong
Tracking the motion of clouds is essential to forecasting the weather and to predicting the short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow, and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. Then we propose a hybrid tracking framework to incorporate the strength of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate the cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models by reducing at least 30% motion estimation errors compared with the ground-truth motions in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency in several real cloud image datasets by lowering at least 15% Mean Absolute Error (MAE) between predicted images and ground-truth images.
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. Using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point speeds up the evolution of the genetic algorithm. After an accurate abundance estimate is obtained, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate in the robust filter, and again uses the genetic algorithm to refine the abundance estimate efficiently from the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
NASA Astrophysics Data System (ADS)
Geneva, Nicholas; Wang, Lian-Ping
2015-11-01
In the past 25 years, the mesoscopic lattice Boltzmann method (LBM) has become an increasingly popular approach to simulate incompressible flows, including turbulent flows. While LBM solves for more solution variables than the conventional CFD approach based on the macroscopic Navier-Stokes equation, it also offers opportunities for more efficient parallelization. In this talk we will describe several different algorithms that have been developed over the past 10-plus years, which can represent the two core steps of LBM, collision and streaming, more effectively than standard approaches. The application of these algorithms spans LBM simulations ranging from basic channel flows to particle-laden flows. We will cover the essential details of implementing each algorithm for simple 2D flows, and the challenges one faces when using a given algorithm for more complex simulations. The key is to explore the best use of data structures and cache memory. Two basic data structures will be discussed, and the importance of effective data storage for maximizing CPU cache utilization will be addressed. The performance of a 3D turbulent channel flow simulation using these different algorithms and data structures will be compared, along with important hardware-related issues.
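To ground the discussion, here is a compact D2Q9 collision-and-streaming step in the "standard" array-based style the talk compares against, using np.roll for the streaming shift; it is a generic textbook sketch, not one of the optimized algorithms discussed:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
TAU = 0.6                                   # relaxation time (illustrative)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    """One BGK collision + streaming update of the distributions f[9, ny, nx]."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / TAU           # collision
    for i in range(9):                                   # streaming (periodic)
        f[i] = np.roll(f[i], shift=(c[i, 1], c[i, 0]), axis=(0, 1))
    return f

f = equilibrium(np.ones((32, 32)), np.zeros((32, 32)), np.zeros((32, 32)))
f = step(f)
```

The roll-based streaming sweeps every array in full and is memory-bandwidth bound, which is exactly what the specialized data layouts discussed in the talk aim to improve.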
Computation of multi-dimensional viscous supersonic flow
NASA Technical Reports Server (NTRS)
Buggeln, R. C.; Kim, Y. N.; Mcdonald, H.
1986-01-01
A method has been developed for two- and three-dimensional computations of viscous supersonic jet flows interacting with an external flow. The approach employs a reduced form of the Navier-Stokes equations that allows solution as an initial-boundary value problem in space, using an efficient noniterative forward-marching algorithm. Numerical instability associated with forward-marching algorithms for flows with embedded subsonic regions is avoided by approximating the reduced form of the Navier-Stokes equations in the subsonic portions of the boundary layers. Supersonic and subsonic portions of the flow field are calculated simultaneously by a consistently split, linearized block implicit computational algorithm. The results of computations for a series of test cases involving supersonic jet flow are presented and compared with other calculations for axisymmetric cases. Demonstration calculations indicate that the computational technique has great promise as a tool for calculating a wide range of supersonic flow problems, including jet flow. Finally, a User's Manual is presented for the computer code used to perform the calculations.
Lung tumor tracking in fluoroscopic video based on optical flow
Xu, Qianyi; Hamilton, Russell J.; Schowengerdt, Robert A.; Alexander, Brian; Jiang, Steve B.
2008-01-01
Respiratory gating and tumor tracking for dynamic multileaf collimator delivery require accurate and real-time localization of the lung tumor position during treatment. Deriving tumor position from external surrogates such as abdominal surface motion may have large uncertainties due to the intra- and interfraction variations of the correlation between the external surrogates and internal tumor motion. Implanted fiducial markers can be used to track tumors fluoroscopically in real time with sufficient accuracy. However, it may not be a practical procedure when implanting fiducials bronchoscopically. In this work, a method is presented to track the lung tumor mass or relevant anatomic features projected in fluoroscopic images without implanted fiducial markers based on an optical flow algorithm. The algorithm generates the centroid position of the tracked target and ignores shape changes of the tumor mass shadow. The tracking starts with a segmented tumor projection in an initial image frame. Then, the optical flow between this and all incoming frames acquired during treatment delivery is computed as initial estimations of tumor centroid displacements. The tumor contour in the initial frame is transferred to the incoming frames based on the average of the motion vectors, and its positions in the incoming frames are determined by fine-tuning the contour positions using a template matching algorithm with a small search range. The tracking results were validated by comparing with clinician determined contours on each frame. The position difference in 95% of the frames was found to be less than 1.4 pixels (∼0.7 mm) in the best case and 2.8 pixels (∼1.4 mm) in the worst case for the five patients studied. PMID:19175094
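A rough OpenCV-based sketch of the two-stage scheme described above, using Farneback dense flow for the initial centroid displacement and normalized cross-correlation template matching for the fine-tuning; the frames are assumed to be 8-bit grayscale, and the function name, box representation, and search-range value are illustrative:

```python
import cv2
import numpy as np

def track_target(frame_prev, frame_next, bbox, search=8):
    """Track a target box: the optical-flow average gives a coarse shift,
    then template matching over a small window refines it. bbox = (x, y, w, h)."""
    x, y, bw, bh = bbox
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Coarse estimate: mean motion vector inside the current box.
    dx, dy = flow[y:y+bh, x:x+bw].reshape(-1, 2).mean(axis=0)
    cx, cy = int(round(x + dx)), int(round(y + dy))

    # Fine-tune with template matching in a small search window.
    template = frame_prev[y:y+bh, x:x+bw]
    x0, y0 = max(cx - search, 0), max(cy - search, 0)
    window = frame_next[y0:y0+bh+2*search, x0:x0+bw+2*search]
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(res)
    return x0 + best[0], y0 + best[1], bw, bh
```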
Algorithms for Brownian first-passage-time estimation
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
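The abstract does not give the algorithm's details, so the following is only a generic sketch of the class it belongs to: a Gillespie-style continuous-time random walk on a 1D lattice in a linear potential, with hopping rates chosen to satisfy detailed balance (one common convention, with the thermal factor set to 1). The rate form and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mfpt_linear(n_sites=50, force=1.0, a=0.1, D=1.0, n_walkers=500):
    """MFPT from site 0 to absorbing site n_sites (reflecting wall at 0)."""
    k_up = (D / a**2) * np.exp(+force * a / 2.0)   # hop along the force
    k_dn = (D / a**2) * np.exp(-force * a / 2.0)   # hop against the force
    total = k_up + k_dn
    times = np.empty(n_walkers)
    for w in range(n_walkers):
        x, t = 0, 0.0
        while x < n_sites:
            t += rng.exponential(1.0 / total)      # exponential waiting time
            x += 1 if rng.random() < k_up / total else -1
            x = max(x, 0)                          # reflecting wall
        times[w] = t
    return times.mean()

print(mfpt_linear())
```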
EDDA: integrated simulation of debris flow erosion, deposition and property changes
NASA Astrophysics Data System (ADS)
Chen, H. X.; Zhang, L. M.
2014-11-01
Debris flow material properties change during the initiation, transportation and deposition processes, which influences the runout characteristics of the debris flow. A quasi-three-dimensional depth-integrated numerical model, EDDA, is presented in this paper to simulate debris flow erosion, deposition and induced material property changes. The model considers changes in debris flow density, yield stress and dynamic viscosity during the flow process. The yield stress of debris flow mixture is determined at limit equilibrium using the Mohr-Coulomb equation, which is applicable to clear water flow, hyper-concentrated flow and fully developed debris flow. To assure numerical stability and computational efficiency at the same time, a variable time stepping algorithm is developed to solve the governing differential equations. Four numerical tests are conducted to validate the model. The first two tests involve a one-dimensional dam-break water flow and a one-dimensional debris flow with constant properties. The last two tests involve erosion and deposition, and the movement of multi-directional debris flows. The changes in debris flow mass and properties due to either erosion or deposition are shown to affect the runout characteristics significantly. The model is also applied to simulate a large-scale debris flow in Xiaojiagou Ravine to test the performance of the model in catchment-scale simulations. The results suggest that the model estimates well the volume, inundated area, and runout distance of the debris flow. The model is intended for use as a module in a real-time debris flow warning system.
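A minimal sketch of variable time stepping of the kind described above: the step is adapted each iteration from a CFL-type constraint on the current flow depth and velocity. The symbols (h, u, dx, cfl) and the exact stability criterion are assumptions, not the EDDA formulation.

```python
import numpy as np

def adaptive_dt(h, u, dx, g=9.81, cfl=0.5, dt_min=1e-4, dt_max=1.0):
    """Largest stable step for a depth-integrated flow solver."""
    wave_speed = np.abs(u) + np.sqrt(g * np.maximum(h, 1e-6))
    dt = cfl * dx / wave_speed.max()
    return float(np.clip(dt, dt_min, dt_max))

# Usage inside a simulation loop (the solver update is left abstract):
# while t < t_end:
#     dt = adaptive_dt(h, u, dx)
#     h, u = step(h, u, dt)   # hypothetical solver update
#     t += dt
```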
NASA Astrophysics Data System (ADS)
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats well the uncertainty in the extreme flows of hydrological model simulations. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian method: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of the entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
Bohling, Geoffrey C.; Butler, J.J.
2001-01-01
We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
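A hedged sketch of the estimation step named above: fit unknown zonal parameters by minimizing head residuals with Levenberg-Marquardt. Here forward_model is a hypothetical stand-in for the finite-difference flow solver in 1r2dinv, and the parameterization is illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, grid):
    """Placeholder: return simulated heads for log10(K), anisotropy, Ss."""
    log_k, anis, log_ss = params
    return log_k * grid + anis * grid**2 + log_ss   # illustrative only

grid = np.linspace(0.0, 1.0, 20)
observed = forward_model([1.5, -0.3, -4.0], grid) + 0.01 * np.random.randn(20)

# Levenberg-Marquardt fit of the zonal parameters to the observed heads.
fit = least_squares(lambda p: forward_model(p, grid) - observed,
                    x0=[1.0, 0.0, -3.0], method='lm')
print(fit.x)   # estimated zonal parameter values
```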
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
NASA Astrophysics Data System (ADS)
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
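A minimal sketch of the score-map idea: normalize and combine the three information sources named above, then take the top-k cells as pilot point locations. The equal weighting and the simple top-k selection are assumptions, not the authors' formulation.

```python
import numpy as np

def place_pilot_points(facies_uncertainty, sensitivity, data_mismatch, k=10):
    """Return flat indices of the k highest-scoring grid cells."""
    def norm(m):
        m = np.asarray(m, dtype=float)
        return (m - m.min()) / (m.max() - m.min() + 1e-12)
    # Combined score from the three (normalized) information sources.
    score = norm(facies_uncertainty) + norm(sensitivity) + norm(data_mismatch)
    return np.argsort(score.ravel())[::-1][:k]
```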
NASA Astrophysics Data System (ADS)
Ma, W.; Jafarpour, B.
2017-12-01
We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for the coarse frequency estimation (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
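A sketch of the coarse step described above, under stated assumptions: zero crossings are located by linear interpolation between samples (one plausible reading of a "modified" zero-crossing technique) and the median half-period gives a cheap initial frequency estimate without searching the full FFT peak.

```python
import numpy as np

def coarse_frequency(x, fs):
    """Coarse estimate of the dominant frequency of a sinusoid-like signal."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    s = np.signbit(x)
    idx = np.where(s[:-1] != s[1:])[0]               # sign-change intervals
    # Linearly interpolate each crossing instant between samples.
    t = idx + x[idx] / (x[idx] - x[idx + 1])
    # Successive zero crossings are separated by half a period.
    half_periods = np.diff(t) / fs
    return 1.0 / (2.0 * np.median(half_periods))

fs = 1000.0
t = np.arange(4096) / fs
print(coarse_frequency(np.sin(2 * np.pi * 50.3 * t), fs))   # close to 50.3
```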
An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.
Liu, Jing; Huang, Kaiyu; Zhang, Guoxian
2017-04-20
We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to the conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on the deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts, the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set will have no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm finds the true innovation support set correctly. Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinality of either set; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
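A hedged sketch of the innovation-support step: project out the columns of the estimated common support, then run standard OMP on the projected system. This mirrors the projection idea described above but is not the authors' code; names and the fixed sparsity argument k_inn are illustrative.

```python
import numpy as np

def innovation_support(y, A, common, k_inn):
    """Return indices of the innovation support (size k_inn)."""
    Ac = A[:, common]
    # Orthogonal projector onto the complement of span(Ac).
    P = np.eye(A.shape[0]) - Ac @ np.linalg.pinv(Ac)
    y_p, A_p = P @ y, P @ A
    support, r = [], y_p.copy()
    for _ in range(k_inn):                     # standard OMP iterations
        j = int(np.argmax(np.abs(A_p.T @ r)))  # most correlated column
        support.append(j)
        sub = A_p[:, support]
        coef, *_ = np.linalg.lstsq(sub, y_p, rcond=None)
        r = y_p - sub @ coef                   # update the residual
    return support
```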
A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements
NASA Astrophysics Data System (ADS)
Vlasenko, Andrey; Steele, Edward C. C.; Nimmo-Smith, W. Alex M.
2015-06-01
The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements becomes too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data can be restored in accordance with its hydrodynamical basis. Its use is not dependent on the type of flow, or on the types of gaps or noise in the measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, would contaminate the measurements.
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms consolidated in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Yeom, Eunseop; Nam, Kweon-Ho; Paeng, Dong-Guk; Lee, Sang-Joon
2014-08-01
The ultrasound speckle image of blood is mainly attributable to red blood cells (RBCs), which tend to form aggregates. RBC aggregates separate into individual cells when the shear force exceeds a certain value. The dissociation of RBC aggregates influences the performance of the ultrasound speckle image velocimetry (SIV) technique, in which a cross-correlation algorithm is applied to the speckle images to obtain velocity field information. The present study aims to investigate the effect of the dissociation of RBC aggregates on the estimation quality of the SIV technique. Ultrasound B-mode images were captured from porcine blood circulating in a mock-up flow loop with varying flow rate. To verify the measurement performance of the SIV technique, the centerline velocity measured by SIV was compared with that measured from Doppler spectrograms. The dissociation of RBC aggregates was estimated by using the decorrelation of speckle patterns, in which the subsequent window was shifted by the speckle displacement to compensate for decorrelation caused by in-plane loss of speckle patterns. The decorrelation of speckles increases considerably with shear rate, and its variations differ along the radial direction. Because the dissociation of RBC aggregates changes the ultrasound speckles, the estimation quality of the SIV technique is significantly correlated with the decorrelation of speckles. This degradation of measurement quality may be improved by increasing the data acquisition rate. This study would be useful for simultaneous measurement of hemodynamic and hemorheological information of blood flows using only speckle images. Copyright © 2014 Elsevier B.V. All rights reserved.
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
2017-03-15
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
Implicit flux-split schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Walters, R. W.; Van Leer, B.
1985-01-01
Recent progress in the development of implicit algorithms for the Euler equations using the flux-vector splitting method is described. Comparisons of the relative efficiency of relaxation and spatially-split approximately factored methods on a vector processor for two-dimensional flows are made. For transonic flows, the higher convergence rate per iteration of the Gauss-Seidel relaxation algorithms, which are only partially vectorizable, is amply compensated for by the faster computational rate per iteration of the approximately factored algorithm. For supersonic flows, the fully-upwind line-relaxation method is more efficient since the numerical domain of dependence is more closely matched to the physical domain of dependence. A hybrid three-dimensional algorithm using relaxation in one coordinate direction and approximate factorization in the cross-flow plane is developed and applied to a forebody shape at supersonic speeds and a swept, tapered wing at transonic speeds.
Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka
2018-06-01
The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. More efficient memory usage can thus be expected to lead to further acceleration, and optimal memory usage can be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem that exploits, as constraints, the characteristics of the graph generated for this problem. The proposed algorithm, which optimized memory usage, was approximately seven times faster than existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing of August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
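A minimal Gauss-Newton sketch of a nonlinear weighted least-squares estimator of the kind described above. Here pressure_model is a hypothetical stand-in for the surface pressure distribution model, jac for its sensitivity matrix, and W for the measurement weights; folding the propagated prior in as pseudo-observations is left out for brevity.

```python
import numpy as np

def wls_estimate(x0, y, W, pressure_model, jac, n_iter=10):
    """Estimate atmospheric state x (e.g. winds, density) from pressures y."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = y - pressure_model(x)              # pressure residuals
        J = jac(x)                             # sensitivity matrix dh/dx
        # Weighted normal equations: (J^T W J) dx = J^T W r
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        x = x + dx
    return x
```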
Direction of Arrival Estimation Using a Reconfigurable Array
2005-05-06
Keywords: direction-of-arrival estimation; MUSIC algorithm; reconfigurable array; experimental
NASA Astrophysics Data System (ADS)
Yao, Yunjun; Liang, Shunlin; Yu, Jian; Zhao, Shaohua; Lin, Yi; Jia, Kun; Zhang, Xiaotong; Cheng, Jie; Xie, Xianhong; Sun, Liang; Wang, Xuanyu; Zhang, Lilin
2017-04-01
Accurate estimates of terrestrial latent heat of evaporation (LE) for different biomes are essential to assess energy, water and carbon cycles. Different satellite-based Priestley-Taylor (PT) algorithms have been developed to estimate LE in different biomes. However, there are still large uncertainties in the LE estimates of different PT algorithms. In this study, we evaluated differences in estimated terrestrial water flux in different biomes from three satellite-based PT algorithms using ground-observed data from eight eddy covariance (EC) flux towers in China. The results reveal that large differences in daily LE estimates based on EC measurements exist among the three PT algorithms across the eight ecosystem types. At the forest (CBS) site, all algorithms demonstrate high performance, with low root mean square error (RMSE) (less than 16 W/m2) and high squared correlation coefficient (R2) (more than 0.9). At the village (HHV) site, the ATI-PT algorithm has the lowest RMSE (13.9 W/m2), with a bias of 2.7 W/m2 and an R2 of 0.66. At the irrigated crop (HHM) site, almost all algorithms underestimate LE, indicating that these algorithms may not capture wet-soil evaporation through their soil moisture parameterizations. In contrast, the SM-PT algorithm shows high values of R2 (comparable to those of ATI-PT and VPD-PT) at most of the other biomes (grass, wetland, desert and Gobi). There are no obvious differences in seasonal LE estimation using MODIS NDVI and LAI at most sites. However, all meteorological or satellite-based water-related parameters used in the PT algorithms carry uncertainties in the optimization of water constraints. This analysis highlights the need to improve PT algorithms with regard to water constraints.
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.
1986-01-01
The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS program as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. Using asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large problem sizes. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time
NASA Astrophysics Data System (ADS)
Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.
2018-03-01
A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of machining, assembly and differentiation stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into assembly products. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An improved algorithm for balanced POD through an analytic treatment of impulse response tails
NASA Astrophysics Data System (ADS)
Tu, Jonathan H.; Rowley, Clarence W.
2012-06-01
We present a modification of the balanced proper orthogonal decomposition (balanced POD) algorithm for systems with simple impulse response tails. In this new method, we use dynamic mode decomposition (DMD) to estimate the slowly decaying eigenvectors that dominate the long-time behavior of the direct and adjoint impulse responses. This is done using a new, low-memory variant of the DMD algorithm, appropriate for large datasets. We then formulate analytic expressions for the contribution of these eigenvectors to the controllability and observability Gramians. These contributions can be accounted for in the balanced POD algorithm by simply appending the impulse response snapshot matrices (direct and adjoint, respectively) with particular linear combinations of the slow eigenvectors. Aside from these additions to the snapshot matrices, the algorithm remains unchanged. By treating the tails analytically, we eliminate the need to run long impulse response simulations, lowering storage requirements and speeding up ensuing computations. To demonstrate its effectiveness, we apply this method to two examples: the linearized, complex Ginzburg-Landau equation, and the two-dimensional fluid flow past a cylinder. As expected, reduced-order models computed using an analytic tail match or exceed the accuracy of those computed using the standard balanced POD procedure, at a fraction of the cost.
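A hedged sketch of the DMD step named above: estimate the slowly decaying eigenvalues and modes of the tail from a snapshot sequence. This is plain exact DMD, not the paper's low-memory variant, included only to fix ideas; X is the snapshot matrix and r the truncation rank.

```python
import numpy as np

def dmd_modes(X, r):
    """Leading r DMD eigenvalues/modes from snapshot matrix X."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Projected linear operator advancing one snapshot to the next.
    A_tilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)
    evals, evecs = np.linalg.eig(A_tilde)
    # Lift the eigenvectors back to the full state space.
    modes = X2 @ Vh.T @ np.diag(1.0 / s) @ evecs
    return evals, modes
```

The eigenvalues closest to the unit circle correspond to the slowly decaying tail eigenvectors whose Gramian contributions are treated analytically in the paper.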
NASA Technical Reports Server (NTRS)
Phinney, D. E. (Principal Investigator)
1980-01-01
An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
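A sketch of the peak-date estimate described above: slide a reference greenness profile over a field's multitemporal greenness values and keep the lag that maximizes the cross correlation. The function and variable names are illustrative, and the profiles are assumed to be sampled on a common time grid.

```python
import numpy as np

def peak_shift(reference, observed):
    """Shift (in acquisition steps) aligning reference with observed."""
    ref = (reference - reference.mean()) / reference.std()
    obs = (observed - observed.mean()) / observed.std()
    cc = np.correlate(obs, ref, mode='full')       # all relative lags
    return int(np.argmax(cc)) - (len(ref) - 1)     # lag of the cc peak
```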
Cone-Probe Rake Design and Calibration for Supersonic Wind Tunnel Models
NASA Technical Reports Server (NTRS)
Won, Mark J.
1999-01-01
A series of experimental investigations were conducted at the NASA Langley Unitary Plan Wind Tunnel (UPWT) to calibrate cone-probe rakes designed to measure the flow field on 1-2% scale, high-speed wind tunnel models from Mach 2.15 to 2.4. The rakes were developed from a previous design that exhibited unfavorable measurement characteristics caused by a high probe spatial density and flow blockage from the rake body. Calibration parameters included Mach number, total pressure recovery, and flow angularity. Reference conditions were determined from a localized UPWT test section flow survey using a 10deg supersonic wedge probe. Test section Mach number and total pressure were determined using a novel iterative technique that accounted for boundary layer effects on the wedge surface. Cone-probe measurements were correlated to the surveyed flow conditions using analytical functions and recursive algorithms that resolved Mach number, pressure recovery, and flow angle to within +/-0.01, +/-1% and +/-0.1deg, respectively, for angles of attack and sideslip between +/-8deg. Uncertainty estimates indicated the overall cone-probe calibration accuracy was strongly influenced by the propagation of measurement error into the calculated results.
NASA Astrophysics Data System (ADS)
Su, Xiaoru; Shu, Longcang; Chen, Xunhong; Lu, Chengpeng; Wen, Zhonghui
2016-12-01
Interactions between surface water and groundwater are of great significance for evaluating water resources and protecting ecosystem health. Heat as a tracer is widely used to determine this interactive exchange with high precision, low cost and great convenience. The flow in a river-bank cross-section occurs in vertical and lateral directions. In order to depict the flow path and its spatial distribution in bank areas, a genetic algorithm (GA) two-dimensional (2-D) heat-transport nested-loop method for variably saturated sediments, GA-VS2DH, was developed based on Microsoft Visual Basic 6.0. VS2DH was applied to model a 2-D bank-water flow field, and GA was used to calibrate the model automatically by minimizing the difference between observed and simulated temperatures in bank areas. A hypothetical model was developed to assess the reliability of GA-VS2DH in inverse modeling of a river-bank system. Some benchmark tests were conducted to assess the capability of GA-VS2DH. The results indicated that the seepage velocities and parameters estimated with GA-VS2DH were acceptable and reliable. GA-VS2DH was then applied to two field sites in China with different sedimentary materials to verify the reliability of the method. GA-VS2DH could be applied in interpreting the cross-sectional 2-D water flow field. The estimates of horizontal hydraulic conductivity at the Dawen River and Qinhuai River sites are 1.317 and 0.015 m/day, which correspond to the sand and clay sediments at the two sites, respectively.
Harte, P.T.; Mack, Thomas J.
1992-01-01
Hydrogeologic data collected since 1990 were assessed and a ground-water-flow model was refined in this study of the Milford-Souhegan glacial-drift aquifer in Milford, New Hampshire. The hydrogeologic data collected were used to refine estimates of hydraulic conductivity and saturated thickness of the aquifer, which were previously calculated during 1988-90. In October 1990, water levels were measured at 124 wells and piezometers, and at 45 stream-seepage sites on the main stem of the Souhegan River and on small tributary streams overlying the aquifer, to improve understanding of ground-water-flow patterns and stream-seepage gains and losses. Refinement of the ground-water-flow model included a reduction in the number of active cells in layer 2 in the central part of the aquifer, a revision of the simulated hydraulic conductivity in the model layers representing the aquifer, incorporation of a new block-centered finite-difference ground-water-flow model, and incorporation of a new solution algorithm and solver (a preconditioned conjugate-gradient algorithm). Refinements to the model resulted in decreases in the difference between calculated and measured heads at 22 wells. The distribution of gains and losses of stream seepage calculated in simulation with the refined model is similar to that calculated in the previous model simulation. The contributing area to the Savage well, under average pumping conditions, decreased by 0.021 square miles from the area calculated in the previous model simulation. The small difference in the contributing recharge area indicates that the additional data did not substantially enhance the model simulation and that the conceptual framework of the previous model is accurate.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1993-01-01
Three solution algorithms, explicit underrelaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time of the point implicit and explicit underrelaxation methods. The explicit underrelaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
Performance in population models for count data, part II: a new SAEM algorithm
Savic, Radojka; Lavielle, Marc
2009-01-01
Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795
Image-based modeling of the flow transition from a Berea rock matrix to a propped fracture
NASA Astrophysics Data System (ADS)
Sanematsu, P.; Willson, C. S.; Thompson, K. E.
2013-12-01
In the past decade, new technologies and advances in horizontal hydraulic fracturing to extract oil and gas from tight rocks have raised questions regarding the physics of the flow and transport processes that occur during production. Many of the multi-dimensional details of flow from the rock matrix into the fracture and within the proppant-filled fracture are still unknown, which leads to unreliable well production estimates. In this work, we use X-ray computed microtomography (XCT) to image 30/60 CarboEconoprop lightweight ceramic proppant packed between Berea sandstone cores (6 mm in diameter and ~2 mm in height) under 4000 psi (~28 MPa) loading stress. Image processing and segmentation of the 6 micron voxel resolution tomography dataset into solid and void space involved filtering with anisotropic diffusion (AD), segmentation using an indicator kriging (IK) algorithm, and removal of noise using a remove-islands-and-holes program. Physically representative pore network structures were generated from the XCT images, and a representative elementary volume (REV) was analyzed using both permeability and effective porosity convergence. Boundary conditions were introduced to mimic the flow patterns that occur when fluid moves from the matrix into the proppant-filled fracture and then downstream within the proppant-filled fracture. A smaller domain, containing Berea and proppants close to the interface, was meshed using an in-house unstructured meshing algorithm that allows different levels of refinement. Although most of this domain contains proppants, the Berea section accounted for the majority of the elements due to mesh refinement in this region of smaller pores. A finite element method (FEM) Stokes flow model was used to provide more detailed insights into the flow transition from rock matrix to fracture. Results using different pressure gradients are used to describe the flow transition from the Berea rock matrix to the proppant-filled fracture.
Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays
NASA Technical Reports Server (NTRS)
Godara, Lal C.
1990-01-01
The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
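A minimal sketch of the structured estimate described above: for an equispaced linear array the exact correlation matrix is Toeplitz, so the noisy sample matrix can be spatially averaged along its diagonals before the gradient of the constrained LMS update is formed. The function name and the plain diagonal averaging are assumptions consistent with, but not copied from, the paper.

```python
import numpy as np

def structured_acm(R_noisy):
    """Toeplitz-averaged version of a sample array correlation matrix."""
    n = R_noisy.shape[0]
    R = np.empty_like(R_noisy)
    for k in range(-(n - 1), n):
        # Spatial averaging: one value per diagonal (lag k).
        avg = np.mean(np.diagonal(R_noisy, offset=k))
        idx = np.arange(max(0, -k), min(n, n - k))
        R[idx, idx + k] = avg
    return R
```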
A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow
NASA Astrophysics Data System (ADS)
Yu, Lei; Xia, Mingliang; Xuan, Li
2013-10-01
The microvasculature network of the retina plays an important role in the study and diagnosis of retinal diseases (for example, age-related macular degeneration and diabetic retinopathy). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels and background noise all make diagnosis difficult. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels at different likelihood levels. Firstly, we use an isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images, respectively. Secondly, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edges of the retinal vessels. Finally, we combine the results of the matched filtering method and the gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and the intensities of the vessel images exactly represent the likelihood of the vessels.
Enhanced object-based tracking algorithm for convective rain storms and cells
NASA Astrophysics Data System (ADS)
Muñoz, Carlos; Wang, Li-Pen; Willems, Patrick
2018-03-01
This paper proposes a new object-based storm tracking algorithm based upon TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting). TITAN is a widely used convective storm tracking algorithm but has limitations in handling small-scale yet high-intensity storm entities due to its single-threshold identification approach. It also has difficulty effectively tracking fast-moving storms because the employed matching approach relies largely on the overlapping areas between successive storm entities. To address these deficiencies, a number of modifications are proposed and tested in this paper. These include a two-stage multi-threshold storm identification, a new formulation for characterizing a storm's physical features, and an enhanced matching technique in synergy with an optical-flow storm field tracker, together with a more complex merging and splitting scheme that follows from these modifications. High-resolution (5-min and 529-m) radar reflectivity data for 18 storm events over Belgium are used to calibrate and evaluate the algorithm. The performance of the proposed algorithm is compared with that of the original TITAN. The results suggest that the proposed algorithm can better isolate and match convective rainfall entities and provide more reliable and detailed motion estimates. Furthermore, the improvement is found to be more significant for higher rainfall intensities. The new algorithm has the potential to serve as a basis for further applications, such as storm nowcasting and long-term stochastic spatial and temporal rainfall generation.
NASA Astrophysics Data System (ADS)
Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee
2017-07-01
This paper presents a metaheuristic algorithm, the cuckoo search algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be highly efficient for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The static VAR compensator (SVC) is one of the best shunt-connected devices in the flexible alternating current transmission system (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. The CSA gives better results than a genetic algorithm (GA) both without and with the SVC.
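A hedged sketch of a generic cuckoo search loop of the kind applied above, with Lévy-flight steps (Mantegna's rule) and random nest abandonment. Here cost() is a stand-in for the fuel-cost evaluation obtained from a load-flow solution, and all parameter values are illustrative rather than the paper's settings.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    """Mantegna's rule for Levy-distributed step lengths."""
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2)) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(cost, lb, ub, n_nests=25, n_iter=200, pa=0.25, alpha=0.01):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    nests = rng.uniform(lb, ub, (n_nests, lb.size))
    fit = np.array([cost(x) for x in nests])
    for _ in range(n_iter):
        best = nests[np.argmin(fit)]
        for i in range(n_nests):
            # Levy flight biased toward the current best nest.
            trial = np.clip(nests[i] + alpha * levy_step(lb.size) * (nests[i] - best),
                            lb, ub)
            f = cost(trial)
            if f < fit[i]:
                nests[i], fit[i] = trial, f
        # A fraction pa of the worst nests is abandoned and rebuilt at random.
        worst = np.argsort(fit)[-max(1, int(pa * n_nests)):]
        nests[worst] = rng.uniform(lb, ub, (worst.size, lb.size))
        fit[worst] = [cost(x) for x in nests[worst]]
    return nests[np.argmin(fit)], float(fit.min())
```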
High-order hydrodynamic algorithms for exascale computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Nathaniel Ray
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack the requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
NASA Astrophysics Data System (ADS)
Cobos Arribas, Pedro; Monasterio Huelin Macia, Felix
2003-04-01
An FPGA-based hardware implementation of the Santos-Victor optical flow algorithm, useful in robot guidance applications, is described in this paper. The system contains an ALTERA FPGA (20K100), an interface with a digital camera, three VRAM memories to hold the input data, and output memories (a VRAM and an EDO) to hold the results. The system had been used previously to develop and test other vision algorithms, such as image compression and optical flow calculation with differential and correlation methods. The designed system allows the digital camera, or the FPGA output (the results of the algorithms), to be connected to a PC through its FireWire or USB port. The problems encountered on this occasion have motivated the adoption of a different hardware structure for certain vision algorithms with special requirements that need very code-intensive processing.
Chhabra, P. S.; Lambe, A. T.; Canagaratna, M. R.; ...
2015-01-05
Recent developments in high-resolution time-of-flight chemical ionization mass spectrometry (HR-ToF-CIMS) have made it possible to directly detect atmospheric organic compounds in real time with high sensitivity and with little or no fragmentation, including low-volatility, highly oxygenated organic vapors that are precursors to secondary organic aerosol formation. Here, using ions identified by high-resolution spectra from an HR-ToF-CIMS with acetate reagent ion chemistry, we develop an algorithm to estimate the vapor pressures of measured organic acids. The algorithm uses identified ion formulas and calculated double bond equivalencies, information unavailable in quadrupole CIMS technology, as constraints on the number of possible oxygen-containing functional groups. The algorithm is tested with acetate chemical ionization mass spectrometry (acetate-CIMS) spectra of O3 and OH oxidation products of α-pinene and naphthalene formed in a flow reactor with integrated OH exposures ranging from 1.2 × 10^11 to 9.7 × 10^11 molec s cm^-3, corresponding to approximately 1.0 to 7.5 days of equivalent atmospheric oxidation. Measured gas-phase organic acids are similar to those previously observed in environmental chamber studies. For both precursors, we find that acetate-CIMS spectra capture both functionalization (oxygen addition) and fragmentation (carbon loss) as a function of OH exposure. The level of fragmentation is observed to increase with increased oxidation. The predicted condensed-phase secondary organic aerosol (SOA) average acid yields and O/C and H/C ratios agree within uncertainties with previous chamber and flow reactor measurements and with ambient CIMS results. Furthermore, while acetate reagent ion chemistry is used to selectively measure organic acids, in principle this method can be applied to additional reagent ion chemistries depending on the application.
NASA Astrophysics Data System (ADS)
Zhenqing, L.; Sheng, C.; Chaoying, H.
2017-12-01
The core satellite of the Global Precipitation Measurement (GPM) mission was launched on 27 February 2014 with two core sensors: the dual-frequency precipitation radar (DPR) and the GPM microwave imager (GMI). The Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm blends the advantages of the currently most popular satellite-based quantitative precipitation estimation (QPE) algorithms, i.e. the TRMM Multi-satellite Precipitation Analysis (TMPA), the Climate Prediction Center morphing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). IMERG is therefore deemed the state-of-the-art precipitation product, with a high spatio-temporal resolution of 0.1°/30 min. The real-time and post-real-time IMERG products are now available online at https://stormpps.gsfc.nasa.gov/storm. Early assessments of IMERG against gauge observations or analysis products show that the current-version GPM Day-1 IMERG product demonstrates promising performance over China [1], Europe [2], and the United States [3]. However, few studies have examined IMERG's potential for hydrologic utility. In this study, the real-time and final-run post-real-time IMERG products are hydrologically evaluated, with a gauge analysis product as reference, over the Nanliu River basin (Fig. 1) in Southern China from March 2014 to February 2017 with the Xinanjiang model. The statistical metrics relative bias (RB), root-mean-squared error (RMSE), correlation coefficient (CC), probability of detection (POD), false alarm ratio (FAR), critical success index (CSI), and Nash-Sutcliffe efficiency (NSCE) are used to compare the streamflow simulated with IMERG to the observed streamflow. This timely hydrologic evaluation is expected to offer insights into IMERG's potential in hydrologic utility and thus provide useful feedback to the IMERG algorithm developers and hydrologic users.
A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan
NASA Astrophysics Data System (ADS)
Bhongade, A. S.; Khodke, P. M.
2014-04-01
Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though scheduling of such problems is solved using heuristics, the available solution approaches can provide solutions only for moderately sized problems due to the large computation time required. In this work, a scheduling approach is developed for such a flow-shop manufacturing system having machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is applied further for generating schedules for large problems. The GA is found to give near-optimal solutions based on the deviation of the makespan from a lower bound. The lower bound of the makespan of such problems is estimated, and the percent deviation of the makespan from the lower bound is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain an optimal makespan.
Lagrangian analysis by clustering. An example in the Nordic Seas.
NASA Astrophysics Data System (ADS)
Koszalka, Inga; Lacasce, Joseph H.
2010-05-01
We propose a new method for obtaining average velocities and eddy diffusivities from Lagrangian data. Rather than grouping the drifter-derived velocities in uniform geographical bins, as is commonly done, we group a specified number of nearest-neighbor velocities. This is done via a clustering algorithm operating on the instantaneous positions of the drifters. Thus it is the data distribution itself which determines the positions of the averages and the areal extent of the clusters. A major advantage is that because the number of members is essentially the same for all clusters, the statistical accuracy is more uniform than with geographical bins. We illustrate the technique using synthetic data from a stochastic model, employing a realistic mean flow. The latter is an accurate representation of the surface currents in the Nordic Seas and is strongly inhomogeneous in space. We use the clustering algorithm to extract the mean velocities and diffusivities (both of which are known from the stochastic model). We also compare the results to those obtained with fixed geographical bins. Clustering is more successful at capturing spatial variability of the mean flow and also improves convergence in the eddy diffusivity estimates. We discuss both the future prospects and shortcomings of the new method.
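The grouping step can be sketched as a greedy nearest-neighbour pass that repeatedly seizes an unassigned drifter position together with its k-1 nearest unassigned neighbours, so every cluster carries roughly the same number of velocity samples. The synthetic flow field and the greedy strategy below are illustrative stand-ins, not the paper's exact clustering algorithm.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    # Synthetic drifter data: positions (lon, lat) and velocities (u, v).
    pos = rng.uniform(0, 10, size=(2000, 2))
    vel = np.stack([np.sin(pos[:, 1]), np.cos(pos[:, 0])], axis=1) \
          + 0.3 * rng.standard_normal((2000, 2))

    def cluster_stats(pos, vel, k=50):
        """Greedy nearest-neighbour clustering: take an unassigned point and
        its k-1 nearest unassigned neighbours, so clusters have ~k members."""
        unassigned = np.arange(len(pos))
        out = []
        while len(unassigned) >= k:
            tree = cKDTree(pos[unassigned])
            _, idx = tree.query(pos[unassigned[0]], k=k)
            members = unassigned[idx]
            mean_v = vel[members].mean(axis=0)       # cluster-mean velocity
            eddy_var = vel[members].var(axis=0)      # residual (eddy) variance
            out.append((pos[members].mean(axis=0), mean_v, eddy_var))
            unassigned = np.setdiff1d(unassigned, members)
        return out

    clusters = cluster_stats(pos, vel)
    print(len(clusters), "clusters; first mean velocity:", clusters[0][1])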
Algorithm For Hypersonic Flow In Chemical Equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because it overcomes the limitation on mathematical stability encountered in explicit formulations. For the dynamical portion of the problem, Euler equations written in conservation-law form in a Cartesian coordinate system for two-dimensional or axisymmetric flow. For the chemical portion of the problem, equilibrium state of gas at each point in the computational grid determined by minimizing the local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without the help of artificial-dissipation terms to damp out spurious numerical oscillations.
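The chemical step can be illustrated with a toy Gibbs free-energy minimization for a single dissociation reaction, N2 <-> 2N, with element conservation enforced as an equality constraint. The nondimensional chemical potentials below are made-up values; a real equilibrium solver would cover many species with temperature-dependent data and an enthalpy constraint.

    import numpy as np
    from scipy.optimize import minimize

    # Toy equilibrium: N2 <-> 2N at fixed temperature and pressure.
    # g0 values (nondimensional chemical potentials) are illustrative only.
    R_T = 1.0
    g0 = {"N2": -10.0, "N": -3.0}
    p = 1.0                        # pressure in reference units
    n_elem_N = 2.0                 # total moles of N atoms (conserved)

    def gibbs(n):
        n = np.clip(n, 1e-12, None)            # keep logarithms finite
        n_tot = n.sum()
        # Ideal-gas Gibbs energy: G = sum_i n_i (g0_i + RT ln(x_i p))
        mu = np.array([g0["N2"], g0["N"]]) + R_T * np.log(n / n_tot * p)
        return float(n @ mu)

    cons = {"type": "eq", "fun": lambda n: 2.0 * n[0] + n[1] - n_elem_N}
    res = minimize(gibbs, x0=[0.9, 0.2], bounds=[(0, None)] * 2,
                   constraints=cons, method="SLSQP")
    print("equilibrium moles  N2: %.4f  N: %.4f" % tuple(res.x))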
Supercomputer implementation of finite element algorithms for high speed compressible flows
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.
1986-01-01
Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock-capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors with lengths of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed, and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure to predicting 2D viscous and 3D inviscid flows is demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes.
Quantitative fluorescence angiography for neurosurgical interventions.
Weichelt, Claudia; Duscha, Philipp; Steinmeier, Ralf; Meyer, Tobias; Kuß, Julia; Cimalla, Peter; Kirsch, Matthias; Sobottka, Stephan B; Koch, Edmund; Schackert, Gabriele; Morgenstern, Ute
2013-06-01
Present methods for quantitative measurement of cerebral perfusion during neurosurgical operations require additional technology for measurement, data acquisition, and processing. This study used conventional fluorescence video angiography--an established method to visualize blood flow in brain vessels--enhanced by a quantifying perfusion software tool. For these purposes, the fluorescence dye indocyanine green is given intravenously, and after activation by a near-infrared light source the fluorescence signal is recorded. Video data are analyzed by software algorithms to allow quantification of the blood flow. Additionally, perfusion is measured intraoperatively by a reference system. Furthermore, comparative reference measurements using a flow phantom were performed to verify the quantitative blood flow results of the software and to validate the software algorithm. Analysis of intraoperative video data provides characteristic biological parameters. These parameters were implemented in the special flow phantom for experimental validation of the developed software algorithms. Furthermore, various factors that influence the determination of perfusion parameters were analyzed by means of mathematical simulation. Comparing patient measurement, phantom experiment, and computer simulation under certain conditions (variable frame rate, vessel diameter, etc.), the results of the software algorithms are within the range of parameter accuracy of the reference methods. Therefore, the software algorithm for calculating cortical perfusion parameters from video data presents a helpful intraoperative tool without complex additional measurement technology.
An algorithm for propagating the square-root covariance matrix in triangular form
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1976-01-01
A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
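A common way to realize such a propagation, sketched below under the assumption of a linear time update P -> Phi P Phi^T + Q, is to stack [Phi S, Q^(1/2)] and re-triangularize it with a QR factorization. This is a standard square-root filtering device, not necessarily the specific algorithm of the paper.

    import numpy as np

    def propagate_sqrt_cov(S, Phi, Q_sqrt):
        """Time-update of a lower-triangular square root S (P = S S^T).
        Stack [Phi @ S, Q_sqrt] and re-triangularize via QR, which avoids
        ever forming P explicitly."""
        n = S.shape[0]
        A = np.hstack([Phi @ S, Q_sqrt])   # n x 2n, A A^T = Phi P Phi^T + Q
        _, R = np.linalg.qr(A.T)           # A^T = Q R  =>  A A^T = R^T R
        S_new = R[:n, :n].T                # lower triangular
        # Fix signs so the diagonal is nonnegative (QR sign ambiguity).
        signs = np.sign(np.diag(S_new)); signs[signs == 0] = 1.0
        return S_new * signs

    # Quick check against the conventional covariance propagation.
    rng = np.random.default_rng(2)
    n = 4
    S = np.tril(rng.standard_normal((n, n))) + 3 * np.eye(n)
    Phi = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    Q_sqrt = 0.5 * np.eye(n)
    S1 = propagate_sqrt_cov(S, Phi, Q_sqrt)
    P_ref = Phi @ (S @ S.T) @ Phi.T + Q_sqrt @ Q_sqrt.T
    print("max error:", np.abs(S1 @ S1.T - P_ref).max())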
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers, and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to those of the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure, and its estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied, and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm is compared with the state-of-the-art approaches.
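The inversion-free idea can be sketched as a plain conjugate-gradient solve of R w = d followed by the distortionless normalization w = R^(-1) d / (d^H R^(-1) d). This is ordinary CG on a Hermitian covariance, not the constrained MICCG/SICCG recursions themselves; the array geometry and snapshot data are made up.

    import numpy as np

    def mvdr_weights_cg(R, d, n_iter=50, tol=1e-10):
        """Solve R w = d with conjugate gradients (no explicit inverse),
        then normalize so that w satisfies the distortionless constraint."""
        w = np.zeros_like(d)
        r = d - R @ w
        p = r.copy()
        rs = np.vdot(r, r)
        for _ in range(n_iter):
            Rp = R @ p
            alpha = rs / np.vdot(p, Rp)
            w = w + alpha * p
            r = r - alpha * Rp
            rs_new = np.vdot(r, r)
            if rs_new.real < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return w / np.vdot(d, w)     # w = R^{-1} d / (d^H R^{-1} d)

    # Toy narrowband example: Hermitian covariance from random snapshots.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))
    R = X @ X.conj().T / 200 + 1e-3 * np.eye(8)
    d = np.exp(1j * np.pi * np.arange(8) * np.sin(0.3))   # steering vector
    w = mvdr_weights_cg(R, d)
    print("distortionless response w^H d =", np.vdot(w, d))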
NASA Technical Reports Server (NTRS)
Deese, J. E.; Agarwal, R. K.
1989-01-01
Computational fluid dynamics has an increasingly important role in the design and analysis of aircraft as computer hardware becomes faster and algorithms become more efficient. Progress is being made in two directions: more complex and realistic configurations are being treated, and algorithms based on higher approximations to the complete Navier-Stokes equations are being developed. The literature indicates that linear panel methods can model detailed, realistic aircraft geometries in flow regimes where this approximation is valid. As algorithms including higher approximations to the Navier-Stokes equations are developed, computer resource requirements increase rapidly. Generation of suitable grids becomes more difficult, and the number of grid points required to resolve flow features of interest increases. Recently, the development of large vector computers has enabled researchers to attempt more complex geometries with Euler and Navier-Stokes algorithms. The results of calculations for transonic flow about a typical transport and a fighter wing-body configuration using the thin-layer Navier-Stokes equations are described, along with flow about helicopter rotor blades using both the Euler and Navier-Stokes equations.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
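The ER iteration itself can be sketched as alternating projections: enforce the (estimated) Fourier magnitude while keeping the retrieved phase, then re-impose the known pixels in the spatial domain. In the toy demo below, the true magnitude of a synthetic texture stands in for the magnitude that the paper estimates from similar known patches.

    import numpy as np

    def error_reduction(patch, known_mask, target_mag, n_iter=100):
        """ER-style reconstruction of missing pixels: alternate between the
        Fourier-magnitude constraint (keep the retrieved phase) and the
        spatial-domain constraint (re-impose known intensities)."""
        x = patch.copy()
        x[~known_mask] = patch[known_mask].mean()      # neutral initial fill
        for _ in range(n_iter):
            F = np.fft.fft2(x)
            F = target_mag * np.exp(1j * np.angle(F))  # magnitude constraint
            x = np.real(np.fft.ifft2(F))
            x[known_mask] = patch[known_mask]          # known-pixel constraint
        return x

    # Demo on a synthetic texture with a missing block.
    rng = np.random.default_rng(4)
    yy, xx = np.mgrid[0:32, 0:32]
    tex = np.sin(0.8 * xx) + 0.5 * np.sin(1.3 * yy) \
          + 0.1 * rng.standard_normal((32, 32))
    mask = np.ones((32, 32), bool)
    mask[12:20, 12:20] = False
    rec = error_reduction(tex, mask, np.abs(np.fft.fft2(tex)))
    print("mean abs error in hole:", np.abs(rec - tex)[~mask].mean())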
City traffic flow breakdown prediction based on fuzzy rough set
NASA Astrophysics Data System (ADS)
Yang, Xu; Da-wei, Hu; Bing, Su; Duo-jia, Zhang
2017-05-01
In city traffic management, traffic breakdown is a very important issue; it is defined as a speed drop of a certain amount within a dense traffic situation. In order to predict city traffic flow breakdown accurately, we propose in this paper a novel city traffic flow breakdown prediction algorithm based on the fuzzy rough set. Firstly, we illustrate the city traffic flow breakdown problem, for which three definitions are given: 1) the pre-breakdown flow rate; 2) the rate, density, and speed of the traffic flow breakdown; and 3) the duration of the traffic flow breakdown. Moreover, we define a hazard function to represent the probability of the breakdown ending at a given time point. Secondly, as there are many redundant and irrelevant attributes in city flow breakdown prediction, we propose an attribute reduction algorithm using the fuzzy rough set. Thirdly, we discuss how to predict city traffic flow breakdown based on attribute reduction and an SVM classifier. Finally, experiments are conducted on data collected from the I-405 Freeway in Irvine, California. Experimental results demonstrate that the proposed algorithm achieves a lower average error rate of city traffic flow breakdown prediction.
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications
NASA Astrophysics Data System (ADS)
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and deteriorate communication performance heavily. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least-squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed, with examples from both flight and simulated data.
NASA Astrophysics Data System (ADS)
Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn
2013-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches treats the impact of high flows in hydrological modeling well. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2), and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, as measured by the Nash-Sutcliffe efficiency, and provides the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
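The sampling machinery referred to above can be sketched as a random-walk Metropolis-Hastings loop. The two-dimensional correlated Gaussian log-posterior below is a stand-in for a real hydrological likelihood such as the AR(1) models named in the abstract.

    import numpy as np

    def metropolis_hastings(log_post, theta0, n_samp=5000, step=0.1, seed=0):
        """Random-walk Metropolis: propose theta' ~ N(theta, step^2 I) and
        accept with probability min(1, post(theta') / post(theta))."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, float)
        lp = log_post(theta)
        chain = np.empty((n_samp, theta.size))
        for i in range(n_samp):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    # Toy stand-in posterior: a correlated 2-D Gaussian.
    cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    log_post = lambda th: -0.5 * th @ cov_inv @ th
    chain = metropolis_hastings(log_post, [2.0, -2.0])
    print("posterior mean estimate:", chain[2500:].mean(axis=0))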
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
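A minimal sketch of the two phases follows, with synthetic traffic values standing in for the paper's training data and scikit-learn's logistic regression replacing the bespoke maximum likelihood fit.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Estimation phase: learn, from observed traffic, how likely each value
    # written to a control-system variable is to be normal. Features and
    # labels here are synthetic stand-ins for real training data.
    rng = np.random.default_rng(5)
    values = np.concatenate([rng.normal(50, 5, 900),    # normal writes
                             rng.normal(90, 5, 100)])   # abnormal writes
    labels = np.array([1] * 900 + [0] * 100)            # 1 = normal
    model = LogisticRegression().fit(values.reshape(-1, 1), labels)

    # Inspection phase: score the payload of an incoming packet before the
    # controller processes it; flag the write if normalcy probability is low.
    def inspect(write_value, threshold=0.5):
        p_normal = model.predict_proba([[write_value]])[0, 1]
        return p_normal, ("pass" if p_normal >= threshold else "alarm")

    print(inspect(52.0))   # typical value -> pass
    print(inspect(88.0))   # out-of-profile value -> alarm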
Robust image modeling techniques with an image restoration application
NASA Astrophysics Data System (ADS)
Kashyap, Rangasami L.; Eom, Kie-Bum
1988-08-01
A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.
Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.
2014-01-01
Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B's publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA), to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers; this changes the determination of the moment estimators, because B17B uses a conditional probability adjustment to handle low outliers while EMA censors them. B17B and EMA estimates are identical if no historical information, censored data, or low outliers are present in the peak-flow data. EMA with the MGB test (EMA-MGB) was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B17B-GB and EMA-MGB RPD-boxplot results showed that the median RPDs across all streamgaging stations for the 10-, 1-, and 0.2-percent AEPs, computed using station skew, were approximately zero. As the AEP flow estimates decreased (that is, from 10 to 0.2 percent AEP), the variability in the RPDs increased, indicating that the AEP flow estimate was greater for EMA-MGB when compared to B17B-GB. There was only one RPD greater than 100 percent for the 10- and 1-percent AEP estimates, whereas 19 RPDs exceeded 100 percent for the 0.2-percent AEP. At streamgaging stations with low-outlier data, historical peak-flow data, or both, RPDs ranged from −84 to 262 percent for the 0.2-percent AEP flow estimate. When streamgaging stations were separated by the presence of historical peak-flow data (that is, no low outliers or censored peaks) or by low-outlier peak-flow data (no historical data), the results showed that RPD variability was greatest for the 0.2-percent AEP flow estimates, indicating that the treatment of historical and (or) low-outlier data differed between methods and that method differences were most influential when estimating the less probable AEP flows (1, 0.5, and 0.2 percent). When regional skew information was weighted with the station skew, B17B-GB estimates were generally higher than the EMA-MGB estimates for any given AEP. This was related to the different regional skews and mean square errors used in the weighting procedure for each flood frequency analysis.
The B17B-GB weighted skew analysis used a more positive regional skew determined in USGS Water Supply Paper 2433 (Thomas and others, 1997), while the EMA-MGB analysis used a more negative regional skew with a lower mean square error determined from a Bayesian generalized least squares analysis. Regional groupings of streamgaging stations reflected differences in physiographic and climatic characteristics. Potentially influential low flows (PILFs) were more prevalent in arid regions of the State, and generally AEP flows were larger with EMA-MGB than with B17B-GB for gaging stations with PILFs. In most cases EMA-MGB curves would fit the largest floods more accurately than B17B-GB. In areas of the State with more baseflow, such as along the Mogollon Rim and the White Mountains, streamgaging stations generally had fewer PILFs and more positive skews, causing estimated AEP flows to be larger with B17B-GB than with EMA-MGB. The effect of including regional skew was similar for all regions, and the observed pattern was increasingly greater B17B-GB flows (more negative RPDs) with each decreasing AEP quantile. A variation on a goodness-of-fit test statistic was used to describe each method’s ability to fit the largest floods. The mean absolute percent difference between the measured peak flows and the log-Pearson Type 3 (LP3)-estimated flows, for each method, was averaged over the 90th, 75th, and 50th percentiles of peak-flow data at each site. In most percentile subsets, EMA-MGB on average had smaller differences (1 to 3 percent) between the observed and fitted value, suggesting that the EMA-MGB-LP3 distribution is fitting the observed peak-flow data more precisely than B17B-GB. The smallest EMA-MGB percent differences occurred for the greatest 10 percent (90th percentile) of the peak-flow data. When stations were analyzed by USGS NWIS peak flow qualification code groups, the stations with historical peak flows and no low outliers had average percent differences as high as 11 percent greater for B17B-GB, indicating that EMA-MGB utilized the historical information to fit the largest observed floods more accurately. A resampling procedure was used in which 1,000 random subsamples were drawn, each comprising one-half of the observed data. An LP3 distribution was fit to each subsample using B17B-GB and EMA-MGB methods, and the predicted 1-percent AEP flows were compared to those generated from distributions fit to the entire dataset. With station skew, the two methods were similar in the median percent difference, but with weighted skew EMA-MGB estimates were generally better. At two gages where B17B-GB appeared to perform better, a large number of peak flows were deemed to be PILFs by the MGB test, although they did not appear to depart significantly from the trend of the data (step or dogleg appearance). At two gages where EMA-MGB performed better, the MGB identified several PILFs that were affecting the fitted distribution of the B17B-GB method. Monte Carlo simulations were run for the LP3 distribution using different skews and with different assumptions about the expected number of historical peaks. The primary benefit of running Monte Carlo simulations is that the underlying distribution statistics are known, meaning that the true 1-percent AEP is known. The results showed that EMA-MGB performed as well or better in situations where the LP3 distribution had a zero or positive skew and historical information. 
When the skew of the LP3 distribution was negative, EMA-MGB performed significantly better than B17B-GB, and the EMA-MGB estimates were less biased, more closely estimating the true 1-percent AEP for the scenarios with 1, 2, and 10 historical floods.
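For orientation, fitting a log-Pearson Type 3 distribution by the method of moments on log flows, the common core of both B17B and EMA, can be sketched as below. The sketch uses station skew only and none of the historical-data, censoring, or low-outlier machinery (EMA, MGB) that the study actually compares; the peak record is synthetic.

    import numpy as np
    from scipy.stats import pearson3, skew

    def lp3_quantile(peaks_cfs, aep):
        """Fit log-Pearson Type 3 to annual peaks via method of moments on
        log10 flows and return the flow for a given annual exceedance
        probability (station skew only; no regional weighting)."""
        logs = np.log10(np.asarray(peaks_cfs, float))
        g = skew(logs, bias=False)
        z = pearson3.ppf(1.0 - aep, g, loc=logs.mean(), scale=logs.std(ddof=1))
        return 10.0 ** z

    # Synthetic systematic record standing in for a streamgage's peaks.
    rng = np.random.default_rng(6)
    peaks = 10 ** rng.normal(3.0, 0.3, size=60)
    for aep in (0.10, 0.01, 0.002):
        print("AEP %.1f%%: %.0f cfs" % (100 * aep, lp3_quantile(peaks, aep)))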
Simulation of 3-D Nonequilibrium Seeded Air Flow in the NASA-Ames MHD Channel
NASA Technical Reports Server (NTRS)
Gupta, Sumeet; Tannehill, John C.; Mehta, Unmeel B.
2004-01-01
The 3-D nonequilibrium seeded-air flow in the NASA-Ames experimental MHD channel has been numerically simulated. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. The algorithm has been extended in the present study to account for nonequilibrium seeded-air flows. The electrical conductivity of the flow is determined using the program of Park. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the seeded flow. The computed results are in good agreement with the experimental data.
Flux-vector splitting algorithm for chain-rule conservation-law form
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Nguyen, H. L.; Willis, E. A.; Steinthorsson, E.; Li, Z.
1991-01-01
A flux-vector splitting algorithm with Newton-Raphson iteration was developed for the 'full compressible' Navier-Stokes equations cast in chain-rule conservation-law form. The algorithm is intended for problems with deforming spatial domains and for problems whose governing equations cannot be cast in strong conservation-law form. The usefulness of the algorithm for such problems was demonstrated by applying it to analyze the unsteady, two- and three-dimensional flows inside one combustion chamber of a Wankel engine under nonfiring conditions. Solutions were obtained to examine the algorithm in terms of conservation error, robustness, and ability to handle complex flows on time-dependent grid systems.
The threshold algorithm: Description of the methodology and new developments
NASA Astrophysics Data System (ADS)
Neelamraju, Sridhar; Oligschleger, Christina; Schön, J. Christian
2017-10-01
Understanding the dynamics of complex systems requires the investigation of their energy landscape. In particular, the flow of probability on such landscapes is a central feature in visualizing the time evolution of complex systems. To obtain such flows, and the concomitant stable states of the systems and the generalized barriers among them, the threshold algorithm has been developed. Here, we describe the methodology of this approach starting from the fundamental concepts in complex energy landscapes and present recent new developments, the threshold-minimization algorithm and the molecular dynamics threshold algorithm. For applications of these new algorithms, we draw on landscape studies of three disaccharide molecules: lactose, maltose, and sucrose.
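A one-dimensional caricature of a threshold run is sketched below: a random walk accepts any move whose energy stays under the lid, so raising the lid over successive runs reveals which basins become mutually accessible. The double-well energy function is a made-up example, not a landscape from the paper.

    import numpy as np

    def threshold_run(energy, x0, lid, n_steps=20000, step=0.1, seed=0):
        """One threshold run: accept every move whose energy stays below the
        lid (not only downhill moves) and record the lowest point reached."""
        rng = np.random.default_rng(seed)
        x, best_x = float(x0), float(x0)
        best_e = energy(x)
        for _ in range(n_steps):
            x_new = x + step * rng.standard_normal()
            if energy(x_new) < lid:        # only the lid matters
                x = x_new
                if energy(x) < best_e:
                    best_e, best_x = energy(x), x
        return best_x, best_e

    # Toy double-well landscape; with a low lid the walker stays in its own
    # basin, with a lid above the barrier it can flow into the deeper one.
    E = lambda x: (x**2 - 1.0) ** 2 + 0.2 * x
    for lid in (0.5, 1.5):
        print("lid", lid, "->", threshold_run(E, x0=1.0, lid=lid))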
A block-based algorithm for the solution of compressible flows in rotor-stator combinations
NASA Technical Reports Server (NTRS)
Akay, H. U.; Ecer, A.; Beskok, A.
1990-01-01
A block-based solution algorithm is developed for the solution of compressible flows in rotor-stator combinations. The method allows concurrent solution of multiple solution blocks on parallel machines. It also allows a time-averaged interaction at the stator-rotor interfaces. Numerical results are presented to illustrate the performance of the algorithm. The effect of the interaction between the stator and rotor is evaluated.
Gas flow calculation method of a ramjet engine
NASA Astrophysics Data System (ADS)
Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir
2017-11-01
In the present study, a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. To implement the calculation algorithm, a data storage scheme is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
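A Godunov-type finite-volume update can be sketched in one dimension as below, with the simple Rusanov flux standing in for an exact Riemann solver and a face-based loop hinting at topology-independent storage. This is a generic illustration on Burgers' equation, not the FlashFlow implementation.

    import numpy as np

    # Godunov-type finite-volume update for 1-D inviscid Burgers' equation.
    N, L, T = 200, 1.0, 0.25
    dx = L / N
    x = (np.arange(N) + 0.5) * dx
    u = np.sin(2 * np.pi * x) + 0.5            # initial condition

    def rusanov_flux(uL, uR):
        # Rusanov (local Lax-Friedrichs) flux for f(u) = u^2 / 2.
        fL, fR = 0.5 * uL**2, 0.5 * uR**2
        a = np.maximum(np.abs(uL), np.abs(uR))   # local max wave speed
        return 0.5 * (fL + fR) - 0.5 * a * (uR - uL)

    t = 0.0
    while t < T:
        dt = min(0.4 * dx / max(np.abs(u).max(), 1e-12), T - t)  # CFL step
        uL = u                       # left state at each face i+1/2
        uR = np.roll(u, -1)          # right state (periodic domain)
        F = rusanov_flux(uL, uR)     # flux through each face
        u = u - dt / dx * (F - np.roll(F, 1))
        t += dt
    print("solution range after shock formation: [%.3f, %.3f]"
          % (u.min(), u.max()))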
Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Colella, Phillip
1995-01-01
To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1991-01-01
An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.
A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis
NASA Technical Reports Server (NTRS)
Reichert, Bruce A.; Wendt, Bruce J.
1994-01-01
A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
ERIC Educational Resources Information Center
Yang, Ji Seung; Cai, Li
2014-01-01
The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2006-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5°-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.
EDDA 1.0: integrated simulation of debris flow erosion, deposition and property changes
NASA Astrophysics Data System (ADS)
Chen, H. X.; Zhang, L. M.
2015-03-01
Debris flow material properties change during the initiation, transportation and deposition processes, which influences the runout characteristics of the debris flow. A quasi-three-dimensional depth-integrated numerical model, EDDA (Erosion-Deposition Debris flow Analysis), is presented in this paper to simulate debris flow erosion, deposition and induced material property changes. The model considers changes in debris flow density, yield stress and dynamic viscosity during the flow process. The yield stress of the debris flow mixture determined at limit equilibrium using the Mohr-Coulomb equation is applicable to clear water flow, hyper-concentrated flow and fully developed debris flow. To assure numerical stability and computational efficiency at the same time, an adaptive time stepping algorithm is developed to solve the governing differential equations. Four numerical tests are conducted to validate the model. The first two tests involve a one-dimensional debris flow with constant properties and a two-dimensional dam-break water flow. The last two tests involve erosion and deposition, and the movement of multi-directional debris flows. The changes in debris flow mass and properties due to either erosion or deposition are shown to affect the runout characteristics significantly. The model is also applied to simulate a large-scale debris flow in Xiaojiagou Ravine to test the performance of the model in catchment-scale simulations. The results suggest that the model estimates well the volume, inundated area, and runout distance of the debris flow. The model is intended for use as a module in a real-time debris flow warning system.
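The adaptive stepping idea can be sketched as a CFL-style rule for a depth-integrated flow: the time step shrinks as the fastest gravity-wave speed grows, and is clipped to a stability/efficiency band. The specific rule and constants below are illustrative assumptions, not EDDA's actual scheme.

    import numpy as np

    def adaptive_dt(h, v, dx, cfl=0.5, dt_min=1e-4, dt_max=1.0, g=9.81):
        """CFL-style adaptive step for a depth-integrated flow model: the
        step shrinks when the fastest wave |v| + sqrt(g h) quickens, and is
        clipped to a [dt_min, dt_max] band."""
        c = np.abs(v) + np.sqrt(g * np.maximum(h, 0.0))
        dt = cfl * dx / max(c.max(), 1e-12)
        return float(np.clip(dt, dt_min, dt_max))

    # Example: a surge entering the domain forces the step down automatically.
    h = np.full(100, 0.5); v = np.zeros(100)
    print("quiescent dt:", adaptive_dt(h, v, dx=2.0))
    h[:10], v[:10] = 4.0, 8.0          # debris-flow front arrives
    print("surge dt:    ", adaptive_dt(h, v, dx=2.0))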
A sensitivity equation approach to shape optimization in fluid flows
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1994-01-01
A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on the problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, H; Xing, L; Liang, Z
Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of eachmore » tissue type - white matter, grey matter and cerebral spinal fluid inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled three tissue type distributions on perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective to initialize tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions, which is valuable for the early diagnosis of certain brain diseases, e.g. multiple sclerosis.« less
Efficient algorithms for single-axis attitude estimation
NASA Technical Reports Server (NTRS)
Shuster, M. D.
1981-01-01
The computationally efficient algorithms determine attitude from measurements of arc lengths and dihedral angles. The dependence of these algorithms on the solution of trigonometric equations was reduced. Both single-time and batch estimators are presented, along with the covariance analysis of each algorithm.
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing, are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
Contour-based object orientation estimation
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel
2016-04-01
Real-time object orientation estimation is a current problem in computer vision. In this paper we propose an approach to estimating the orientation of objects lacking axial symmetry. The proposed algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere. The viewpoint distribution follows the geosphere principle, which minimizes the size of the training image set. The gathered training images are used to calculate descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error below 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.
Repurposing video recordings for structure motion estimations
NASA Astrophysics Data System (ADS)
Khaloo, Ali; Lattanzi, David
2016-04-01
Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
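A minimal sketch of the per-pixel tracking step using OpenCV's dense Farneback optical flow follows; the file name is a placeholder, and the perspective-rectified synthetic images described above are replaced here by raw frames, so the calibration and camera-motion compensation steps are omitted.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("surveillance.mp4")   # placeholder path
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("could not read video")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    motion = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Median per-frame displacement as a crude structure-motion proxy;
        # the camera's own motion would still need to be compensated.
        motion.append(np.median(flow.reshape(-1, 2), axis=0))
        prev_gray = gray
    cap.release()
    motion = np.array(motion)
    print("frames tracked:", len(motion))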
Ion Thruster Discharge Performance Per Magnetic Field Topography
NASA Technical Reports Server (NTRS)
Wirz, Richard E.; Goebel, Dan
2006-01-01
DC-ION is a detailed computational model for predicting the plasma characteristics of ring-cusp ion thrusters. The advanced magnetic field meshing algorithm used by DC-ION allows precise treatment of the secondary electron flow. This capability allows self-consistent estimates of plasma potential that improve the overall consistency of the results of the discharge model described in Reference [refJPC05mod1]. Plasma potential estimates allow the model to predict the onset of plasma instabilities, an important shortcoming of the previous model for optimizing the design of discharge chambers. A magnetic field mesh simplifies the plasma flow calculations, for both the ions and the secondary electrons, and significantly reduces the numerical diffusion that can occur with meshes not aligned with the magnetic field. Comparing the results of this model to experimental data shows that the behavior of the primary electrons, and the precise manner of their confinement, dictates the fundamental efficiency of ring-cusp thrusters. This correlation is evident in simulations of the conventionally sized NSTAR thruster (30 cm diameter) and the miniature MiXI thruster (3 cm diameter).
Magnitude and Frequency of Floods for Urban and Small Rural Streams in Georgia, 2008
Gotvald, Anthony J.; Knaak, Andrew E.
2011-01-01
A study was conducted that updated methods for estimating the magnitude and frequency of floods in ungaged urban basins in Georgia that are not substantially affected by regulation or tidal fluctuations. Annual peak-flow data for urban streams through September 2008 were analyzed for 50 streamgaging stations (streamgages) in Georgia and 6 streamgages on adjacent urban streams in Florida and South Carolina having 10 or more years of data. Flood-frequency estimates were computed for the 56 urban streamgages by fitting logarithms of annual peak flows for each streamgage to a Pearson Type III distribution. Additionally, basin characteristics for the streamgages were computed by using a geographical information system and computer algorithms. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged urban basins in Georgia. In addition to the 56 urban streamgages, 171 rural streamgages were included in the regression analysis to maintain continuity between flood estimates for urban and rural basins as the basin characteristics pertaining to urbanization approach zero. Because 21 of the rural streamgages have drainage areas less than 1 square mile, the set of equations developed for this study can also be used for estimating flows on small ungaged rural streams in Georgia. Flood-frequency estimates and basin characteristics for the 227 streamgages were combined to form the final database used in the regional regression analysis. Four hydrologic regions were developed for Georgia. The final equations are functions of drainage area and percentage of impervious area for three of the regions, and of drainage area, percentage of developed land, and mean basin slope for the fourth region. Average standard errors of prediction for these regression equations range from 20.0 to 74.5 percent.
Stochastic stability of sigma-point Unscented Predictive Filter.
Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong
2015-07-01
In this paper, the Unscented Predictive Filter (UPF) is derived based on the unscented transformation for nonlinear estimation; it breaks the confines of conventional sigma-point filters, which employ only the Kalman filter as the subject of investigation. To facilitate the new method, the algorithm flow of the UPF is given first. Theoretical analyses then demonstrate that the estimation accuracy of the model error and of the system states is higher for the UPF than for the conventional PF. Moreover, the authors analyze the stochastic boundedness and the error behavior of the UPF for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance keeps stable if the system's initial estimation error, the disturbing noise terms, and the model error are small enough, which is the core of the UPF theory. All of the results are demonstrated by numerical simulations on a nonlinear example system.
NASA Astrophysics Data System (ADS)
Herrera-Vega, Javier; Montero-Hernández, Samuel; Tachtsidis, Ilias; Treviño-Palacios, Carlos G.; Orihuela-Espina, Felipe
2017-11-01
Accurate estimation of brain haemodynamic parameters such as cerebral blood flow and volume, as well as oxygen consumption, i.e. the metabolic rate of oxygen, with functional near infrared spectroscopy (fNIRS) requires precise characterization of light propagation through head tissues. An anatomically realistic forward model of the adult human head with unprecedentedly detailed specification of the 5 scalp sublayers, to account for blood irrigation in the connective tissue layer, is introduced. The full model consists of 9 layers, accounts for optical properties ranging from 750 nm to 950 nm, and has a voxel size of 0.5 mm. The whole model is validated by comparing the predicted remitted spectra, using Monte Carlo simulations of radiation propagation with 10^8 photons, against continuous wave (CW) broadband fNIRS experimental data. As the true oxy- and deoxy-hemoglobin concentrations during acquisition are unknown, a genetic algorithm searched for the vector of parameters that generates a modelled spectrum optimally fitting the experimental spectrum. Differences between experimental and model-predicted spectra were quantified using the root mean square error (RMSE). The RMSE was 0.071 +/- 0.004, 0.108 +/- 0.018 and 0.235 +/- 0.015 at 1, 2 and 3 cm interoptode distance, respectively. The parameter vector of absolute concentrations of haemoglobin species in scalp and cortex retrieved with the genetic algorithm was within histologically plausible ranges. The new model's capability to estimate the contribution of scalp blood flow will permit incorporating this information into the regularization of the inverse problem for a cleaner reconstruction of brain hemodynamics.
NASA Astrophysics Data System (ADS)
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
NASA Astrophysics Data System (ADS)
Cai, Zhonglun; Chen, Peng; Angland, David; Zhang, Xin
2014-03-01
A novel iterative learning control (ILC) algorithm was developed and applied to an active flow control problem. The technique uses pulsed air jets to delay flow separation on a two-element high-lift wing. The ILC algorithm uses position-based pressure measurements to update the actuation. The method was experimentally tested on a wing model in a 0.9 m × 0.6 m low-speed wind tunnel at the University of Southampton. Compressed air and fast switching solenoid valves were used as actuators to excite the flow, and the pressure distribution around the chord of the wing was measured as a feedback control signal for the ILC controller. Experimental results showed that the actuation was able to delay the separation and increase the lift by approximately 10%-15%. By using the ILC algorithm, the controller was able to find the optimum control input and maintain the improvement despite sudden changes of the separation position.
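The update law can be sketched as a P-type ILC iteration u_{k+1}(t) = u_k(t) + L e_k(t) on a toy repetitive process; the plant, learning gain, and reference below are illustrative assumptions, not the wind-tunnel controller.

    import numpy as np

    def ilc_trial(u, plant, y_ref, learning_gain=0.5):
        """One iteration of a P-type ILC law: u_{k+1} = u_k + L * e_k,
        where e_k is the tracking error from the previous trial."""
        y = plant(u)
        e = y_ref - y
        return u + learning_gain * e, e

    # Toy repetitive process standing in for jet actuation vs. pressure
    # response: a static gain plus a one-sample lag, unknown to the ILC.
    def plant(u):
        return 0.8 * u + 0.3 * np.concatenate([[0.0], u[:-1]])

    t = np.linspace(0, 1, 100)
    y_ref = np.sin(2 * np.pi * t)      # desired response trace
    u = np.zeros_like(t)
    for k in range(30):
        u, e = ilc_trial(u, plant, y_ref)
    print("RMS tracking error after 30 trials: %.2e"
          % np.sqrt((e**2).mean()))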
Compressible, multiphase semi-implicit method with moment of fluid interface representation
Jemison, Matthew; Sussman, Mark; Arienti, Marco
2014-09-16
A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows, including stiff materials, enabling large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked, without the aid of any approximate or exact Riemann solvers. As a result, simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large "impedance mismatch."
NASA Astrophysics Data System (ADS)
Clarke, John R.; Southerland, David
1999-07-01
Semi-closed circuit underwater breathing apparatus (UBA) provide a constant flow of mixed gas containing oxygen and nitrogen or helium to a diver. However, as a diver's work rate and metabolic oxygen consumption vary, the oxygen percentages within the UBA can change dramatically. Hence, even a resting diver can become hyperoxic and be at risk for oxygen-induced seizures. Conversely, a hard-working diver can become hypoxic and lose consciousness. Unfortunately, current semi-closed UBA do not contain oxygen monitors. We describe a simple oxygen monitoring system designed and prototyped at the Navy Experimental Diving Unit. The main monitor components include a PIC microcontroller, an analog-to-digital converter, a bicolor LED, and an oxygen sensor. The LED, affixed to the diver's mask, is steady green if the oxygen partial pressure is within pre-defined acceptable limits. A more advanced monitor with a depth sensor and additional computational circuitry could be used to estimate metabolic oxygen consumption. The computational algorithm uses the oxygen partial pressure and the diver's depth to compute oxygen consumption from the steady-state solution of the differential equation describing oxygen concentrations within the UBA. Consequently, dive transients induce errors in the estimate. To evaluate these errors, we used a computer simulation of semi-closed circuit UBA dives to generate transient-rich data as input to the estimation algorithm. A step change in simulated oxygen consumption elicits a monoexponential change in the estimate with a time constant of 5 to 10 minutes. Methods for predicting error and providing a probable error indication to the diver are presented.
NASA Astrophysics Data System (ADS)
Weijers, Jan-Willem; Derudder, Veerle; Janssens, Sven; Petré, Frederik; Bourdoux, André
2006-12-01
To assess the performance of forthcoming 4th-generation wireless local area networks, the algorithmic functionality is usually modelled using a high-level mathematical software package, for instance, Matlab. In order to validate the modelling assumptions against the real physical world, the high-level functional model needs to be translated into a prototype. A systematic system design methodology proves very valuable, since it avoids, or at least reduces, numerous design iterations. In this paper, we propose a novel Matlab-to-hardware design flow, which makes it possible to map the algorithmic functionality onto the target prototyping platform in a systematic and reproducible way. The proposed design flow is partly manual and partly tool-assisted. It is shown that the proposed design flow allows the same testbench to be used throughout the whole design flow and avoids time-consuming and error-prone intermediate translation steps.
An efficient iteration strategy for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Walters, R. W.; Dwoyer, D. L.
1985-01-01
A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions is described. The basic algorithm has the property that convergence to the steady state is quadratic for fully supersonic flows and linear otherwise. This is in contrast to the block ADI methods (either central or upwind differenced) and the upwind-biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented here is easily enhanced to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, thus yielding a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing both oblique and normal shock waves which confirm the efficiency of the iteration strategy.
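The sweep-and-solve pattern that makes LGS relaxation attractive can be illustrated on a model problem. The sketch below applies line Gauss-Seidel, with one tridiagonal (Thomas) solve per vertical grid line, to a 2-D Laplace equation; it is an analogy for the sweeping structure only, not the upwind Euler solver of the paper.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diag a, diag b, super-diag c, rhs d)."""
    n = len(d)
    b, d = b.copy(), d.copy()
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def line_gauss_seidel(u, sweeps=200):
    """Sweep column by column, solving each vertical line implicitly."""
    ny, nx = u.shape
    for _ in range(sweeps):
        for j in range(1, nx - 1):
            n = ny - 2
            a = -np.ones(n); b = 4.0 * np.ones(n); c = -np.ones(n)
            # left neighbours already hold updated values (Gauss-Seidel)
            d = u[1:-1, j - 1] + u[1:-1, j + 1]
            d[0] += u[0, j]; d[-1] += u[-1, j]   # top/bottom boundaries
            u[1:-1, j] = thomas(a, b, c, d)
    return u

u = np.zeros((32, 32)); u[0, :] = 1.0   # hot top wall, Dirichlet elsewhere
u = line_gauss_seidel(u)
```

Solving a whole line at once is what lets a supersonic region converge line by line as the sweep marches through it.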
Pressure modulation algorithm to separate cerebral hemodynamic signals from extracerebral artifacts.
Baker, Wesley B; Parthasarathy, Ashwin B; Ko, Tiffany S; Busch, David R; Abramson, Kenneth; Tzeng, Shih-Yu; Mesquita, Rickson C; Durduran, Turgut; Greenberg, Joel H; Kung, David K; Yodh, Arjun G
2015-07-01
We introduce and validate a pressure measurement paradigm that reduces extracerebral contamination from superficial tissues in optical monitoring of cerebral blood flow with diffuse correlation spectroscopy (DCS). The scheme determines subject-specific contributions of extracerebral and cerebral tissues to the DCS signal by utilizing probe pressure modulation to induce variations in extracerebral blood flow. For analysis, the head is modeled as a two-layer medium and is probed with long and short source-detector separations. Then a combination of pressure modulation and a modified Beer-Lambert law for flow enables experimenters to linearly relate differential DCS signals to cerebral and extracerebral blood flow variation without a priori anatomical information. We demonstrate the algorithm's ability to isolate cerebral blood flow during a finger-tapping task and during graded scalp ischemia in healthy adults. Finally, we adapt the pressure modulation algorithm to ameliorate extracerebral contamination in monitoring of cerebral blood oxygenation and blood volume by near-infrared spectroscopy.
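The linear inversion at the heart of this scheme can be sketched compactly: with the long separation sensitive to both layers and the short separation sensitive mainly to the scalp, the differential signals form a small linear system in the two flow changes. The sensitivity coefficients and measured values below are made-up placeholders; in practice they would come from the pressure-modulation calibration and the two-layer model.

```python
import numpy as np

# Sketch: invert a linearized, modified Beer-Lambert-type DCS model
#   dOD_long  = d_c * dF_cerebral + d_ec_long  * dF_extracerebral
#   dOD_short =                     d_ec_short * dF_extracerebral
# All coefficient values here are hypothetical.

d_c, d_ec_long, d_ec_short = 0.8, 0.3, 1.1   # hypothetical sensitivities

A = np.array([[d_c, d_ec_long],
              [0.0, d_ec_short]])
dOD = np.array([0.05, 0.02])                 # measured differential signals

dF_cerebral, dF_extracerebral = np.linalg.solve(A, dOD)
print(dF_cerebral, dF_extracerebral)
```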
Fluid-dynamic design optimization of hydraulic proportional directional valves
NASA Astrophysics Data System (ADS)
Amirante, Riccardo; Catalano, Luciano Andrea; Poloni, Carlo; Tamburrano, Paolo
2014-10-01
This article proposes an effective methodology for the fluid-dynamic design optimization of the sliding spool of a hydraulic proportional directional valve: the goal is the minimization of the flow force at a prescribed flow rate, so as to reduce the required opening force while keeping the operation features unchanged. A full three-dimensional model of the flow field within the valve is employed to accurately predict the flow force acting on the spool. A theoretical analysis, based on both the axial momentum equation and flow simulations, is conducted to define the design parameters, which need to be properly selected in order to reduce the flow force without significantly affecting the flow rate. A genetic algorithm, coupled with a computational fluid dynamics flow solver, is employed to minimize the flow force acting on the valve spool at the maximum opening. A comparison with a typical single-objective optimization algorithm is performed to evaluate performance and effectiveness of the employed genetic algorithm. The optimized spool develops a maximum flow force which is smaller than that produced by the commercially available valve, mainly due to some major modifications occurring in the discharge section. Reducing the flow force and thus the electromagnetic force exerted by the solenoid actuators allows the operational range of direct (single-stage) driven valves to be enlarged.
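To make the optimization loop concrete, the sketch below runs a toy genetic algorithm of the kind described, with a cheap analytic surrogate standing in for the 3-D CFD evaluation of the flow force; the design variables, bounds, and objective are entirely hypothetical.

```python
import random

# Toy genetic algorithm minimizing a surrogate "flow force" objective.
# In the paper each evaluation is a full CFD run; here an analytic
# stand-in is used and the spool geometry is reduced to two
# hypothetical design variables (notch depth, notch angle).

def flow_force(x):                       # surrogate, not a CFD solver
    depth, angle = x
    return (depth - 0.6) ** 2 + 0.5 * (angle - 35.0) ** 2 / 100.0

BOUNDS = [(0.0, 1.0), (10.0, 60.0)]

def ga(pop_size=30, gens=50, mut=0.1):
    pop = [[random.uniform(*b) for b in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=flow_force)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = [(a + b) / 2.0 for a, b in zip(p1, p2)]   # crossover
            for i, (lo, hi) in enumerate(BOUNDS):             # mutation
                if random.random() < mut:
                    child[i] = min(hi, max(lo, child[i] +
                                           random.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return min(pop, key=flow_force)

print(ga())
```

The population-based search is what justifies the comparison with a single-objective (single-point) optimizer: the GA trades more objective evaluations for robustness to local minima.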
CUFID-query: accurate network querying through random walk based network flow estimation.
Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun
2017-12-28
Functional modules in biological networks consist of numerous biomolecules and their complicated interactions. Recent studies have shown that biomolecules in a functional module tend to have similar interaction patterns and that such modules are often conserved across biological networks of different species. As a result, such conserved functional modules can be identified through comparative analysis of biological networks. In this work, we propose a novel network querying algorithm based on the CUFID (Comparative network analysis Using the steady-state network Flow to IDentify orthologous proteins) framework combined with an efficient seed-and-extension approach. The proposed algorithm, CUFID-query, can accurately detect conserved functional modules as small subnetworks in the target network that are expected to perform similar functions to the given query functional module. The CUFID framework was recently developed for probabilistic pairwise global comparison of biological networks, and it has been applied to pairwise global network alignment, where the framework was shown to yield accurate network alignment results. In the proposed CUFID-query algorithm, we adopt the CUFID framework and extend it for local network alignment, specifically to solve network querying problems. First, in the seed selection phase, the proposed method utilizes the CUFID framework to compare the query and the target networks and to predict the probabilistic node-to-node correspondence between the networks. Next, the algorithm selects and greedily extends the seed in the target network by iteratively adding nodes that have frequent interactions with other nodes in the seed network, such that the conductance of the extended network is maximally reduced. Finally, CUFID-query removes irrelevant nodes from the querying results based on the personalized PageRank vector for the induced network that includes the fully extended network and its neighboring nodes. Through extensive performance evaluation based on biological networks with known functional modules, we show that CUFID-query outperforms the existing state-of-the-art algorithms in terms of prediction accuracy and biological significance of the predictions.
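The greedy extension step lends itself to a compact sketch: starting from a seed, repeatedly add the frontier node that most reduces the conductance of the growing subnetwork, stopping when no addition helps. The toy graph below is invented for illustration; the real algorithm operates on large biological networks with CUFID-derived seeds.

```python
# Sketch of the greedy, conductance-reducing seed-extension step.
# Adjacency structure and seed are toy examples, not real PPI data.

def conductance(graph, s):
    """Edges leaving s divided by the total degree (volume) of s."""
    cut = sum(1 for u in s for v in graph[u] if v not in s)
    vol = sum(len(graph[u]) for u in s)
    return cut / vol if vol else 1.0

def extend_seed(graph, seed):
    s = set(seed)
    while True:
        frontier = {v for u in s for v in graph[u]} - s
        best = min(frontier,
                   key=lambda v: conductance(graph, s | {v}), default=None)
        if best is None or conductance(graph, s | {best}) >= conductance(graph, s):
            return s
        s.add(best)

# Two loosely connected triangles; extension stays within the first one.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
     4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(extend_seed(g, {1}))   # -> {1, 2, 3}
```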
NASA Astrophysics Data System (ADS)
Liu, Laqun; Wang, Huihui; Guo, Fan; Zou, Wenkang; Liu, Dagang
2017-04-01
Based on the 3-dimensional Particle-In-Cell (PIC) code CHIPIC3D, with a new circuit boundary algorithm we developed, a conical magnetically insulated transmission line (MITL) with a 1.0-MV linear transformer driver (LTD) is explored numerically. The switch jitter times of the LTD are critical system parameters that are difficult to measure experimentally. In this paper, these values are obtained by comparing the PIC results with experimental data for a large-diode-gap MITL. By decreasing the diode gap, we find that the PIC results agree well with experimental data as long as the MITL operates in self-limited flow, regardless of the diode gap. However, when the diode gap decreases to a threshold, the self-limited flow transitions to a load-limited flow. In this situation, the PIC results no longer agree with the experimental data, owing to anode plasma expansion in the diode load. This disagreement is used to estimate the plasma expansion speed.
Lee, Jung Keun; Park, Edward J.; Robinovitch, Stephen N.
2012-01-01
This paper proposes a Kalman filter-based attitude (i.e., roll and pitch) estimation algorithm using an inertial sensor composed of a triaxial accelerometer and a triaxial gyroscope. In particular, the proposed algorithm has been developed for accurate attitude estimation during dynamic conditions, in which external acceleration is present. Although external acceleration is the main source of attitude estimation error, and despite the need for its accurate estimation in many applications, this problem, which can be critical for attitude estimation, has not been addressed explicitly in the literature. Accordingly, this paper addresses the combined estimation problem of the attitude and external acceleration. Experimental tests were conducted to verify the performance of the proposed algorithm in various dynamic condition settings and to provide further insight into the variations in the estimation accuracy. Furthermore, two different approaches for dealing with the estimation problem during dynamic conditions were compared: a threshold-based switching approach versus an acceleration model-based approach. Based on an external acceleration model, the proposed algorithm was capable of estimating accurate attitudes and external accelerations for short accelerated periods, showing its high effectiveness during short-term fast dynamic conditions. Conversely, when the testing condition involved prolonged high external accelerations, the proposed algorithm exhibited gradually increasing errors. However, as soon as the condition returned to static or quasi-static conditions, the algorithm was able to stabilize the estimation error, regaining its high estimation accuracy. PMID:22977288
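For orientation, a minimal roll/pitch Kalman filter sketch is given below. It shows only the generic accelerometer/gyroscope fusion structure; where the paper estimates external acceleration explicitly with a first-order model, this sketch crudely inflates the measurement noise when the accelerometer norm deviates from gravity, which is closer to the threshold-based switching idea. All gains and noise values are illustrative.

```python
import numpy as np

# Minimal roll/pitch Kalman filter sketch for a 3-axis accel + gyro IMU.
# Generic structure only; not the authors' filter.
G = 9.81

def accel_to_attitude(a):
    """Roll and pitch implied by the gravity direction in body frame."""
    roll = np.arctan2(a[1], a[2])
    pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
    return np.array([roll, pitch])

class AttitudeKF:
    def __init__(self, q=1e-4, r=1e-2):
        self.x = np.zeros(2)            # state: [roll, pitch]
        self.P = np.eye(2)
        self.Q = q * np.eye(2)
        self.R0 = r

    def step(self, gyro, accel, dt):
        # predict: integrate body rates (small-angle approximation)
        self.x += gyro[:2] * dt
        self.P += self.Q
        # update: trust the accelerometer less when |a| deviates from g,
        # a crude stand-in for explicit external-acceleration estimation
        r = self.R0 * (1.0 + 100.0 * abs(np.linalg.norm(accel) - G) / G)
        z = accel_to_attitude(accel)
        K = self.P @ np.linalg.inv(self.P + r * np.eye(2))
        self.x += K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x

kf = AttitudeKF()
print(kf.step(np.array([0.01, 0.0, 0.0]), np.array([0.0, 0.5, 9.8]), 0.01))
```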
Computation of multi-dimensional viscous supersonic jet flow
NASA Technical Reports Server (NTRS)
Kim, Y. N.; Buggeln, R. C.; Mcdonald, H.
1986-01-01
A new method has been developed for two- and three-dimensional computations of viscous supersonic flows with embedded subsonic regions adjacent to solid boundaries. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward marching algorithm. Numerical instability associated with forward marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split linearized block implicit computational algorithm. The results of computations for a series of test cases relevant to internal supersonic flow are presented and compared with data. Comparisons between data and computation are in general excellent, indicating that the computational technique has great promise as a tool for calculating supersonic flow with embedded subsonic regions. Finally, a User's Manual is presented for the computer code used to perform the calculations.
Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling
NASA Astrophysics Data System (ADS)
Reed, Seann M.
2003-09-01
The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT), has been developed to automatically, accurately, and efficiently assign flow directions to coarse-resolution grid cells using information from any higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency to produce diagonal flow directions. Analyses of results at different output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that will produce minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
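A stripped-down sketch of the tracing idea is given below: starting from a coarse cell's outlet pixel, follow the fine-grid D8 directions downstream until the trace sits in a different coarse cell and the upstream drainage area exceeds the threshold, then point the coarse flow direction at that cell. The grid conventions and function signature are illustrative assumptions, not the NWS-HL implementation.

```python
import numpy as np

# Simplified COTAT-style tracing sketch. Assumes every fine cell carries
# a valid D8 code (ESRI-style: 1=E, 2=SE, ..., 128=NE).
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def coarse_direction(fdr, facc, outlet, factor, area_thresh):
    """fdr/facc: fine D8 directions and flow accumulation; outlet: (row, col).

    Returns the (row, col) index of the coarse cell receiving the flow,
    or None if the trace leaves the domain.
    """
    r, c = outlet
    home = (r // factor, c // factor)
    while True:
        dr, dc = D8[fdr[r, c]]
        r, c = r + dr, c + dc
        if not (0 <= r < fdr.shape[0] and 0 <= c < fdr.shape[1]):
            return None
        here = (r // factor, c // factor)
        if here != home and facc[r, c] >= area_thresh:
            return here

# Tiny demo: 4x4 fine grid, 2x2 coarse cells, everything drains east.
fdr = np.full((4, 4), 1)                 # all pixels flow east
facc = np.tile(np.arange(1, 5), (4, 1))  # accumulation grows eastward
print(coarse_direction(fdr, facc, outlet=(1, 1), factor=2, area_thresh=3))
```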
NASA Astrophysics Data System (ADS)
Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto
2016-08-01
We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point related to applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded on Darcy's law. In these cases, introducing reciprocal drawdown curves in the database of an inverse problem is, to a certain extent, equivalent to duplicating some information. We present a theoretical analysis of the way a least-squares objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, possibly corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems with increasing levels of complexity. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model. The uncertainty on optimal parameter estimates is influenced by the strength of measurement errors and is not significantly diminished or increased by adding noisy reciprocal information.
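The interplay between a least-squares objective, Levenberg-Marquardt minimization, and duplicated observations can be sketched on a toy problem. Appending an exact copy of a residual row doubles its contribution to both J^T J and J^T r, so the Gauss-Newton step, and hence the optimal estimates, are essentially unchanged, mirroring the finding above. The model and data below are invented.

```python
import numpy as np

def residuals(p, t, y):
    return y - p[0] * np.exp(-p[1] * t)      # toy drawdown-like model

def jacobian(p, t):
    e = np.exp(-p[1] * t)
    return np.column_stack([-e, p[0] * t * e])

def levenberg_marquardt(p, t, y, lam=1e-2, iters=50):
    for _ in range(iters):
        r, J = residuals(p, t, y), jacobian(p, t)
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        if np.sum(residuals(p + step, t, y) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam / 3.0      # accept step, relax damping
        else:
            lam *= 3.0                        # reject step, increase damping
    return p

t = np.linspace(0.1, 5.0, 40)
y = 2.0 * np.exp(-0.7 * t) + 0.01 * np.random.randn(t.size)
t2, y2 = np.concatenate([t, t]), np.concatenate([y, y])  # "reciprocal" copies
print(levenberg_marquardt(np.array([1.0, 1.0]), t, y))
print(levenberg_marquardt(np.array([1.0, 1.0]), t2, y2))  # nearly identical
```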
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
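A minimal sketch of such an EM iteration is given below, simplified by dropping the background term so the M-step has a closed form; the censoring threshold, pulse shape, and data are invented. The E-step uses the identity E[N | N >= c] = lambda * P(N >= c-1) / P(N >= c) for a Poisson variable N with mean lambda.

```python
import numpy as np
from scipy.stats import poisson

# EM sketch for the amplitude of a Poisson signal with known shape s,
# observed through a saturating detector: y = min(n, c). The additive
# background of the paper is omitted so the M-step stays closed-form.

def em_censored_poisson(y, s, c, a=1.0, iters=100):
    for _ in range(iters):
        lam = a * s
        # E-step: expected true counts; for saturated bins use
        # E[N | N >= c] = lam * P(N >= c-1) / P(N >= c)
        n_hat = y.astype(float)
        sat = y >= c
        n_hat[sat] = (lam[sat] * poisson.sf(c - 2, lam[sat])
                      / poisson.sf(c - 1, lam[sat]))
        # M-step: closed-form Poisson amplitude update
        a = n_hat.sum() / s.sum()
    return a

s = np.array([0.2, 0.5, 1.0, 0.5, 0.2])      # known pulse shape
true_n = np.random.poisson(40.0 * s)
y = np.minimum(true_n, 25)                   # detector saturates at 25
print(em_censored_poisson(y, s, 25))
```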
A comparison of upwind schemes for computation of three-dimensional hypersonic real-gas flows
NASA Technical Reports Server (NTRS)
Gerbsch, R. A.; Agarwal, R. K.
1992-01-01
The method of Suresh and Liou (1992) is extended, and the resulting explicit noniterative upwind finite-volume algorithm is applied to the integration of 3D parabolized Navier-Stokes equations to model 3D hypersonic real-gas flowfields. The solver is second-order accurate in the marching direction and employs flux-limiters to make the algorithm second-order accurate, with total variation diminishing in the cross-flow direction. The algorithm is used to compute hypersonic flow over a yawed cone and over the Ames All-Body Hypersonic Vehicle. The solutions obtained agree well with other computational results and with experimental data.
Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1995-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
Comparison of Conventional and ANN Models for River Flow Forecasting
NASA Astrophysics Data System (ADS)
Jain, A.; Ganti, R.
2011-12-01
Hydrological models are useful in many water resources applications such as flood control, irrigation and drainage, hydro power generation, water supply, erosion and sediment control, etc. Estimates of runoff are needed in many water resources planning, design, development, operation and maintenance activities. River flow is generally estimated using time series or rainfall-runoff models. Recently, soft artificial intelligence tools such as Artificial Neural Networks (ANNs) have become popular for research purposes but have not been extensively adopted in operational hydrological forecasts. There is a strong need to develop ANN models based on real catchment data and compare them with the conventional models. In this paper, a comparative study has been carried out for river flow forecasting using the conventional and ANN models. Among the conventional models, multiple linear and nonlinear regression models, and time series models of the autoregressive (AR) type, have been developed. A feed-forward neural network structure trained using the back-propagation algorithm, a gradient search method, was adopted. The daily river flow data from the Godavari Basin at Polavaram, Andhra Pradesh, India, have been employed to develop all the models included here. Two inputs, the flows at the two previous time steps (Q(t-1) and Q(t-2)), were selected using partial autocorrelation analysis for forecasting the flow at time t, Q(t). A wide range of error statistics has been used to evaluate the performance of all the models developed in this study. It has been found that the regression and AR models performed comparably, and that the ANN model performed the best amongst all the models investigated in this study. It is concluded that the ANN model should be adopted in real catchments for hydrological modeling and forecasting.
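A self-contained sketch of the adopted model structure, a small feed-forward network with inputs Q(t-1) and Q(t-2) trained by back-propagation, is given below on synthetic data standing in for the daily flow series; the architecture and learning rate are arbitrary choices, not those of the study.

```python
import numpy as np

# Feed-forward net with inputs Q(t-1), Q(t-2), trained by plain gradient
# descent (back-propagation) to forecast Q(t). Data are synthetic.
rng = np.random.default_rng(0)
q = np.cumsum(rng.normal(0, 1, 500)) + 50          # synthetic daily flow
q = (q - q.min()) / (q.max() - q.min())            # scale to [0, 1]
X = np.column_stack([q[1:-1], q[:-2]])             # Q(t-1), Q(t-2)
y = q[2:]

W1 = rng.normal(0, 0.5, (2, 5)); b1 = np.zeros(5)  # 2-5-1 architecture
W2 = rng.normal(0, 0.5, (5, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    pred = (h @ W2 + b2).ravel()                   # linear output
    err = pred - y
    # back-propagate the squared-error gradient
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("RMSE:", np.sqrt(np.mean(err ** 2)))
```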
An improved optical flow tracking technique for real-time MR-guided beam therapies in moving organs
NASA Astrophysics Data System (ADS)
Zachiu, C.; Papadakis, N.; Ries, M.; Moonen, C.; de Senneville, B. Denis
2015-12-01
Magnetic resonance (MR) guided high intensity focused ultrasound and external beam radiotherapy interventions, which we shall refer to as beam therapies/interventions, are promising techniques for the non-invasive ablation of tumours in abdominal organs. However, therapeutic energy delivery in these areas becomes challenging due to the continuous displacement of the organs with respiration. Previous studies have addressed this problem by coupling high-framerate MR-imaging with a tracking technique based on the algorithm proposed by Horn and Schunck (H and S), which was chosen due to its fast convergence rate and highly parallelisable numerical scheme. Such characteristics were shown to be indispensable for the real-time guidance of beam therapies. In its original form, however, the algorithm is sensitive to local grey-level intensity variations not attributed to motion, such as those that occur, for example, in the proximity of pulsating arteries. In this study, an improved motion estimation strategy which reduces the impact of such effects is proposed. Displacements are estimated through the minimisation of a variation of the H and S functional for which the quadratic data fidelity term was replaced with a term based on the L1 norm, resulting in what we have called an L2-L1 functional. The proposed method was tested in the livers and kidneys of two healthy volunteers under free-breathing conditions, on a data set comprising 3000 images equally divided between the volunteers. The results show that, compared to the existing approaches, our method demonstrates a greater robustness to local grey-level intensity variations introduced by arterial pulsations. Additionally, the computational time required by our implementation makes it compatible with the workflow of real-time MR-guided beam interventions. To the best of our knowledge, this study is the first to analyse the behaviour of an L1-based optical flow functional in an applicative context: real-time MR-guidance of beam therapies in moving organs.
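For reference, the classic quadratic Horn and Schunck iteration is sketched below; the study replaces the quadratic data-fidelity term with an L1 term (the L2-L1 functional), which changes the per-pixel update but preserves the same highly parallelisable structure. This sketch is the textbook baseline, not the proposed method.

```python
import numpy as np

# Classic quadratic Horn-Schunck iteration (textbook baseline).
def horn_schunck(I1, I2, alpha=10.0, iters=200):
    Ix = np.gradient(I1, axis=1)       # spatial derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                       # temporal derivative
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    neighbour_avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(iters):
        u_bar, v_bar = neighbour_avg(u), neighbour_avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den     # pointwise update: fully parallel
        v = v_bar - Iy * num / den
    return u, v
```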
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod.
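The first ingredient, mean-shift CF estimation, can be sketched in one dimension: iterate a kernel-weighted local mean over the spectrum until it settles on the dominant peak. The kernel width, spectrum, and starting point below are illustrative assumptions, not RACE's actual parameters.

```python
import numpy as np

# 1-D mean-shift sketch for locating a dominant center frequency.
def mean_shift_peak(freqs, power, start, bandwidth=0.05, iters=100):
    f = start
    for _ in range(iters):
        w = np.exp(-0.5 * ((freqs - f) / bandwidth) ** 2) * power
        f_new = np.sum(w * freqs) / np.sum(w)      # weighted local mean
        if abs(f_new - f) < 1e-9:                  # converged to the mode
            break
        f = f_new
    return f

freqs = np.linspace(0, 1, 512)
power = np.exp(-0.5 * ((freqs - 0.23) / 0.02) ** 2) + 0.05  # peak near 0.23
print(mean_shift_peak(freqs, power, start=0.3))
```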
Computing Cooling Flows in Turbines
NASA Technical Reports Server (NTRS)
Gauntner, J.
1986-01-01
Algorithm developed for calculating both quantity of compressor bleed flow required to cool turbine and resulting decrease in efficiency due to cooling air injected into gas stream. Program intended for use with axial-flow, air-breathing, jet-propulsion engines with variety of airfoil-cooling configurations. Algorithm results compared extremely well with figures given by major engine manufacturers for given bulk-metal temperatures and cooling configurations. Program written in FORTRAN IV for batch execution.
Kawaguchi, A; Linde, L M; Imachi, T; Mizuno, H; Akutsu, H
1983-12-01
To estimate the left atrial volume (LAV) and pulmonary blood flow in patients with congenital heart disease (CHD), we employed two-dimensional echocardiography (TDE). The LAV was measured in dimensions other than those obtained in conventional M-mode echocardiography (M-mode echo). Mathematical and geometrical models for LAV calculation using the standard long-axis, short-axis and apical four-chamber planes were devised and found to be reliable in a preliminary study using porcine heart preparations, although length (10%), area (20%) and volume (38%) were significantly and consistently underestimated with echocardiography. These models were then applied and correlated with angiocardiograms (ACG) in 25 consecutive patients with suspected CHD. In terms of the estimation of the absolute LAV, accuracy seemed commensurate with the number of dimensions measured. The correlation between data obtained by TDE and ACG varied with changing hemodynamics such as cardiac cycle, absolute LAV and the presence or absence of volume load. The left atrium was found to become spherical and progressively underestimated with TDE at ventricular end-systole, at larger LAV and with increased volume load. Since this tendency became less pronounced when measuring additional dimensions, reliable estimation of the absolute LAV and volume load was possible when 2 or 3 dimensions were measured. Among the calculation models depending on 2- or 3-dimensional measurements, there was only a small difference in terms of accuracy and predictability, although the algorithm used varied from one model to another. This suggests that accurate cross-sectional area measurement is critically important for volume estimation, rather than any particular algorithm involved. Cross-sectional area measurement by TDE integrated into a three-dimensional equivalent allowed a reliable estimate of the LAV or volume load in a variety of hemodynamic situations where M-mode echo was not reliable.
NASA Astrophysics Data System (ADS)
Tinoco, R. O.; Goldstein, E. B.; Coco, G.
2016-12-01
We use a machine learning approach to seek accurate, physically sound predictors of two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use published data from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single-element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical but "hybrid" models, coupling results from machine learning methodologies into physics-based models.
Pace, Danielle F.; Aylward, Stephen R.; Niethammer, Marc
2014-01-01
We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall. PMID:23899632
Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas
2017-04-01
Turbulence in stable planetary boundary layers, often encountered at high latitudes, influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both the 2-D and 3-D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence under a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.
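A compact example of the algebraic-method family discussed above is a Kaczmarz-type ART loop with a discrepancy-principle stopping rule, which halts the sweeps before semi-convergence turns into noise amplification. The system, noise level, and stopping constant below are invented for illustration.

```python
import numpy as np

# Kaczmarz-type ART sketch with a discrepancy-principle stopping rule.
def art(A, b, noise_level, tau=1.02, max_sweeps=100):
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A ** 2, axis=1)
    for sweep in range(max_sweeps):
        for i in range(A.shape[0]):
            # project x onto the hyperplane of equation i
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
        # stop before noise is amplified (semi-convergence)
        if np.linalg.norm(A @ x - b) <= tau * noise_level:
            break
    return x, sweep

rng = np.random.default_rng(1)
A = rng.random((80, 40)); x_true = rng.random(40)
noise = 0.01 * rng.standard_normal(80)
b = A @ x_true + noise
x_rec, sweeps = art(A, b, np.linalg.norm(noise))
print(sweeps, np.linalg.norm(x_rec - x_true))
```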