Sample records for velocity estimation algorithms

  1. Ultrasound Algorithm Derivation for Soil Moisture Content Estimation

    NASA Technical Reports Server (NTRS)

    Belisle, W.R.; Metzl, R.; Choi, J.; Aggarwal, M. D.; Coleman, T.

    1997-01-01

    Soil moisture content can be estimated by evaluating the velocity at which sound waves travel through a known volume of soil material. This research involved the development of three soil algorithms relating the moisture content to the velocity at which sound waves moved through dry and moist media. Pressure and shear wave propagation equations were used in conjunction with soil property descriptions to derive algorithms appropriate for describing the effects of moisture content variation on the velocity of sound waves in soils with and without complete soil pore water volumes. An elementary algorithm was used to estimate soil moisture contents ranging from 0.08 g/g to 0.5 g/g from sound wave velocities ranging from 526 m/s to 664 m/s. Secondary algorithms were also used to estimate soil moisture content from sound wave velocities through soils with pores that were filled predominantly with air or water.
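
    As a purely illustrative sketch, the elementary algorithm's reported calibration range can be approximated by linear interpolation between the published endpoints. The actual derived relations, and whether moisture increases or decreases monotonically with velocity over this range, are given only in the paper itself; the monotonic increasing form below is an assumption:

```python
def moisture_from_velocity(v, v_lo=526.0, v_hi=664.0, w_lo=0.08, w_hi=0.50):
    """Map sound-wave velocity (m/s) to gravimetric moisture content (g/g)
    by linear interpolation across the reported calibration range.

    The linear, increasing form is an illustrative assumption; the paper
    derives separate algorithms for air-filled and water-filled pore regimes.
    """
    if not (v_lo <= v <= v_hi):
        raise ValueError("velocity outside calibrated range")
    frac = (v - v_lo) / (v_hi - v_lo)
    return w_lo + frac * (w_hi - w_lo)
```

    A linear map is only a placeholder for the paper's pressure- and shear-wave-based derivation.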

  2. Thermal particle image velocity estimation of fire plume flow

    Treesearch

    Xiangyang Zhou; Lulu Sun; Shankar Mahalingam; David R. Weise

    2003-01-01

    For the purpose of studying wildfire spread in living vegetation such as chaparral in California, a thermal particle image velocity (TPIV) algorithm for nonintrusively measuring flame gas velocities through thermal infrared (IR) imagery was developed. By tracing thermal particles in successive digital IR images, the TPIV algorithm can estimate the velocity field in a...
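
    The core of any particle-image approach of this kind is matching a small block between successive frames. The paper's TPIV algorithm traces thermal particles in IR imagery and is not reproduced here; a minimal, hypothetical sum-of-absolute-differences (SAD) block matcher in the same spirit might look like:

```python
def match_block(prev, curr, top, left, size, search):
    """Find the displacement of a size-x-size block anchored at (top, left)
    in `prev` within +/-`search` pixels in `curr`, minimizing the sum of
    absolute differences (SAD).  Frames are lists of lists of intensities."""
    best_disp, best_sad = None, float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = top + dy, left + dx
            if ty < 0 or tx < 0 or ty + size > len(curr) or tx + size > len(curr[0]):
                continue  # candidate block falls outside the frame
            sad = sum(abs(prev[top + r][left + c] - curr[ty + r][tx + c])
                      for r in range(size) for c in range(size))
            if sad < best_sad:
                best_disp, best_sad = (dy, dx), sad
    return best_disp  # pixels per frame; velocity = displacement / frame interval
```

    Dividing the returned displacement by the inter-frame interval (and multiplying by the ground sample distance) gives a velocity estimate per probing block.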

  3. Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles.

    PubMed

    Nam, Kanghyun

    2015-11-11

    This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle's cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data.
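
    The recursive least square step described above can be sketched for a single unknown parameter. The regressor phi would come from the paper's tire-force model, which is not reproduced here, so this is only a generic scalar RLS with exponential forgetting:

```python
class ScalarRLS:
    """Recursive least squares for one unknown parameter theta in
    y = phi * theta + noise, with exponential forgetting factor lam."""

    def __init__(self, theta0=0.0, p0=1000.0, lam=0.98):
        self.theta, self.p, self.lam = theta0, p0, lam

    def update(self, phi, y):
        k = self.p * phi / (self.lam + phi * self.p * phi)  # gain
        self.theta += k * (y - phi * self.theta)            # innovation correction
        self.p = (self.p - k * phi * self.p) / self.lam     # covariance update
        return self.theta
```

    In the paper's setting theta would be the lateral vehicle velocity and (phi, y) would be built from measured lateral tire forces; here they are generic.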

  4. Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles

    PubMed Central

    Nam, Kanghyun

    2015-01-01

    This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle’s cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data. PMID:26569246

  5. Global velocity constrained cloud motion prediction for short-term solar forecasting

    NASA Astrophysics Data System (ADS)

    Chen, Yanjun; Li, Wei; Zhang, Chongyang; Hu, Chuanping

    2016-09-01

    Cloud motion is the primary reason for short-term solar power output fluctuation. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared to the most used Particle Image Velocity (PIV) algorithm, which assumes the homogeneity of motion vectors, the proposed method can capture the accurate motion vector for each cloud block, including both the motional tendency and morphological changes. Specifically, global velocity derived from PIV is first calculated, and then fine-grained cloud motion estimation can be achieved by global velocity based cloud block researching and multi-scale cloud block matching. Experimental results show that the proposed global velocity constrained cloud motion prediction achieves comparable performance to the existing PIV and filtered PIV algorithms, especially in a short prediction horizon.

  6. Walking Distance Estimation Using Walking Canes with Inertial Sensors

    PubMed Central

    Suh, Young Soo

    2018-01-01

    A walking distance estimation algorithm for cane users is proposed using an inertial sensor unit attached to various positions on the cane. A standard inertial navigation algorithm using an indirect Kalman filter was applied to update the velocity and position of the cane during movement. For quadripod canes, a standard zero-velocity measurement-updating method is proposed. For standard canes, a velocity-updating method based on an inverted pendulum model is proposed. The proposed algorithms were verified by three walking experiments with two different types of canes and different positions of the sensor module. PMID:29342971
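
    The zero-velocity update can be illustrated with a deliberately simplified 1-D sketch: integrate acceleration and reset velocity whenever the cane appears stationary. The paper's implementation uses an indirect Kalman filter rather than the hard reset shown here, and the stillness test below (small net acceleration, gravity already removed) is an assumption:

```python
def integrate_with_zupt(accel, dt, still_thresh=0.05):
    """Integrate 1-D gravity-compensated acceleration (m/s^2) to velocity,
    applying a zero-velocity update (ZUPT) whenever |accel| falls below
    `still_thresh`, as when a cane tip is planted during stance."""
    v, out = 0.0, []
    for a in accel:
        v += a * dt
        if abs(a) < still_thresh:
            v = 0.0  # stance phase: cane assumed stationary, drift reset
        out.append(v)
    return out
```

    The hard reset bounds integration drift between stance phases; a Kalman formulation instead feeds the zero-velocity pseudo-measurement through a gain, which also corrects position and sensor-bias states.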

  7. Validating precision estimates in horizontal wind measurements from a Doppler lidar

    DOE PAGES

    Newsom, Rob K.; Brewer, W. Alan; Wilczak, James M.; ...

    2017-03-30

    Results from a recent field campaign are used to assess the accuracy of wind speed and direction precision estimates produced by a Doppler lidar wind retrieval algorithm. The algorithm, which is based on the traditional velocity-azimuth-display (VAD) technique, estimates the wind speed and direction measurement precision using standard error propagation techniques, assuming the input data (i.e., radial velocities) to be contaminated by random, zero-mean errors. For this study, the lidar was configured to execute an 8-beam plan-position-indicator (PPI) scan once every 12 min during the 6-week deployment period. Several wind retrieval trials were conducted using different schemes for estimating the precision in the radial velocity measurements. Here, the resulting wind speed and direction precision estimates were compared to differences in wind speed and direction between the VAD algorithm and sonic anemometer measurements taken on a nearby 300 m tower.
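
    The VAD retrieval underlying the algorithm fits a first Fourier harmonic to radial velocity as a function of azimuth. A minimal sketch for a single PPI scan with uniformly spaced azimuths (the error propagation that is the paper's actual focus is omitted):

```python
import math

def vad_fit(azimuths_deg, radial_velocities, elevation_deg):
    """Estimate horizontal wind (u eastward, v northward) from radial
    velocities on one conical VAD/PPI scan with uniformly spaced azimuths,
    via the first Fourier harmonic of v_r(azimuth)."""
    n = len(azimuths_deg)
    ce = math.cos(math.radians(elevation_deg))
    a1 = 2.0 / n * sum(vr * math.sin(math.radians(az))
                       for az, vr in zip(azimuths_deg, radial_velocities))
    b1 = 2.0 / n * sum(vr * math.cos(math.radians(az))
                       for az, vr in zip(azimuths_deg, radial_velocities))
    u, v = a1 / ce, b1 / ce                     # unproject the elevation angle
    return u, v, math.hypot(u, v)               # components and wind speed
```

    With uniform azimuth sampling the sine/cosine terms are orthogonal, so the harmonic coefficients fall out of simple sums; the vertical-wind contribution appears only in the constant term and drops out of the horizontal estimate.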

  8. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units.

    PubMed

    Lee, Sang Cheol; Hong, Sung Kyung

    2016-12-11

    This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed for using velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high-accuracy optic gyro, which was employed as core attitude equipment in the helicopter.
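
    The disturbance-removal idea can be sketched as subtracting the centripetal term omega x v (gyro rates crossed with airspeed, both in body frame) from the accelerometer output, so that the remainder approximates the gravity direction from which roll and pitch follow. Sign conventions and the paper's full filter are not reproduced here:

```python
def gravity_estimate(accel, gyro, airspeed_body):
    """Subtract the centripetal term (omega x v) from a body-frame
    specific-force measurement.  For slowly varying airspeed the result
    approximates the (signed) gravity vector, from which roll and pitch
    can be derived.  All inputs are 3-tuples in the body frame."""
    wx, wy, wz = gyro
    vx, vy, vz = airspeed_body
    centripetal = (wy * vz - wz * vy,       # cross product omega x v
                   wz * vx - wx * vz,
                   wx * vy - wy * vx)
    return tuple(a - c for a, c in zip(accel, centripetal))
```

    This mirrors the velocity-aiding idea only; the paper additionally handles sensor noise and the remaining acceleration terms within its estimation filter.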

  9. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units

    PubMed Central

    Lee, Sang Cheol; Hong, Sung Kyung

    2016-01-01

    This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high- performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed for using velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high accuracy optic gyro, which was employed as core attitude equipment in the helicopter. PMID:27973429

  10. Egomotion estimation with optic flow and air velocity sensors.

    PubMed

    Rutkowski, Adam J; Miller, Mikel M; Quinn, Roger D; Willis, Mark A

    2011-06-01

    We develop a method that allows a flyer to estimate its own motion (egomotion), the wind velocity, ground slope, and flight height using only inputs from onboard optic flow and air velocity sensors. Our artificial algorithm demonstrates how it could be possible for flying insects to determine their absolute egomotion using their available sensors, namely their eyes and wind sensitive hairs and antennae. Although many behaviors can be performed by only knowing the direction of travel, behavioral experiments indicate that odor tracking insects are able to estimate the wind direction and control their absolute egomotion (i.e., groundspeed). The egomotion estimation method that we have developed, which we call the opto-aeronautic algorithm, is tested in a variety of wind and ground slope conditions using a video recorded flight of a moth tracking a pheromone plume. Over all test cases that we examined, the algorithm achieved a mean absolute error in height of 7% or less. Furthermore, our algorithm is suitable for the navigation of aerial vehicles in environments where signals from the Global Positioning System are unavailable.

  11. Rayleigh-wave dispersive energy imaging and mode separating by high-resolution linear Radon transform

    USGS Publications Warehouse

    Luo, Y.; Xu, Y.; Liu, Q.; Xia, J.

    2008-01-01

    In recent years, multichannel analysis of surface waves (MASW) has been increasingly used for obtaining vertical shear-wave velocity profiles within near-surface materials. MASW uses a multichannel recording approach to capture the time-variant, full-seismic wavefield where dispersive surface waves can be used to estimate near-surface S-wave velocity. The technique consists of (1) acquisition of broadband, high-frequency ground roll using a multichannel recording system; (2) efficient and accurate algorithms that allow the extraction and analysis of 1D Rayleigh-wave dispersion curves; (3) stable and efficient inversion algorithms for estimating S-wave velocity profiles; and (4) construction of the 2D S-wave velocity field map.

  12. Vehicle longitudinal velocity estimation during the braking process using unknown input Kalman filter

    NASA Astrophysics Data System (ADS)

    Moaveni, Bijan; Khosravi Roqaye Abad, Mahdi; Nasiri, Sayyad

    2015-10-01

    In this paper, vehicle longitudinal velocity during the braking process is estimated by measuring the wheel speeds. Here, a new algorithm based on the unknown input Kalman filter is developed to estimate the vehicle longitudinal velocity with a minimum mean square error and without using the value of braking torque in the estimation procedure. The stability and convergence of the filter are analysed and proved. Effectiveness of the method is shown by designing a real experiment and comparing the estimation result with the actual longitudinal velocity computed from a three-axis accelerometer output.
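
    A heavily simplified stand-in can illustrate the filter structure: a 1-D Kalman filter smoothing wheel-speed-derived velocity, with the unknown braking input absorbed into the process noise rather than handled by the paper's unknown-input formulation, and with wheel slip left unmodeled:

```python
def kf_velocity(wheel_speed_meas, dt, q=1.0, r=0.04):
    """1-D random-walk Kalman filter over velocity measurements derived
    from wheel speed (m/s).  Braking deceleration is not modeled
    explicitly; it is absorbed into process noise q.  The paper's
    unknown-input filter treats it rigorously instead."""
    v, p, out = wheel_speed_meas[0], 1.0, []
    for z in wheel_speed_meas:
        p += q * dt                  # predict: variance grows (random walk)
        k = p / (p + r)              # Kalman gain
        v += k * (z - v)             # measurement update
        p *= (1.0 - k)               # posterior variance
        out.append(v)
    return out
```

    Raising q makes the filter track braking transients faster at the cost of passing more measurement noise; the unknown-input approach avoids this tuning trade-off by estimating the input jointly.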

  13. Estimation of River Bathymetry from ATI-SAR Data

    NASA Astrophysics Data System (ADS)

    Almeida, T. G.; Walker, D. T.; Farquharson, G.

    2013-12-01

    A framework for estimation of river bathymetry from surface velocity observation data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs) including bottom friction. We start with a cost function defined by the error between observed and estimated surface velocities, and introduce the SWEs as a constraint on the velocity field. The constrained minimization problem is converted to an unconstrained minimization through the use of Lagrange multipliers, and an adjoint SWE model is developed. The adjoint model solution is used to calculate the gradient of the cost function with respect to river bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best-fit to the observational data. In applying the algorithm, the 2D depth-averaged flow is computed assuming a known, constant discharge rate and a known, uniform bottom-friction coefficient; a correlation relating surface velocity and depth-averaged velocity is also used. Observation data was collected using a dual beam squinted along-track-interferometric, synthetic-aperture radar (ATI-SAR) system, which provides two independent components of the surface velocity, oriented roughly 30 degrees fore and aft of broadside, offering high-resolution bank-to-bank velocity vector coverage of the river. Data and bathymetry estimation results are presented for two rivers, the Snohomish River near Everett, WA and the upper Sacramento River, north of Colusa, CA. The algorithm results are compared to available measured bathymetry data, with favorable results. General trends show that the water-depth estimates are most accurate in shallow regions, and performance is sensitive to the accuracy of the specified discharge rate and bottom friction coefficient.
The results also indicate that, for a given reach, the estimated water depth reaches a maximum that is smaller than the true depth; this apparent maximum depth scales with the true river depth and discharge rate, so that the deepest parts of the river show the largest bathymetry errors.

  14. Spectral fitting inversion of low-frequency normal modes with self-coupling and cross-coupling of toroidal and spheroidal multiplets: numerical experiments to estimate the isotropic and anisotropic velocity structures

    NASA Astrophysics Data System (ADS)

    Oda, Hitoshi

    2016-06-01

    The aspherical structure of the Earth is described in terms of lateral heterogeneity and anisotropy of the P- and S-wave velocities, density heterogeneity, ellipticity and rotation of the Earth and undulation of the discontinuity interfaces of the seismic wave velocities. Its structure significantly influences the normal mode spectra of the Earth's free oscillation in the form of cross-coupling between toroidal and spheroidal multiplets and self-coupling between the singlets forming them. Thus, the aspherical structure must be conversely estimated from the free oscillation spectra influenced by the cross-coupling and self-coupling. In the present study, we improve a spectral fitting inversion algorithm which was developed in a previous study to retrieve the global structures of the isotropic and anisotropic velocities of the P and S waves from the free oscillation spectra. The main improvement is that the geographical distribution of the intensity of the S-wave azimuthal anisotropy is represented by a nonlinear combination of structure coefficients for the anisotropic velocity structure, whereas in the previous study it was expanded into a generalized spherical harmonic series. Consequently, the improved inversion algorithm reduces the number of unknown parameters that must be determined compared to the previous inversion algorithm and employs a one-step inversion method by which the structure coefficients for the isotropic and anisotropic velocities are directly estimated from the free oscillation spectra. The applicability of the improved inversion is examined by several numerical experiments using synthetic spectral data, which are produced by supposing a variety of isotropic and anisotropic velocity structures, earthquake source parameters and station-event pairs.
Furthermore, the robustness of the inversion algorithm is investigated with respect to the background noise contaminating the spectral data as well as truncating the series expansions by finite terms to represent the three-dimensional velocity structures. As a result, it is shown that the improved inversion can estimate not only the isotropic and anisotropic velocity structures but also the depth extent of the anisotropic regions in the Earth. In particular, the cross-coupling modes are essential to correctly estimate the isotropic and anisotropic velocity structures from the normal mode spectra. In addition, we argue that the effect of the seismic anisotropy is not negligible when estimating only the isotropic velocity structure from the spheroidal mode spectra.

  15. Using Collision Cones to Assess Biological Deconfliction Methods

    NASA Astrophysics Data System (ADS)

    Brace, Natalie

    For autonomous vehicles to navigate the world as efficiently and effectively as biological species, improvements are needed in terms of control strategies and estimation algorithms. Reactive collision avoidance is one specific area where biological systems outperform engineered algorithms. To better understand the discrepancy between engineered and biological systems, a collision avoidance algorithm was applied to frames of trajectory data from three biological species (Myotis velifer, Hirundo rustica, and Danio aequipinnatus). The algorithm uses information that can be sensed through visual cues (relative position and velocity) to define collision cones which are used to determine if agents are on a collision course and if so, to find a safe velocity that requires minimal deviation from the original velocity for each individual agent. Two- and three-dimensional versions of the algorithm with constant speed and maximum speed velocity requirements were considered. The obstacles provided to the algorithm were determined by the sensing range in terms of either metric or topological distance. The calculated velocities showed good correlation with observed velocities over the range of sensing parameters, indicating that the algorithm is a good basis for comparison and could potentially be improved with further study.
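
    The collision-cone test itself is compact: two agents are on a collision course if they are closing and their predicted distance at closest approach is below a combined safety radius. A 2-D sketch under constant-velocity assumptions:

```python
import math

def on_collision_course(p_a, v_a, p_b, v_b, r_combined):
    """2-D collision-cone test: agents A and B (position/velocity tuples)
    are on a collision course if they are closing and their distance at
    closest approach is less than the combined safety radius."""
    rx, ry = p_b[0] - p_a[0], p_b[1] - p_a[1]   # relative position
    vx, vy = v_b[0] - v_a[0], v_b[1] - v_a[1]   # relative velocity
    speed = math.hypot(vx, vy)
    if speed == 0.0:
        return math.hypot(rx, ry) < r_combined  # already overlapping?
    closing = rx * vx + ry * vy < 0.0           # moving toward each other
    miss = abs(rx * vy - ry * vx) / speed       # distance at closest approach
    return closing and miss < r_combined
```

    A resolution step, as in the algorithm described above, would then search for the smallest velocity deviation that moves the relative velocity outside the cone of colliding directions.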

  16. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  17. Utilization of high-frequency Rayleigh waves in near-surface geophysics

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.; Ivanov, J.; Tian, G.; Chen, C.

    2004-01-01

    Shear-wave velocities can be derived from inverting the dispersive phase velocity of the surface wave. The multichannel analysis of surface waves (MASW) is one technique for inverting high-frequency Rayleigh waves. The process includes acquisition of high-frequency broad-band Rayleigh waves, efficient and accurate algorithms designed to extract Rayleigh-wave dispersion curves from Rayleigh waves, and stable and efficient inversion algorithms to obtain near-surface S-wave velocity profiles. MASW estimates S-wave velocity from multichannel vertical component data and consists of data acquisition, dispersion-curve picking, and inversion.

  18. Improved shear wave group velocity estimation method based on spatiotemporal peak and thresholding motion search

    PubMed Central

    Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.

    2017-01-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in a phantom. Moreover, in a cohort of 14 healthy subjects STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532
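
    The conventional TTP baseline that the proposed STP/STTH methods improve on can be sketched directly: take the time of peak motion at each lateral position and regress position against peak time to obtain a group speed:

```python
def ttp_group_velocity(positions, profiles, dt):
    """Conventional time-to-peak (TTP) shear wave group speed: find the
    time of maximum motion at each lateral position, then fit
    position = speed * time + offset by least squares through the peaks.
    Output units follow the inputs (position units per time unit)."""
    times = [profiles[i].index(max(profiles[i])) * dt
             for i in range(len(positions))]
    n = len(times)
    mt, mx = sum(times) / n, sum(positions) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(times, positions))
    den = sum((t - mt) ** 2 for t in times)
    return num / den   # slope of the position-vs-peak-time fit
```

    The STP/STTH refinements differ in how the peak set is chosen (a joint space-time search, or thresholding out low-amplitude points) before this same slope fit, which is what improves precision on noisy motion data.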

  19. Improved Shear Wave Group Velocity Estimation Method Based on Spatiotemporal Peak and Thresholding Motion Search.

    PubMed

    Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W

    2017-04-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.

  20. On protecting the planet against cosmic attack: Ultrafast real-time estimate of the asteroid's radial velocity

    NASA Astrophysics Data System (ADS)

    Zakharchenko, V. D.; Kovalenko, I. G.

    2014-05-01

    A new method for the line-of-sight velocity estimation of a high-speed near-Earth object (asteroid, meteorite) is suggested. The method is based on the use of a fractional, one-half-order derivative of the Doppler signal. The algorithm suggested is much simpler and more economical than the classical one, and it appears preferable for use in orbital weapon systems of threat response. Application of fractional differentiation to quick evaluation of the mean frequency location of the reflected Doppler signal is justified. The method allows an assessment of the mean frequency in the time domain without spectral analysis. An algorithm structure for the real-time estimation is presented. The velocity resolution estimates are made for typical asteroids in the X-band. It is shown that the wait time can be shortened by orders of magnitude compared with that of standard spectral processing.
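
    For contrast with the paper's fractional-derivative approach, the standard time-domain alternative that likewise avoids spectral analysis is the pulse-pair (lag-1 autocorrelation) mean-frequency estimator. It is shown here only as a baseline sketch, not as the proposed algorithm:

```python
import cmath, math

def pulse_pair_frequency(samples, dt):
    """Mean Doppler frequency (Hz) of complex baseband samples via the
    phase of the lag-1 autocorrelation (pulse-pair estimator): a
    time-domain estimate requiring no FFT.  Unambiguous only for
    |f| < 1 / (2 * dt)."""
    acc = sum(b * a.conjugate() for a, b in zip(samples, samples[1:]))
    return cmath.phase(acc) / (2.0 * math.pi * dt)
```

    The Doppler frequency converts to radial velocity through v = f * lambda / 2 for wavelength lambda, so in the X-band a frequency estimate maps directly to the asteroid's line-of-sight speed.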

  1. MER-DIMES: a planetary landing application of computer vision

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew; Matthies, Larry

    2005-01-01

    During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify image data to a level ground plane. Feature selection and tracking are employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.

  2. Improved Regional Seismic Event Locations Using 3-D Velocity Models

    DTIC Science & Technology

    1999-12-15

    regional velocity model to estimate event hypocenters. Travel times for the regional phases are calculated using a sophisticated eikonal finite...can greatly improve estimates of event locations. Our algorithm calculates travel times using a finite difference approximation of the eikonal ...such as IASP91 or J-B. 3-D velocity models require more sophisticated travel time modeling routines; thus, we use a 3-D eikonal equation solver

  3. C-plane Reconstructions from Sheaf Acquisition for Ultrasound Electrode Vibration Elastography.

    PubMed

    Ingle, Atul; Varghese, Tomy

    2014-09-03

    This paper presents a novel algorithm for reconstructing and visualizing ablated volumes using radiofrequency ultrasound echo data acquired with the electrode vibration elastography approach. The ablation needle is vibrated using an actuator to generate shear wave pulses that are tracked in the ultrasound image plane at different locations away from the needle. This data is used for reconstructing shear wave velocity maps for each imaging plane. A C-plane reconstruction algorithm is proposed which estimates shear wave velocity values on a collection of transverse planes that are perpendicular to the imaging planes. The algorithm utilizes shear wave velocity maps from different imaging planes that share a common axis of intersection. These C-planes can be used to generate a 3D visualization of the ablated region. Experimental validation of this approach was carried out using data from a tissue mimicking phantom. The shear wave velocity estimates were within 20% of those obtained from a clinical scanner, and a contrast of over 4 dB was obtained between the stiff and soft regions of the phantom.

  4. Evaluation of the MV (CAPON) Coherent Doppler Lidar Velocity Estimator

    NASA Technical Reports Server (NTRS)

    Lottman, B.; Frehlich, R.

    1997-01-01

    The performance of the CAPON velocity estimator for coherent Doppler lidar is determined for typical space-based and ground-based parameter regimes. Optimal input parameters for the algorithm were determined for each regime. For weak signals, performance is described by the standard deviation of the good estimates and the fraction of outliers. For strong signals, the fraction of outliers is zero. Numerical effort was also determined.

  5. Analysis of superimposed ultrasonic guided waves in long bones by the joint approximate diagonalization of eigen-matrices algorithm.

    PubMed

    Song, Xiaojun; Ta, Dean; Wang, Weiqi

    2011-10-01

    The parameters of ultrasonic guided waves (GWs) are very sensitive to mechanical and structural changes in long cortical bones. However, it is a challenge to obtain the group velocity and other parameters of GWs because of the presence of mixed multiple modes. This paper proposes a blind identification algorithm using the joint approximate diagonalization of eigen-matrices (JADE) and applies it to the separation of superimposed GWs in long bones. For the simulation case, the velocity of the single mode was calculated after separation. A strong agreement was obtained between the estimated velocity and the theoretical expectation. For the experiments in bovine long bones, by using the calculated velocity and a theoretical model, the cortical thickness (CTh) was obtained. For comparison with the JADE approach, an adaptive Gaussian chirplet time-frequency (ACGTF) method was also used to estimate the CTh. The results showed that the mean error of the CTh acquired by the JADE approach was 4.3%, which was smaller than that of the ACGTF method (13.6%). This suggested that the JADE algorithm may be used to separate the superimposed GWs and that the JADE algorithm could potentially be used to evaluate long bones.

  6. Moving target parameter estimation of SAR after two looks cancellation

    NASA Astrophysics Data System (ADS)

    Gan, Rongbing; Wang, Jianguo; Gao, Xiang

    2005-11-01

    Moving target detection for synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are formed from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained while stationary targets are removed, and a constant false alarm rate (CFAR) detector then detects the moving targets. The ground-range and cross-range velocities of a moving target can be obtained from its position shift between the two looks. We developed a method to estimate the cross-range shift caused by slant-range motion: the shift is estimated from the Doppler frequency center (DFC), which is itself estimated using the Wigner-Ville distribution (WVD). Because the range and cross-range positions before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that the proposed algorithms perform well and estimate the moving target parameters accurately.

  7. Methodology of automated ionosphere front velocity estimation for ground-based augmentation of GNSS

    NASA Astrophysics Data System (ADS)

    Bang, Eugene; Lee, Jiyun

    2013-11-01

    Ionospheric anomalies occurring during severe ionospheric storms can pose integrity threats to Global Navigation Satellite System (GNSS) Ground-Based Augmentation Systems (GBAS). Ionospheric anomaly threat models for each region of operation need to be developed to analyze the potential impact of these anomalies on GBAS users and to develop mitigation strategies. Along with the magnitude of ionospheric gradients, the speed of the ionosphere "fronts" in which these gradients are embedded is an important parameter for simulation-based GBAS integrity analysis. This paper presents a methodology for automated ionosphere front velocity estimation that will be used to analyze vast amounts of ionospheric data, build ionospheric anomaly threat models for different regions, and monitor ionospheric anomalies continuously going forward. The procedure automatically selects stations that show a similar trend of ionospheric delays, computes the orientation of detected fronts using a three-station-based trigonometric method, and estimates front speeds using a two-station-based method. It also includes fine-tuning methods that make the estimation robust against faulty measurements and modeling errors. The performance of the algorithm is demonstrated by comparing the results of automated speed estimation to those computed manually in previous work: all speed estimates from the automated algorithm fall within error bars of ± 30% of the manually computed speeds. In addition, the algorithm is used to populate the current threat space with newly generated threat points. A larger number of velocity estimates helps us to better understand the behavior of ionospheric gradients under geomagnetic storm conditions.
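    Under a plane-wave ("front") assumption, the multi-station speed-and-orientation estimate described above reduces to solving a small linear system for the front's slowness vector. A minimal illustration (the station geometry, timing and speed are invented numbers, not values from the paper):

```python
import numpy as np

def front_velocity(xy, t):
    """Fit a plane front to arrival times at three non-collinear stations:
    solve (r_i - r_0) . m = t_i - t_0 for the slowness vector m (s/m);
    the front speed is 1/|m| and its propagation direction is m/|m|."""
    A = xy[1:] - xy[0]
    b = t[1:] - t[0]
    m = np.linalg.solve(A, b)
    speed = 1.0 / np.linalg.norm(m)
    return speed, m * speed          # m * speed == m / |m|

xy = np.array([[0.0, 0.0], [50e3, 0.0], [0.0, 80e3]])   # station coords (m)
true_dir = np.array([0.6, 0.8])                          # unit propagation vector
t = xy @ true_dir / 120.0                                # front moving at 120 m/s
speed, direction = front_velocity(xy, t)
print(speed, direction)  # ≈ 120.0 and [0.6, 0.8]
```

    With real data the three arrival times come from the detected delay trend at each station, and many station triples are polled.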

  8. Adaptive Trajectory Tracking of Nonholonomic Mobile Robots Using Vision-Based Position and Velocity Estimation.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Jiang, Tianjiao; Wang, Kai; Fang, Mu

    2018-02-01

    Despite tremendous efforts over the years, trajectory tracking control (TC) of a nonholonomic mobile robot (NMR) without a global positioning system remains an open problem, chiefly because of the difficulty of localizing the robot using only its onboard sensors. In this paper, a newly designed adaptive trajectory TC method is proposed for an NMR without position, orientation, or velocity measurements. The controller is designed on the basis of a novel algorithm that estimates the position and velocity of the robot online from the visual feedback of an omnidirectional camera. It is theoretically proved that the proposed algorithm drives the TC errors asymptotically to zero. Real-world experiments are conducted on a wheeled NMR to validate the feasibility of the control system.

  9. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

    The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear-array image collected by a stereo vision system that combines a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm, which achieves an optimal estimate of the parameters by matching the stereo projection of the projectile with that of a same-size 3D model. The speed and angle of attack (AOA) can then be determined. Experiments were conducted to test the proposed method.

  10. An algorithm to estimate unsteady and quasi-steady pressure fields from velocity field measurements.

    PubMed

    Dabiri, John O; Bose, Sanjeeb; Gemmell, Brad J; Colin, Sean P; Costello, John H

    2014-02-01

    We describe and characterize a method for estimating the pressure field corresponding to velocity field measurements such as those obtained by using particle image velocimetry. The pressure gradient is estimated from a time series of velocity fields for unsteady calculations or from a single velocity field for quasi-steady calculations. The corresponding pressure field is determined based on median polling of several integration paths through the pressure gradient field in order to reduce the effect of measurement errors that accumulate along individual integration paths. Integration paths are restricted to the nodes of the measured velocity field, thereby eliminating the need for measurement interpolation during this step and significantly reducing the computational cost of the algorithm relative to previous approaches. The method is validated by using numerically simulated flow past a stationary, two-dimensional bluff body and a computational model of a three-dimensional, self-propelled anguilliform swimmer to study the effects of spatial and temporal resolution, domain size, signal-to-noise ratio and out-of-plane effects. Particle image velocimetry measurements of a freely swimming jellyfish medusa and a freely swimming lamprey are analyzed using the method to demonstrate the efficacy of the approach when applied to empirical data.
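    The path-integration idea can be illustrated with a toy numpy sketch: integrate a measured pressure-gradient field along grid-aligned paths and take the median across paths. Two representative paths are shown here, whereas the published method polls many; the synthetic field is invented for the check:

```python
import numpy as np

def cumtrapz(f, h, axis):
    """Cumulative trapezoidal integral of f along `axis` with spacing h."""
    g = np.moveaxis(f, axis, 0)
    out = np.zeros_like(g)
    out[1:] = np.cumsum((g[:-1] + g[1:]) / 2.0, axis=0) * h
    return np.moveaxis(out, axis, 0)

def pressure_from_gradient(dpdx, dpdy, dx, dy):
    """Median over grid-aligned integration paths (two shown for brevity).
    The pressure is anchored to 0 at node [0, 0]."""
    # Path A: along the top row in x, then down each column in y.
    pA = cumtrapz(dpdx[:1, :], dx, axis=1) + cumtrapz(dpdy, dy, axis=0)
    # Path B: down the first column in y, then along each row in x.
    pB = cumtrapz(dpdy[:, :1], dy, axis=0) + cumtrapz(dpdx, dx, axis=1)
    return np.median(np.stack([pA, pB]), axis=0)

# Synthetic check: p = x^2 + y^2, so grad p = (2x, 2y).
x = np.linspace(0.0, 1.0, 21)
y = np.linspace(0.0, 1.0, 21)
X, Y = np.meshgrid(x, y)
p = pressure_from_gradient(2 * X, 2 * Y, x[1] - x[0], y[1] - y[0])
print(np.max(np.abs(p - (X**2 + Y**2))))  # ~0: trapezoid rule is exact here
```

    In the full method the median over many distinct node paths is what suppresses the accumulated measurement error that any single path would carry.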

  11. Lesion contrast and detection using sonoelastographic shear velocity imaging: preliminary results

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Parker, Kevin J.

    2007-03-01

    This paper assesses lesion contrast and detection using sonoelastographic shear velocity imaging. Shear wave interference patterns, termed crawling waves, for a two-phase medium were simulated assuming plane wave conditions. Shear velocity estimates were computed using a spatial autocorrelation algorithm that operates in the direction of shear wave propagation for a given kernel size. Contrast was determined by analyzing the shear velocity transition between media. Experimental results were obtained using heterogeneous phantoms with spherical inclusions (5 or 10 mm in diameter) characterized by elevated shear velocities. Two vibration sources were applied to opposing phantom edges and scanned (orthogonal to shear wave propagation) with an ultrasound scanner equipped for sonoelastography. Demodulated data were saved and transferred to an external computer for processing shear velocity images. Simulation results demonstrate that the shear velocity transition between contrasting media is governed by both the estimator kernel size and the source vibration frequency. Experimental results from phantoms further indicate that decreasing the estimator kernel size produces a corresponding decrease in the shear velocity transition between background and inclusion material, albeit with an increase in estimator noise. Overall, the results demonstrate the ability to generate high-contrast shear velocity images using sonoelastographic techniques and to detect millimeter-sized lesions.

  12. A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data

    NASA Astrophysics Data System (ADS)

    Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.

    2016-09-01

    Knowledge of 3D, three component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.
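    The optimization step can be pictured in miniature: if the simulation supplies candidate velocity fields whose weights (e.g., forcing strengths) are uncertain, a least-squares fit to the measured 2D slice pins them down. The basis fields and weights below are synthetic stand-ins, not the paper's ACET physics:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two simulated velocity fields on the PIV plane whose weights (hypothetical
# forcing strengths) are uncertain, plus a noisy "measured" 2D slice.
sim1, sim2 = rng.random((40, 40)), rng.random((40, 40))
measured = 2.0 * sim1 + 0.5 * sim2 + 0.01 * rng.standard_normal((40, 40))

# Least-squares fit of the simulation weights to the measured plane.
A = np.column_stack([sim1.ravel(), sim2.ravel()])
c, *_ = np.linalg.lstsq(A, measured.ravel(), rcond=None)
print(c)  # ≈ [2.0, 0.5]
```

    Once the weights are fixed by the in-plane data, the weighted 3D simulation supplies the out-of-plane component that planar PIV cannot measure.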

  13. Prediction of S-wave velocity using complete ensemble empirical mode decomposition and neural networks

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2017-04-01

    One of the key elements in hydrocarbon reservoir characterization is the S-wave velocity (Vs). Since traditional estimation methods often fail to accurately predict this physical parameter, a new approach that accounts for its non-stationary and non-linear properties is needed. To this end, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multiple layer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from the P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data are decomposed into a high-frequency (HF) component, a low-frequency (LF) component and a trend component. Then, different combinations of these components are used as inputs to the MLP ANN algorithm for estimating the Vs log. Applications to well logs from different geological settings show that Vs values predicted by the MLP ANN with the HF, LF and trend components as inputs are more accurate than those obtained with traditional estimation methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.

  14. Application of multiple signal classification algorithm to frequency estimation in coherent dual-frequency lidar

    NASA Astrophysics Data System (ADS)

    Li, Ruixiao; Li, Kun; Zhao, Changming

    2018-01-01

    Coherent dual-frequency lidar (CDFL) is a recent development in lidar that uses a dual-frequency laser to measure range and velocity with high precision while dramatically reducing the influence of atmospheric interference. Based on the nature of CDFL signals, we propose applying the multiple signal classification (MUSIC) algorithm in place of the fast Fourier transform (FFT) to estimate the phase differences in dual-frequency lidar. In the presence of Gaussian white noise, simulation results show that the signal peaks are more evident with the MUSIC algorithm than with the FFT at low signal-to-noise ratio (SNR), which helps to improve the precision of range and velocity detection, especially for long-distance measurement systems.
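    The MUSIC-versus-FFT idea can be sketched with a minimal numpy implementation of the pseudospectrum (the model order, frequency grid and test signal are illustrative choices, not the paper's parameters):

```python
import numpy as np

def music_freq(x, p=1, m=20, grid=4096):
    """Estimate the frequencies (cycles/sample) of p real sinusoids in x via
    the MUSIC pseudospectrum, using an order-m sample correlation matrix."""
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m + 1)])    # snapshot matrix
    R = (X.conj().T @ X) / X.shape[0]                       # m x m correlation
    _, V = np.linalg.eigh(R)                                # ascending eigenvalues
    En = V[:, :m - 2 * p]                                   # noise subspace (2 dims per real sinusoid)
    f = np.linspace(0.0, 0.5, grid)
    a = np.exp(-2j * np.pi * np.outer(np.arange(m), f))     # steering vectors
    P = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)  # pseudospectrum
    return f[np.argsort(P)[-p:]]                            # p largest peaks

rng = np.random.default_rng(0)
n = np.arange(400)
x = np.cos(2 * np.pi * 0.12 * n) + 0.5 * rng.standard_normal(400)
print(music_freq(x))  # a single sharp peak close to 0.12
```

    The sharpness of the subspace peak, rather than the FFT's resolution limit, is what helps at low SNR.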

  15. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

    In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and the firing rate of the motor units (MUs), and can be used to evaluate the contractile properties of MUs and muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV, and that it is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image-processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals, and other such methods could be developed to help solve other tasks in EMG analysis, such as estimating the CV of individual MUs, localizing and tracking innervation zones, and studying MU recruitment strategies.
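    For context, the simplest delay-based view of CV estimation is: the propagation delay between two electrodes a known distance apart gives CV = distance/delay. A hedged numpy sketch with a synthetic MUAP (the sampling rate, spacing and cross-correlation delay method are illustrative, not the paper's image-processing algorithm):

```python
import numpy as np

def conduction_velocity(ch1, ch2, fs, d):
    """Estimate CV (m/s) from two S-EMG channels a distance d (m) apart,
    via the cross-correlation delay between them."""
    lags = np.arange(-len(ch1) + 1, len(ch1))
    xc = np.correlate(ch2, ch1, mode="full")
    delay = lags[np.argmax(xc)] / fs       # seconds; positive when ch2 lags ch1
    return d / delay

fs, d = 2048.0, 0.01                        # 2048 Hz sampling, 10 mm spacing
t = np.arange(0, 0.25, 1 / fs)
muap = np.exp(-((t - 0.05) / 0.002) ** 2)   # synthetic MUAP at channel 1
shift = int(round(fs * d / 4.0))            # true CV = 4 m/s -> 5-sample lag
print(conduction_velocity(muap, np.roll(muap, shift), fs, d))
# ≈ 4.1 m/s (the lag is quantised to whole samples)
```

    Multichannel MLE and the paper's image-processing approach refine exactly this quantity, using all channels at once.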

  16. Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data

    NASA Astrophysics Data System (ADS)

    Singh, Vishwajit

    2016-04-01

    This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Several approaches are employed to build an interactive algorithm that picks interval velocities for 1000-5000 continuous normal-moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking coherent reflections. The detailed steps of, and pitfalls in, picking interval velocities from seismic reflection time measurements are described for each approach. The key ingredients used at the velocity analysis stage are the semblance grid and a starting model of interval velocity. Basin-hopping optimization is employed to drive the misfit function toward local minima, and a SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of the root-mean-square velocity. Synthetic case studies address the performance of the velocity picker, generating models that fit the semblance peaks well. A similar linear relationship between average depth and reflection time for the synthetic and estimated models supports using the picked interval velocities as a starting model for full waveform inversion, to recover a more accurate velocity structure of the subsurface. The remaining challenges are (1) building an accurate starting model for projecting a more accurate velocity structure of the subsurface, and (2) reducing the computational cost of the algorithm by pre-calculating the semblance grid to make auto-picking more feasible.
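    The semblance grid at the heart of such pickers can be sketched as follows: for each trial NMO velocity, trace amplitudes are stacked along the corresponding hyperbolic moveout and scored by semblance. All geometry numbers below are illustrative:

```python
import numpy as np

def semblance_spectrum(gather, offsets, dt, t0, velocities):
    """Semblance versus trial NMO velocity at zero-offset time t0.
    gather: (nsamples, ntraces); offsets in metres; dt in seconds."""
    ns, ntr = gather.shape
    s = np.empty(len(velocities))
    for k, v in enumerate(velocities):
        t = np.sqrt(t0**2 + (offsets / v) ** 2)            # hyperbolic moveout
        idx = np.clip(np.round(t / dt).astype(int), 0, ns - 1)
        a = gather[idx, np.arange(ntr)]                    # amplitudes on the hyperbola
        s[k] = a.sum() ** 2 / (ntr * (a**2).sum() + 1e-12)
    return s

# Synthetic spike gather: one reflection at t0 = 0.5 s with v = 2000 m/s.
offsets = np.arange(10) * 100.0
ns, dt = 500, 0.004
gather = np.zeros((ns, offsets.size))
hit = np.round(np.sqrt(0.5**2 + (offsets / 2000.0) ** 2) / dt).astype(int)
gather[hit, np.arange(offsets.size)] = 1.0
vels = np.arange(1500.0, 2501.0, 50.0)
s = semblance_spectrum(gather, offsets, dt, 0.5, vels)
print(vels[np.argmax(s)])  # 2000.0
```

    An automated picker then searches this grid (here by brute force; the paper uses basin-hopping and a sliding-window scheme) for the velocity trend that maximizes semblance.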

  17. Maneuver Algorithm for Bearings-Only Target Tracking with Acceleration and Field of View Constraints

    NASA Astrophysics Data System (ADS)

    Roh, Heekun; Shim, Sang-Wook; Tahk, Min-Jea

    2018-05-01

    This paper proposes a maneuver algorithm for an agent performing target tracking with bearing-angle information only. The goal of the agent is to estimate the target position and velocity based only on the bearing-angle data. Methods for bearings-only target state estimation are outlined, and the nature of the bearings-only target tracking problem is then addressed. Based on the insight from the above-mentioned properties, a maneuver algorithm for the agent is suggested. The proposed algorithm is composed of a nonlinear hysteresis guidance law and an estimation accuracy assessment criterion based on the theory of the Cramér-Rao bound. The proposed guidance law generates a lateral acceleration command based on the current field-of-view angle, while the accuracy criterion supplies the expected estimation variance, which acts as a terminal condition for the proposed algorithm. The algorithm is verified with a two-dimensional simulation.

  18. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  19. Distributed finite-time containment control for double-integrator multiagent systems.

    PubMed

    Wang, Xiangyu; Li, Shihua; Shi, Peng

    2014-09-01

    In this paper, the distributed finite-time containment control problem for double-integrator multiagent systems with multiple leaders and external disturbances is discussed. In the presence of multiple dynamic leaders, by utilizing the homogeneous control technique, a distributed finite-time observer is developed for the followers to estimate the weighted average of the leaders' velocities at first. Then, based on the estimates and the generalized adding a power integrator approach, distributed finite-time containment control algorithms are designed to guarantee that the states of the followers converge to the dynamic convex hull spanned by those of the leaders in finite time. Moreover, as a special case of multiple dynamic leaders with zero velocities, the proposed containment control algorithms also work for the case of multiple stationary leaders without using the distributed observer. Simulations demonstrate the effectiveness of the proposed control algorithms.

  20. Control Software for a High-Performance Telerobot

    NASA Technical Reports Server (NTRS)

    Kline-Schoder, Robert J.; Finger, William

    2005-01-01

    A computer program for controlling a high-performance, force-reflecting telerobot has been developed. The goal in designing a telerobot-control system is to make the velocity of the slave match the master velocity, and the environmental force on the master match the force on the slave. Instability can arise from even small delays in propagation of signals between master and slave units. The present software, based on an impedance-shaping algorithm, ensures stability even in the presence of long delays. It implements a real-time algorithm that processes position and force measurements from the master and slave and represents the master/slave communication link as a transmission line. The algorithm also uses the history of the control force and the slave motion to estimate the impedance of the environment. The estimate of the impedance of the environment is used to shape the controlled slave impedance to match the transmission-line impedance. The estimate of the environmental impedance is used to match the master and transmission-line impedances and to estimate the slave/environment force in order to present that force immediately to the operator via the master unit.

  1. Road-Aided Ground Slowly Moving Target 2D Motion Estimation for Single-Channel Synthetic Aperture Radar.

    PubMed

    Wang, Zhirui; Xu, Jia; Huang, Zuzhen; Zhang, Xudong; Xia, Xiang-Gen; Long, Teng; Bao, Qian

    2016-03-16

    To detect and estimate ground slowly moving targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target's position on the road as well as its radial velocity can be determined according to the target's offset distance and traffic rules. Furthermore, the target's azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with the traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in a stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.

  2. Estimation of velocity fluctuation in internal combustion engine exhaust systems through beamforming techniques

    NASA Astrophysics Data System (ADS)

    Piñero, G.; Vergara, L.; Desantes, J. M.; Broatch, A.

    2000-11-01

    The knowledge of the particle velocity fluctuations associated with acoustic pressure oscillation in the exhaust system of internal combustion engines may represent a powerful aid in the design of such systems, from the point of view of both engine performance improvement and exhaust noise abatement. However, usual velocity measurement techniques, even if applicable, are not well suited to the aggressive environment existing in exhaust systems. In this paper, a method to obtain a suitable estimate of velocity fluctuations is proposed, which is based on the application of spatial filtering (beamforming) techniques to instantaneous pressure measurements. Making use of simulated pressure-time histories, several algorithms have been checked by comparison between the simulated and the estimated velocity fluctuations. Then, problems related to the experimental procedure and associated with the proposed methodology are addressed, making application to measurements made in a real exhaust system. The results indicate that, if proper care is taken when performing the measurements, the application of beamforming techniques gives a reasonable estimate of the velocity fluctuations.

  3. Blind test of methods for obtaining 2-D near-surface seismic velocity models from first-arrival traveltimes

    USGS Publications Warehouse

    Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.

    2013-01-01

    Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.

  4. Metaheuristic optimization approaches to predict shear-wave velocity from conventional well logs in sandstone and carbonate case studies

    NASA Astrophysics Data System (ADS)

    Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi

    2018-06-01

    Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP) is acquired using conventional acoustic logging tools in many drilled wells, but the shear-wave velocity (VS) is recorded using advanced logging tools in only a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming, so alternative methods are often used to estimate VS. Several empirical correlations that predict VS from well logging measurements and petrophysical data such as VP, porosity and density have been proposed, but these empirical relations can be used only in limited cases. The use of intelligent systems and optimization algorithms is an inexpensive, fast and efficient approach to predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS: teaching–learning-based optimization, imperialist competitive and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms for predicting VS from conventional well logs in two field data examples: a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the VS estimated by each of the metaheuristic approaches with the observed VS and with values predicted by the Greenberg–Castagna relations. The results indicate that, for both the sandstone and carbonate case studies, all three metaheuristic algorithms are more efficient and reliable than the empirical correlation for predicting VS. They also show that, in both case studies, the artificial bee colony algorithm performs slightly better in VS prediction than the two alternative approaches.
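    For reference, the kind of empirical baseline such metaheuristic models are compared against is exemplified by Castagna's mudrock line for water-saturated clastic rocks, a classical VP-VS relation (this is the simple single-lithology form, not the Greenberg–Castagna multi-mineral method used in the paper):

```python
import numpy as np

# Castagna's "mudrock line" (Castagna et al., 1985); velocities in km/s.
def vs_mudrock(vp_kms):
    return 0.8621 * vp_kms - 1.1724

vp = np.array([3.0, 4.0, 5.0])
print(vs_mudrock(vp))  # ≈ [1.41, 2.28, 3.14] km/s
```

    A linear rule like this ignores porosity, clay content and fluid effects, which is precisely the gap the data-driven metaheuristic models aim to close.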

  5. Description of a Normal-Force In-Situ Turbulence Algorithm for Airplanes

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2003-01-01

    A normal-force in-situ turbulence algorithm for potential use on commercial airliners is described. The algorithm can produce information that can be used to predict hazardous accelerations of airplanes or to aid meteorologists in forecasting weather patterns. The algorithm uses normal acceleration and other measures of the airplane state to approximate the vertical gust velocity. That is, the fundamental, yet simple, relationship between normal acceleration and the change in normal force coefficient is exploited to produce an estimate of the vertical gust velocity. This simple approach is robust and produces a time history of the vertical gust velocity that would be intuitively useful to pilots. With proper processing, the time history can be transformed into the eddy dissipation rate that would be useful to meteorologists. Flight data for a simplified research implementation of the algorithm are presented for a severe turbulence encounter of the NASA ARIES Boeing 757 research airplane. The results indicate that the algorithm has potential for producing accurate in-situ turbulence measurements. However, more extensive tests and analysis are needed with an operational implementation of the algorithm to make comparisons with other algorithms or methods.
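    The underlying relationship can be sketched as follows: an increment in normal load factor implies a change in normal-force coefficient, hence an effective angle-of-attack change and a vertical gust estimate. This is a simplified sketch of the normal-force idea; the aircraft numbers and lift-curve slope below are illustrative assumptions, not values from the NASA study:

```python
def vertical_gust(delta_n, V, W, S, rho, cn_alpha):
    """Vertical gust (m/s) from a normal load-factor increment delta_n (g units):
    delta_CN = delta_n*W/(qbar*S), delta_alpha = delta_CN/cn_alpha, wg = V*delta_alpha.
    W is weight in newtons, S wing area in m^2, V true airspeed in m/s."""
    qbar = 0.5 * rho * V ** 2                 # dynamic pressure (Pa)
    delta_cn = delta_n * W / (qbar * S)       # change in normal-force coefficient
    return V * delta_cn / cn_alpha

# Illustrative numbers, roughly a 757-class airplane in cruise (assumed):
print(vertical_gust(delta_n=0.3, V=230.0, W=883e3, S=185.0, rho=0.38, cn_alpha=5.0))
# ≈ 6.6 m/s equivalent vertical gust
```

    A time history of this estimate is the intuitive quantity for pilots; further processing converts it to an eddy dissipation rate for meteorological use.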

  6. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades image quality, giving these images a low signal-to-noise ratio. This makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches only a few candidate points to obtain the optimal motion vector; it is compared with the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA estimates the motion vectors more efficiently, with almost equal estimation quality, compared to the traditional IFSA method. PMID:25873987
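    The search that the firefly algorithm accelerates can be sketched via its exhaustive baseline: full-search block matching that minimizes the sum of absolute differences (SAD). The IFA replaces the double loop below with a few firefly-guided candidate displacements (the frame content and sizes are synthetic):

```python
import numpy as np

def block_match(prev, curr, top, left, bsize=8, srange=4):
    """Exhaustive (full-search) block matching: find the displacement of the
    block at (top, left) in `prev` that minimises SAD within `curr`."""
    block = prev[top:top + bsize, left:left + bsize].astype(float)
    best, best_sad = (0, 0), np.inf
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > curr.shape[0] or x + bsize > curr.shape[1]:
                continue                       # candidate falls outside the frame
            sad = np.abs(curr[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

rng = np.random.default_rng(1)
frame = rng.random((32, 32))
moved = np.roll(frame, (2, -1), axis=(0, 1))   # whole frame shifted by (2, -1)
print(block_match(frame, moved, 12, 12))       # (2, -1)
```

    Evaluating only a handful of well-placed candidates instead of all (2·srange+1)² displacements is where the reported efficiency gain comes from.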


  8. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    NASA Astrophysics Data System (ADS)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

    The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. 
Contemporary time-invariant hazard, however, may differ from the long-term rate, and is estimated from the geodetic velocity field after correction for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal magma chamber. Contemporary time-variable hazard can be estimated from the time-variable slip rate derived from the evolving GPS velocity field.
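The core MIDAS idea of a trend estimator insensitive to seasonality, outliers and steps can be sketched as a median of slopes formed from data pairs separated by about one year, so that annual signals cancel pair-by-pair. The published MIDAS estimator additionally corrects the median for skewness and trims outliers; the simplified version below is an illustrative assumption, not the published algorithm.

```python
import numpy as np

def midas_like_velocity(t, x, pair_dt=1.0, tol=0.01):
    """Robust velocity (slope) of a position time series: median of slopes
    over data pairs separated by approximately `pair_dt` years.  Pairing
    epochs one year apart suppresses seasonal signals; the median
    suppresses outliers and (most) step offsets."""
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    slopes = []
    for i in range(len(t)):
        # Find a partner epoch ~pair_dt later; skip if none is close enough.
        j = int(np.argmin(np.abs(t - (t[i] + pair_dt))))
        if j != i and abs(t[j] - t[i] - pair_dt) < tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    return np.median(slopes)
```

On a synthetic series with a 2 mm/yr trend plus an annual cycle and an outlier, the one-year pairing cancels the seasonal term exactly and the median ignores the outlier.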

  9. A double-gaussian, percentile-based method for estimating maximum blood flow velocity.

    PubMed

    Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D

    2013-11-01

    Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences, especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A Gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter then provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared with a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the Gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-Gaussian mixture model, allows the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
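A minimal sketch of the idea, assuming a plain two-component EM fit and a Gaussian quantile for the percentile (the paper's exact fitting procedure may differ): the higher-mean component is taken as the signal distribution, and its percentile gives one point of the envelope.

```python
import numpy as np
from statistics import NormalDist

def two_gaussian_em(v, n_iter=200):
    """Fit a two-component Gaussian mixture to velocity samples with EM.
    Returns (weights, means, stds)."""
    v = np.asarray(v, float)
    mu = np.percentile(v, [25, 75]).astype(float)   # crude initialization
    sd = np.array([v.std(), v.std()]) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample.
        pdf = (w / (sd * np.sqrt(2 * np.pi))
               * np.exp(-0.5 * ((v[:, None] - mu) / sd) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations.
        n = r.sum(axis=0)
        w = n / len(v)
        mu = (r * v[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (v[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-9
    return w, mu, sd

def signal_percentile(v, q=95.0):
    """Percentile of the signal (higher-mean) component, via a Gaussian quantile."""
    w, mu, sd = two_gaussian_em(v)
    k = int(np.argmax(mu))
    return NormalDist(float(mu[k]), float(sd[k])).inv_cdf(q / 100.0)
```

Applied at each time step of the spectrogram, tracking several percentiles (e.g., the 95th and 99th) of the signal component yields the uncertainty-aware envelope the abstract describes.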

  10. Comparative analysis of algorithms for lunar landing control

    NASA Astrophysics Data System (ADS)

    Zhukov, B. I.; Likhachev, V. N.; Sazonov, V. V.; Sikharulidze, Yu. G.; Tuchin, A. G.; Tuchin, D. A.; Fedotov, V. P.; Yaroshevskii, V. S.

    2015-11-01

    For the descent from the pericenter of a prelanding circumlunar orbit, a comparison of three algorithms for the control of lander motion is performed. These algorithms use various combinations of terminal and programmed control in a trajectory comprising three parts: main braking, precision braking, and descent at constant velocity. In a first approximation, autonomous navigational measurements are taken into account, and an estimate of the disturbances generated by movement of the fuel in the tanks is obtained. Estimates of the accuracy of landing placement, fuel consumption, and satisfaction of the conditions for a safe lunar landing are obtained.

  11. Velocity Estimate Following Air Data System Failure

    DTIC Science & Technology

    2008-03-01

    …algorithm design in terms of reference frames, equations of motion, and velocity triangles describing the vector relationship between airspeed, wind speed… The flight of an aircraft through the air mass can be described in specific coordinate systems [Nelson 1998]. To determine how…

  12. Comparing geophysical measurements to theoretical estimates for soil mixtures at low pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wildenschild, D; Berge, P A; Berryman, K G

    1999-01-15

    The authors obtained good estimates of measured velocities of sand-peat samples at low pressures by using a theoretical method, the self-consistent theory of Berryman (1980), using sand and porous peat to represent the microstructure of the mixture. They were unable to obtain useful estimates with several other theoretical approaches, because the properties of the quartz, air, and peat components of the samples vary over several orders of magnitude. Methods that are useful for consolidated rock cannot be applied directly to unconsolidated materials. Instead, careful consideration of microstructure is necessary to adapt the methods successfully. Future work includes comparison of the measured velocity values to additional theoretical estimates, investigation of Vp/Vs ratios and wave amplitudes, as well as modeling of dry and saturated sand-clay mixtures (e.g., Bonner et al., 1997, 1998). The results suggest that field data can be interpreted by comparing laboratory measurements of soil velocities to theoretical estimates of velocities in order to establish a systematic method for predicting velocities for a full range of sand-organic material mixtures at various pressures. Once the theoretical relationship is obtained, it can be used to estimate the soil composition at various depths from field measurements of seismic velocities. Additional refinement of the method for relating velocities to soil characteristics is useful for developing inversion algorithms.

  13. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects

    NASA Astrophysics Data System (ADS)

    Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo

    2017-01-01

    Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, the estimation of pose, motion, and inertia properties of an uncooperative object is a challenging task because of the lack of available a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude, and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is considered in the observation model. The relative simplicity of the proposed algorithm could make it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of the estimation errors for all the considered quantities and, in several simulations, improvements over similar works in the literature that address the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia-ratio estimator to provide a complete tool for mass-property recognition.

  14. Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images

    NASA Astrophysics Data System (ADS)

    Ely, G.; Malcolm, A. E.; Poliannikov, O. V.

    2017-12-01

    Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces as well as the uncertainty of the velocity model directly impact the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency-domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allow us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimation. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us to create both qualitative descriptions of seismic image uncertainty and error bounds on quantities of interest such as the dip angle of a subduction slab or the thickness of a stratigraphic layer.
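The Metropolis-Hastings step can be illustrated with a deliberately simplified forward model. Here a single-layer vertical two-way traveltime stands in for the field expansion solver; the forward model, priors, and step size are illustrative assumptions only.

```python
import numpy as np

def metropolis_velocity(t_obs, depth, sigma, n_samples=20000, step=50.0,
                        v0=2000.0, rng=0):
    """Random-walk Metropolis sampling of a one-parameter velocity posterior.
    Forward model (a toy stand-in for a wave-equation solver): two-way
    vertical traveltime t = 2*depth/v.  Gaussian likelihood, flat prior."""
    rng = np.random.default_rng(rng)

    def log_like(v):
        if v <= 0:
            return -np.inf
        t_pred = 2.0 * depth / v
        return -0.5 * np.sum(((t_obs - t_pred) / sigma) ** 2)

    v, ll = v0, log_like(v0)
    samples = []
    for _ in range(n_samples):
        v_prop = v + step * rng.standard_normal()
        ll_prop = log_like(v_prop)
        if np.log(rng.random()) < ll_prop - ll:  # Metropolis accept/reject
            v, ll = v_prop, ll_prop
        samples.append(v)
    return np.array(samples)
```

The resulting samples approximate the velocity posterior; in the paper's setting each retained velocity model would then be migrated to build an ensemble of images, from which error bounds on image-derived quantities follow.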

  15. Coherent lidar design and performance verification

    NASA Technical Reports Server (NTRS)

    Frehlich, Rod

    1993-01-01

    The verification of LAWS beam alignment in space can be achieved by a measurement of heterodyne efficiency using the surface return. The crucial element is a direct detection signal that can be identified for each surface return. This should be satisfied for LAWS but will not be satisfied for descoped LAWS. The performance of algorithms for velocity estimation can be described with two basic parameters: the number of coherently detected photo-electrons per estimate and the number of independent signal samples per estimate. The average error of spectral-domain velocity estimation algorithms is bounded by a new periodogram Cramer-Rao Bound (CRB). Comparison of the periodogram CRB with the exact CRB indicates that a factor-of-two improvement in velocity accuracy is possible using non-spectral-domain estimators. This improvement has been demonstrated with a maximum-likelihood estimator. The comparison of velocity estimation algorithms for 2 and 10 micron coherent lidar was performed by assuming that all the system design parameters are fixed and that the signal statistics are dominated by a 1 m/s rms wind fluctuation over the range gate. The beam alignment requirements at 2 microns are much more severe than for a 10 micron lidar. The effects of the random backscattered field on estimating the alignment error are a major problem for space-based lidar operation, especially if the heterodyne efficiency cannot be estimated. For LAWS, the biggest science payoff would result from a short transmitted pulse, on the order of 0.5 microseconds instead of 3 microseconds. The numerical errors in simulating laser propagation in the atmosphere have been determined as a joint project with the University of California, San Diego. Useful scaling laws were obtained for Kolmogorov atmospheric refractive turbulence and for atmospheric refractive turbulence characterized by an inner scale. This permits verification of the simulation procedure, which is essential for evaluating the effects of refractive turbulence on coherent Doppler lidar systems. The analysis of 2 micron Doppler lidar data from Coherent Technologies, Inc. (CTI) has demonstrated many of the advantages of Doppler lidar measurements of boundary layer winds. The effects of wind shear and wind turbulence over the pulse volume are probably the dominant source of the reduced performance. The effects of wind shear and wind turbulence on the statistical description of Doppler lidar data have been derived and calculated.
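A spectral-domain (periodogram-style) velocity estimator of the kind discussed can be sketched as follows: locate the periodogram peak of the complex heterodyne (I/Q) signal and map the Doppler frequency to radial velocity via v = lambda*f/2. The signal model and parameters are illustrative, not the LAWS processing chain.

```python
import numpy as np

def periodogram_velocity(iq, fs, wavelength):
    """Spectral-domain velocity estimate for coherent Doppler lidar:
    find the periodogram peak of the complex heterodyne signal and
    convert Doppler frequency to radial velocity (v = wavelength * f / 2)."""
    n = len(iq)
    spec = np.abs(np.fft.fft(iq)) ** 2 / n        # periodogram
    freqs = np.fft.fftfreq(n, d=1.0 / fs)         # Hz, signed
    f_peak = freqs[spec.argmax()]
    return wavelength * f_peak / 2.0
```

Its accuracy is limited by the FFT bin spacing fs/n, which is one reason the abstract notes that non-spectral-domain (e.g., maximum-likelihood) estimators can approach the tighter exact CRB.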

  16. The impact of drought on ozone dry deposition over eastern Texas

    NASA Astrophysics Data System (ADS)

    Huang, Ling; McDonald-Buller, Elena C.; McGaughey, Gary; Kimura, Yosuke; Allen, David T.

    2016-02-01

    Dry deposition represents a critical pathway through which ground-level ozone is removed from the atmosphere. Understanding the effects of drought on ozone dry deposition is essential for air quality modeling and management in regions of the world with recurring droughts. This work applied the widely used Zhang dry deposition algorithm to examine seasonal and interannual changes in estimated ozone dry deposition velocities and component resistances/conductances over eastern Texas during years with drought (2006 and 2011) as well as a year with slightly cooler temperatures and above average rainfall (2007). Simulated area-averaged daytime ozone dry deposition velocities ranged between 0.26 and 0.47 cm/s. Seasonal patterns reflected the combined seasonal variations in non-stomatal and stomatal deposition pathways. Daytime ozone dry deposition velocities during the growing season were consistently larger during 2007 compared to 2006 and 2011. These differences were associated with differences in stomatal conductances and were most pronounced in forested areas. Reductions in stomatal conductances under drought conditions were highly sensitive to increases in vapor pressure deficit and warmer temperatures in Zhang's algorithm. Reductions in daytime ozone deposition velocities and deposition mass during drought years were associated with estimates of higher surface ozone concentrations.

  17. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

    In a simultaneous-source survey, no limitation is placed on the shot scheduling of nearby sources, so a large gain in acquisition efficiency is obtained, but the recorded seismic data are contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contain multiple dips, e.g., multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose to use a robust dip estimation algorithm that is based on a velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; a fairly accurate slope estimate can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis of both synthetic and field data examples.
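The velocity-slope conversion mentioned above follows from the NMO hyperbola t(x) = sqrt(t0^2 + x^2/v^2), whose local slope is p = dt/dx = x/(v^2 t). A minimal sketch (the exact form used in the paper is not reproduced here):

```python
import numpy as np

def nmo_slope(x, t0, v):
    """Local event slope p = dt/dx of an NMO hyperbola
    t(x) = sqrt(t0**2 + (x/v)**2), given offset x, zero-offset time t0,
    and NMO velocity v.  This converts picked NMO velocities into the
    slope fields needed by dip-dependent transforms, avoiding noisy
    direct (PWD-style) slope estimation."""
    t = np.sqrt(t0 ** 2 + (x / v) ** 2)
    return x / (v ** 2 * t)
```

Evaluating this per offset and per picked velocity yields one slope field per multiple order, which can then drive the multi-dip seislet frame.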

  18. Automated assessment of noninvasive filling pressure using color Doppler M-mode echocardiography

    NASA Technical Reports Server (NTRS)

    Greenberg, N. L.; Firstenberg, M. S.; Cardon, L. A.; Zuckerman, J.; Levine, B. D.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Assessment of left ventricular filling pressure usually requires invasive hemodynamic monitoring to follow the progression of disease or the response to therapy. Previous investigations have shown accurate estimation of wedge pressure using noninvasive Doppler information obtained from the ratio of the wave propagation slope from color M-mode (CMM) images and the peak early diastolic filling velocity from transmitral Doppler images. This study reports an automated algorithm that derives an estimate of wedge pressure based on the spatiotemporal velocity distribution available from digital CMM Doppler images of LV filling.
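Published regressions of this kind are linear in the dimensionless ratio of peak early transmitral velocity E to the color M-mode propagation velocity Vp. The sketch below uses coefficients of roughly 5.27 and 4.6 reported in earlier work by Garcia and colleagues; treat both the coefficients and the function name as illustrative assumptions, not this study's automated algorithm.

```python
def wedge_pressure_estimate(e_peak, vp, a=5.27, b=4.6):
    """Noninvasive pulmonary capillary wedge pressure estimate (mm Hg)
    from peak early transmitral velocity E (cm/s) and color M-mode flow
    propagation velocity Vp (cm/s), via a linear fit PCWP ~ a*(E/Vp) + b.
    Coefficients are from one published calibration and are illustrative."""
    return a * (e_peak / vp) + b
```

The automated method in the abstract goes further by extracting the required quantities directly from the spatiotemporal velocity distribution of the digital CMM images rather than from manual measurements.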

  19. Seismic velocity estimation from time migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, Maria Kourkina

    2007-01-01

    This is concerned with imaging and wave propagation in nonhomogeneous media, and includes a collection of computational techniques, such as level set methods with material transport, Dijkstra-like Hamilton-Jacobi solvers for first arrival Eikonal equations, and techniques for data smoothing. The theoretical components include aspects of seismic ray theory, and the results rely on careful comparison with experiment and incorporation as input into large production-style geophysical processing codes. Producing an accurate image of the Earth's interior is a challenging aspect of oil recovery and earthquake analysis. The ultimate computational goal, which is to accurately produce a detailed interior map of the Earth's makeup on the basis of external soundings and measurements, is currently out of reach for several reasons. First, although vast amounts of data have been obtained in some regions, this has not been done uniformly, and the data contain noise and artifacts. Simply sifting through the data is a massive computational job. Second, the fundamental inverse problem, namely to deduce the local sound speeds of the earth that give rise to measured reflected signals, is exceedingly difficult: shadow zones and complex structures can make for ill-posed problems, and require vast computational resources. Nonetheless, seismic imaging is a crucial part of the oil and gas industry. Typically, one makes assumptions about the earth's substructure (such as laterally homogeneous layering), and then uses this model as input to an iterative procedure to build perturbations that more closely satisfy the measured data. Such models often break down when the material substructure is significantly complex: not surprisingly, this is often where the most interesting geological features lie. Data often come in a particular, somewhat non-physical coordinate system, known as time migration coordinates. 
The construction of substructure models from these data is less and less reliable as the earth becomes horizontally nonconstant. Even mild lateral velocity variations can significantly distort subsurface structures on the time migrated images. Conversely, depth migration provides the potential for more accurate reconstructions, since it can handle significant lateral variations. However, this approach requires good input data, known as a 'velocity model'. We address the problem of estimating seismic velocities inside the earth, i.e., the problem of constructing a velocity model, which is necessary for obtaining seismic images in regular Cartesian coordinates. The main goals are to develop algorithms to convert time-migration velocities to true seismic velocities, and to convert time-migrated images to depth images in regular Cartesian coordinates. Our main results are three-fold. First, we establish a theoretical relation between the true seismic velocities and the 'time migration velocities' using the paraxial ray tracing. Second, we formulate an appropriate inverse problem describing the relation between time migration velocities and depth velocities, and show that this problem is mathematically ill-posed, i.e., unstable to small perturbations. Third, we develop numerical algorithms to solve regularized versions of these equations which can be used to recover smoothed velocity variations. Our algorithms consist of efficient time-to-depth conversion algorithms, based on Dijkstra-like Fast Marching Methods, as well as level set and ray tracing algorithms for transforming Dix velocities into seismic velocities. Our algorithms are applied to both two-dimensional and three-dimensional problems, and we test them on a collection of both synthetic examples and field data.
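The classical Dix step that converts time-domain (RMS) velocities to interval velocities can be sketched as follows. As the abstract notes, the full time-to-depth problem is ill-posed, so in practice this step is regularized; the unregularized formula below is only the textbook starting point.

```python
import numpy as np

def dix_interval_velocities(t, v_rms):
    """Classical Dix inversion: interval velocities from RMS velocities
    v_rms picked at increasing two-way times t (both 1-D arrays).
      v_int_k**2 = (t_k*v_k**2 - t_{k-1}*v_{k-1}**2) / (t_k - t_{k-1})
    The first layer's interval velocity equals its RMS velocity."""
    t = np.asarray(t, float)
    v = np.asarray(v_rms, float)
    num = np.diff(t * v ** 2)
    den = np.diff(t)
    v_int = np.sqrt(num / den)
    return np.concatenate([[v[0]], v_int])
```

Because the formula differentiates noisy picks, small perturbations in v_rms produce large swings in v_int, which is a concrete instance of the instability the abstract's regularized algorithms are designed to control.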

  20. An optical flow-based method for velocity field of fluid flow estimation

    NASA Astrophysics Data System (ADS)

    Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz

    2017-06-01

    The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to optical characterization of velocity fields are based on computing partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low, because exact estimation of spatial derivatives is very difficult in the presence of rapid intensity changes in the PIV images caused by particles having small diameters. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation but, more importantly, allows for the evaluation of the derivatives at intermediate points between pixels. Numerical analysis shows that the method can estimate a separate vector for each particle with a 5×5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements far above 1 px.
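A single-window Lucas-Kanade solve, using plain finite differences for the spatial derivatives (i.e., the baseline that the paper improves on with RBF interpolation of the images), can be sketched as:

```python
import numpy as np

def lucas_kanade(im1, im2, x, y, half=2):
    """Single-window Lucas-Kanade: solve the least-squares system
    [Ix Iy] d = -It over a (2*half+1)**2 window centered at (x, y).
    Spatial derivatives use central differences; the temporal derivative
    is the frame difference.  Returns d = (dx, dy) in pixels."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Ix = (np.roll(im1, -1, axis=1) - np.roll(im1, 1, axis=1)) / 2.0
    Iy = (np.roll(im1, -1, axis=0) - np.roll(im1, 1, axis=0)) / 2.0
    It = im2 - im1
    sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d
```

The paper's contribution can be read as replacing the `np.roll` finite differences above with analytic derivatives of a Gaussian-RBF interpolant, which remain well behaved near small, sharp particle images and can be evaluated between pixels.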

  1. Elbow joint angle and elbow movement velocity estimation using NARX-multiple layer perceptron neural network model with surface EMG time domain parameters.

    PubMed

    Raj, Retheep; Sivanandan, K S

    2017-01-01

    Estimation of elbow dynamics has been the object of numerous investigations. In this work, a solution is proposed for estimating elbow movement velocity and elbow joint angle from surface electromyography (SEMG) signals. The SEMG signals are acquired from the biceps brachii muscle. Two time-domain parameters, integrated EMG (IEMG) and zero crossing (ZC), are extracted from the SEMG signal. The relationships of the time-domain parameters IEMG and ZC with elbow angular displacement and elbow angular velocity during extension and flexion of the elbow are studied. A multiple-input multiple-output model is derived for identifying the kinematics of the elbow. A Nonlinear Auto Regressive with eXogenous inputs (NARX) structure based multiple layer perceptron neural network (MLPNN) model is proposed for the estimation of elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt based algorithm and estimates the elbow joint angle and elbow angular velocity with appreciable accuracy. The model is validated using the regression coefficient value (R). The average regression coefficient value (R) obtained is 0.9641 for elbow angular displacement prediction and 0.9347 for elbow angular velocity prediction. The NARX structure based MLPNN model can thus be used for the estimation of the angular displacement and angular velocity of the elbow with good accuracy.

  2. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

    NASA Astrophysics Data System (ADS)

    Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

    2018-05-01

    A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensors and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from the INS and GPS with different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is uniquely introduced for reducing filter states and simplifying propagation processes. Furthermore, assuming a small angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.

  3. On-line, adaptive state estimator for active noise control

    NASA Technical Reports Server (NTRS)

    Lim, Tae W.

    1994-01-01

    Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of an active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. An algorithm has been developed that estimates displacement and velocity responses at arbitrary locations on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency contents corresponding to a desired mode. The filtered signal is then used to train a neural network consisting of a linear neuron with three weights. The structure of the neural network is kept as simple as possible to maximize the sampling frequency. The weights obtained through neural network training are then used to construct the transfer function of a mode in the z-domain and to identify the modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at arbitrary locations on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed for mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach a sampling frequency range of about 10 kHz to 20 kHz, which must be maintained for a typical active noise control application.
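A linear neuron with three weights can be read as a second-order ARX model of one band-passed mode: y[n] = w1*y[n-1] + w2*y[n-2] + w3*u[n]. The minimal LMS training sketch below rests on that reading; the paper's filter-bank and exact training details are not reproduced here.

```python
import numpy as np

def train_mode_neuron(u, y, lr=1e-3, epochs=50):
    """Train a linear neuron modeling one structural mode as a
    second-order ARX system: y[n] ~ w1*y[n-1] + w2*y[n-2] + w3*u[n].
    Uses the LMS (delta) rule; w1 and w2 encode the mode's z-domain pole
    pair, from which modal frequency and damping can be identified."""
    w = np.zeros(3)
    for _ in range(epochs):
        for n in range(2, len(y)):
            x = np.array([y[n - 1], y[n - 2], u[n]])  # regressor
            e = y[n] - w @ x                          # prediction error
            w += lr * e * x                           # LMS update
    return w
```

Because the update is a single dot product and three multiply-adds per sample, such a neuron is cheap enough to be plausible at the multi-kHz sampling rates the abstract targets.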

  4. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors

    PubMed Central

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero velocity (ZV) detector algorithm to accurately identify stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model, based on the measurements of inertial sensors and kinesiology knowledge, to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases by 80% compared with the traditional method at high walking speeds. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude aspect. PMID:25831086
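The "traditional method" the paper improves on is essentially threshold-based stance detection. The sketch below conveys that baseline ZV idea; thresholds, window length, and the function name are illustrative assumptions, not the paper's Bayesian-network detector.

```python
import numpy as np

def zero_velocity_mask(acc, gyro, g=9.81, acc_tol=0.3, gyro_tol=0.5, win=5):
    """Threshold-based stance (zero-velocity) detector: flag samples where
    the accelerometer magnitude stays near gravity AND the angular rate
    stays small over a `win`-sample window.
    acc, gyro: (n, 3) arrays in m/s^2 and rad/s.  Returns a boolean mask."""
    a_mag = np.linalg.norm(acc, axis=1)
    w_mag = np.linalg.norm(gyro, axis=1)
    still = (np.abs(a_mag - g) < acc_tol) & (w_mag < gyro_tol)
    # Require the whole surrounding window to be still, to reject blips.
    counts = np.convolve(still.astype(int), np.ones(win), mode='same')
    return counts == win
```

Samples flagged True would trigger the EKF zero-velocity update; the paper's contribution is replacing these fixed thresholds with a BN over gait-segmented features, which suppresses false detections at high walking speeds.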

  5. Shear wave velocity imaging using transient electrode perturbation: phantom and ex vivo validation.

    PubMed

    DeWall, Ryan J; Varghese, Tomy; Madsen, Ernest L

    2011-03-01

    This paper presents a new shear wave velocity imaging technique to monitor radio-frequency and microwave ablation procedures, coined electrode vibration elastography. A piezoelectric actuator attached to an ablation needle is transiently vibrated to generate shear waves that are tracked at high frame rates. The time-to-peak algorithm is used to reconstruct the shear wave velocity and thereby the shear modulus variations. The feasibility of electrode vibration elastography is demonstrated using finite element models and ultrasound simulations, tissue-mimicking phantoms simulating fully (phantom 1) and partially (phantom 2) ablated regions, and an ex vivo bovine liver ablation experiment. In phantom experiments, good boundary delineation was observed. Shear wave velocity estimates were within 7% of mechanical measurements in phantom 1 and within 17% in phantom 2. Good boundary delineation was also demonstrated in the ex vivo experiment. The shear wave velocity estimates inside the ablated region were higher than mechanical testing estimates, but estimates in the untreated tissue were within 20% of mechanical measurements. A comparison of electrode vibration elastography and electrode displacement elastography showed the complementary information that they can provide. Electrode vibration elastography shows promise as an imaging modality that provides ablation boundary delineation and quantitative information during ablation procedures.
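The time-to-peak reconstruction can be sketched as follows: for each lateral position, find the arrival time of the peak displacement, then fit position against arrival time; the slope is the local shear wave speed (and the shear modulus follows from mu = rho * c^2). This is a simplified stand-in for the paper's implementation.

```python
import numpy as np

def time_to_peak_velocity(displacement, positions, fs):
    """Time-to-peak shear wave speed estimate.
    displacement: (n_time, n_positions) tracked axial displacement,
    positions: lateral positions in meters, fs: frame rate in Hz.
    Returns the fitted wave speed in m/s."""
    t_peak = displacement.argmax(axis=0) / fs        # arrival time per position
    slope, _ = np.polyfit(t_peak, positions, 1)      # m/s: d(position)/d(time)
    return slope
```

Fitting over small lateral neighborhoods instead of the full aperture yields a spatially resolved speed map, which is how boundary delineation between stiff ablated and softer untreated tissue emerges.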

  6. A modified beam-to-earth transformation to measure short-wavelength internal waves with an acoustic Doppler current profiler

    USGS Publications Warehouse

    Scotti, A.; Butman, B.; Beardsley, R.C.; Alexander, P.S.; Anderson, S.

    2005-01-01

    The algorithm used to transform velocity signals from beam coordinates to earth coordinates in an acoustic Doppler current profiler (ADCP) relies on the assumption that the currents are uniform over the horizontal distance separating the beams. This condition may be violated by (nonlinear) internal waves, which can have wavelengths as small as 100-200 m. In this case, the standard algorithm combines velocities measured at different phases of a wave and produces horizontal velocities that increasingly differ from true velocities with distance from the ADCP. Observations made in Massachusetts Bay show that currents measured with a bottom-mounted upward-looking ADCP during periods when short-wavelength internal waves are present differ significantly from currents measured by point current meters, except very close to the instrument. These periods are flagged with high error velocities by the standard ADCP algorithm. In this paper measurements from the four spatially diverging beams and the backscatter intensity signal are used to calculate the propagation direction and celerity of the internal waves. Once this information is known, a modified beam-to-earth transformation that combines appropriately lagged beam measurements can be used to obtain current estimates in earth coordinates that compare well with pointwise measurements. © 2005 American Meteorological Society.
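The standard four-beam (Janus) transform that embodies the horizontal-uniformity assumption can be sketched as follows; sign conventions vary by manufacturer, so the forward model and signs here are illustrative, not the modified lagged transform the paper develops.

```python
import numpy as np

def beam_to_earth(b1, b2, b3, b4, theta_deg=20.0):
    """Standard 4-beam Janus transform to instrument coordinates, valid
    only if flow is uniform across the beam spread:
      u = (b1 - b2) / (2 sin(theta)),  v = (b3 - b4) / (2 sin(theta)),
      w = (b1 + b2 + b3 + b4) / (4 cos(theta)),
      err ~ difference between the two independent vertical estimates.
    b1..b4 are along-beam velocities; theta is the beam angle from vertical."""
    th = np.radians(theta_deg)
    u = (b1 - b2) / (2 * np.sin(th))
    v = (b3 - b4) / (2 * np.sin(th))
    w = (b1 + b2 + b3 + b4) / (4 * np.cos(th))
    err = ((b1 + b2) - (b3 + b4)) / (4 * np.cos(th))
    return u, v, w, err
```

When a short internal wave puts opposing beams in different wave phases, the two vertical estimates disagree and `err` grows, which is exactly the high-error-velocity flagging the abstract describes; the paper's modification instead combines appropriately time-lagged beam samples.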

  7. Nonholonomic Closed-loop Velocity Control of a Soft-tethered Magnetic Capsule Endoscope.

    PubMed

    Taddese, Addisu Z; Slawinski, Piotr R; Obstein, Keith L; Valdastri, Pietro

    2016-10-01

    In this paper, we demonstrate velocity-level closed-loop control of a tethered magnetic capsule endoscope that is actuated via a serial manipulator with a permanent magnet at its end-effector. Closed-loop control (2 degrees-of-freedom in position and 2 in orientation) is made possible by a real-time magnetic localization algorithm that utilizes the actuating magnetic field and thus does not require additional hardware. Velocity control is implemented to create the smooth motion that is clinically necessary for colorectal cancer diagnostics. Our control algorithm generates a spline that passes through a set of input points that roughly defines the shape of the desired trajectory. The velocity controller acts in the direction tangential to the path, while a secondary position controller enforces a nonholonomic constraint on capsule motion. A soft nonholonomic constraint is naturally imposed by the lumen, while we enforce a strict constraint both for more accurate estimation of the tether disturbance and for hypothesized intuitiveness of a clinician's teleoperation. An integrating disturbance force estimation control term is introduced to predict the disturbance of the tether. This paper presents the theoretical formulation and experimental validation of our methodology. Results show the system's ability to achieve a repeatable velocity step response with low steady-state error, as well as the ability of the tethered capsule to maneuver around a bend.

  8. An assessment of a new settling velocity parameterisation for cohesive sediment transport modeling

    NASA Astrophysics Data System (ADS)

    Baugh, John V.; Manning, Andrew J.

    2007-07-01

    An important element within the Defra funded Estuary Process Research project "EstProc" was the implementation of the new or refined algorithms, produced under EstProc, into cohesive sediment numerical models. The implementation stage was important as any extension in the understanding of estuarine processes from EstProc was required to be suitable for dissemination into the wider research community, with a level of robustness for general applications demonstrated. This report describes work undertaken to implement the new Manning Floc Settling Velocity Model, developed during EstProc. All Manning component algorithms could be combined to provide estimates of mass settling flux. The algorithms are initially assessed in a number of 1-D scenarios, where the Manning model output is compared against both real observations and the output from alternative settling parameterisations. The Manning model is then implemented into a fully 3-D computational model (TELEMAC3D) of estuarine hydraulics and sediment transport of the Lower Thames estuary. The 3-D model results with the Manning algorithm included were compared to runs with a constant settling velocity of 0.5 mm s⁻¹ and a settling velocity based on a simple linear multiplier of concentration, and against the above-mentioned observations of suspended concentration. The 1-D case studies found that the Manning empirical settling model could reproduce 93% of the total mass settling flux observed over a spring tidal cycle. The floc model fit was even better within the turbidity maximum (TM) zone. A constant 0.5 mm s⁻¹ estimated only 15% of the TM mass flux, whereas the fixed 5 mm s⁻¹ settling rate over-predicted the TM mass flux by 47%. Neither settling velocity as a simple linear function of concentration nor van Leussen's method fared much better, estimating less than half the observed flux during the various tidal and sub-tidal cycle periods. When the Manning-settling model was applied to a layer with suspended concentrations approaching 6 g l⁻¹, it calculated 96% of the observed mass flux. The main conclusions of the implementation exercise were that it was feasible to implement a complex relationship between settling velocity and concentration in a 3-D computational model of estuarine hydraulics, without any significant increase in model run time or reduction in model stability. The use of the Manning algorithm greatly improved the reproduction of the observed distribution of suspended concentration in both the vertical and horizontal directions compared to the other simulations. During the 1-D assessments, the Manning-settling model demonstrated flexibility in adapting to a wide range of estuarine environmental conditions (i.e. shear stress and concentration), specifically for applied modelling purposes.

  9. Estimation of fast and slow wave properties in cancellous bone using Prony's method and curve fitting.

    PubMed

    Wear, Keith A

    2013-04-01

    The presence of two longitudinal waves in poroelastic media is predicted by Biot's theory and has been confirmed experimentally in through-transmission measurements in cancellous bone. Estimation of attenuation coefficients and velocities of the two waves is challenging when the two waves overlap in time. The modified least squares Prony's (MLSP) method in conjunction with curve-fitting (MLSP + CF) is tested using simulations based on published values for fast and slow wave attenuation coefficients and velocities in cancellous bone from several studies in bovine femur, human femur, and human calcaneus. The search algorithm is accelerated by exploiting correlations among search parameters. The performance of the algorithm is evaluated as a function of signal-to-noise ratio (SNR). For a typical experimental SNR (40 dB), the root-mean-square errors (RMSEs) for one example (human femur) with fast and slow waves separated by approximately half of a pulse duration were 1 m/s (slow wave velocity), 4 m/s (fast wave velocity), 0.4 dB/cm MHz (slow wave attenuation slope), and 1.7 dB/cm MHz (fast wave attenuation slope). The MLSP + CF method is fast (requiring less than 2 s at SNR = 40 dB on a consumer-grade notebook computer) and is flexible with respect to the functional form of the parametric model for the transmission coefficient. The MLSP + CF method provides sufficient accuracy and precision for many applications such that experimental error is a greater limiting factor than estimation error.
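
    A basic least-squares Prony fit (not the authors' full MLSP + curve-fitting pipeline) shows the core idea: a linear-prediction step yields the mode poles, whose magnitudes and angles encode attenuation and phase velocity. The two-mode noiseless signal below is an illustrative stand-in for overlapping fast and slow waves.

```python
import numpy as np

def ls_prony(x, p):
    """Basic least-squares Prony fit of p damped complex exponentials."""
    N = len(x)
    # Linear-prediction step: x[n] + a1 x[n-1] + ... + ap x[n-p] ~ 0.
    A = np.column_stack([x[p - m: N - m] for m in range(1, p + 1)])
    a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
    z = np.roots(np.concatenate(([1.0], a)))       # mode poles z_k
    # Amplitude step: least-squares fit of the Vandermonde model.
    V = np.array([[zk ** n for zk in z] for n in range(N)])
    amps = np.linalg.lstsq(V, x, rcond=None)[0]
    return z, amps

# Two overlapping damped modes (illustrative fast/slow stand-ins).
n = np.arange(50)
z1 = 0.95 * np.exp(1j * 0.30)
z2 = 0.80 * np.exp(1j * 1.10)
x = 1.0 * z1 ** n + 0.5 * z2 ** n
z, amps = ls_prony(x, 2)
```

    In the noiseless case the poles are recovered essentially exactly; the paper's contribution lies in making such estimates robust at realistic SNR and in refining them by curve fitting.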

  10. Bayesian seismic tomography by parallel interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas

    2014-05-01

    The velocity field estimated by first arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. In order to interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem aims at estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is strongly tied to the computational cost of the forward model. Although fast algorithms have recently been developed for computing first arrival traveltimes of seismic waves, a complete exploration of the posterior distribution of the velocity model is rarely feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of a classical single MCMC chain, we propose to let several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters, without increasing the global computational cost with respect to classical MCMC, and is therefore particularly suited for Bayesian inversion. The exchanges between the chains allow a precise sampling of the high probability zones of the model space while preventing the chains from getting stuck in a single probability maximum. This approach thus supplies a robust way to analyze the tomography imaging uncertainties. The interacting MCMC approach is illustrated on two synthetic examples of tomography of calibration shots such as those encountered in induced microseismic studies. In the second application, a wavelet-based model parameterization is presented that significantly reduces the dimension of the problem, thus making the algorithm efficient even for a complex velocity model.
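
    The interacting-chains idea can be sketched as replica exchange (parallel tempering) on a toy double-well "posterior" whose barrier traps a single T = 1 chain; the temperature ladder, proposal scales, and target are assumptions for the illustration, not the paper's traveltime problem.

```python
import numpy as np

def energy(x):
    # Toy bimodal negative log-posterior: modes at x = -1 and x = +1,
    # separated by a barrier a single T = 1 chain essentially never crosses.
    return (x * x - 1.0) ** 2 / 0.05

def parallel_tempering(n_sweeps=8000, temps=(1.0, 4.0, 16.0, 64.0), seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(len(temps), -1.0)          # all chains start in the left mode
    cold_trace = np.empty(n_sweeps)
    for s in range(n_sweeps):
        # Metropolis update within each chain (hotter chains step further).
        for i, T in enumerate(temps):
            prop = x[i] + 0.3 * np.sqrt(T) * rng.standard_normal()
            if rng.random() < np.exp(min(0.0, -(energy(prop) - energy(x[i])) / T)):
                x[i] = prop
        # Attempt one swap between a random adjacent temperature pair.
        i = rng.integers(len(temps) - 1)
        b_lo, b_hi = 1.0 / temps[i], 1.0 / temps[i + 1]
        log_acc = (b_lo - b_hi) * (energy(x[i]) - energy(x[i + 1]))
        if rng.random() < np.exp(min(0.0, log_acc)):
            x[i], x[i + 1] = x[i + 1], x[i]
        cold_trace[s] = x[0]
    return cold_trace

trace = parallel_tempering()
frac_right = np.mean(trace > 0.5)
frac_left = np.mean(trace < -0.5)
```

    The hot chains cross the barrier freely, and the swap moves hand those excursions down the ladder, so the T = 1 chain samples both modes; the global cost stays that of running the chains, as the abstract notes.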

  11. Ship heading and velocity analysis by wake detection in SAR images

    NASA Astrophysics Data System (ADS)

    Graziano, Maria Daniela; D'Errico, Marco; Rufino, Giancarlo

    2016-11-01

    With the aim of ship-route estimation, a wake detection method is developed and applied to COSMO/SkyMed and TerraSAR-X Stripmap SAR images over the Gulf of Naples, Italy. In order to mitigate the intrinsic limitations of threshold logic, the algorithm identifies wake features according to hydrodynamic theory. A post-detection validation phase is performed to classify the features as real wake structures by means of merit indexes defined in the intensity domain. After wake reconstruction, ship heading is evaluated on the basis of the turbulent wake direction, and ship velocity is estimated by two techniques: azimuth shift and Kelvin pattern wavelength. The method is tested on 34 ship wakes identified by visual inspection in both HH and VV images at different incidence angles. For all wakes, no missed detections are reported, and at least the turbulent wake and one narrow-V wake are correctly identified, with ship heading successfully estimated. The azimuth shift method is also applied to estimate velocity for the 10 ships whose routes have sufficient angular separation from the satellite ground track. In one case, ship velocity is successfully estimated with both methods, showing agreement within 14%.
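
    The azimuth-shift technique rests on the classic moving-target relation Δx = R·v_r/V: a target with slant-range velocity v_r appears displaced in azimuth by Δx. A minimal sketch follows, with orbital numbers that are illustrative assumptions rather than values from the paper, and ignoring the incidence-angle projection.

```python
import math

def radial_speed_from_azimuth_shift(shift_m, platform_speed, slant_range):
    """'Train off the track' relation: dx = R * v_r / V, inverted for v_r."""
    return shift_m * platform_speed / slant_range

def ship_speed(v_radial, route_track_angle_deg):
    """Project the radial component onto the ship route. The estimate
    degrades as the route becomes parallel to the satellite ground track
    (small angle), which is why only ships with sufficient angular
    separation are processed."""
    return v_radial / math.sin(math.radians(route_track_angle_deg))

# Illustrative X-band LEO numbers (assumed): 150 m azimuth displacement,
# 7600 m/s platform speed, 620 km slant range.
v_r = radial_speed_from_azimuth_shift(150.0, 7600.0, 620e3)   # ~1.84 m/s
```
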

  12. Using virtual environment for autonomous vehicle algorithm validation

    NASA Astrophysics Data System (ADS)

    Levinskis, Aleksandrs

    2018-04-01

    This paper describes the possible use of a modern game engine for validating and proving algorithm design concepts. A simple visual odometry algorithm is presented to illustrate the concept and to walk through all workflow stages. Some of the stages involve a Kalman filter that estimates the optical flow velocity as well as the position of a moving camera mounted on the vehicle body. In particular, the Unreal Engine 4 game engine is used to generate optical flow patterns and a ground truth path, and the Horn and Schunck method is applied for optical flow determination. The results show that this method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
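
    A minimal Horn-Schunck implementation, applied to a synthetic frame pair with a known 1-pixel shift, sketches the optical flow stage described above; the smoothing weight, iteration count, and test pattern are assumptions for this illustration.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Minimal Horn-Schunck optical flow with 4-neighbour smoothing."""
    Ix = 0.5 * (np.gradient(I1, axis=1) + np.gradient(I2, axis=1))
    Iy = 0.5 * (np.gradient(I1, axis=0) + np.gradient(I2, axis=0))
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        ubar = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1))
        vbar = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                       np.roll(v, 1, 1) + np.roll(v, -1, 1))
        num = Ix * ubar + Iy * vbar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = ubar - Ix * num / den
        v = vbar - Iy * num / den
    return u, v

# Synthetic frame pair: a smooth blob translated by exactly 1 pixel in x.
yy, xx = np.mgrid[0:64, 0:64]
I1 = 100.0 * np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2.0 * 8.0 ** 2))
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)

# Gradient-weighted estimates: flow is best constrained where the image
# gradient is large, so weight by the squared derivatives.
wx = np.gradient(I1, axis=1) ** 2
wy = np.gradient(I1, axis=0) ** 2
u_est = (u * wx).sum() / wx.sum()
v_est = (v * wy).sum() / wy.sum()
```

    The weighted x-flow comes out close to the true 1-pixel shift while the y-flow stays near zero; in the paper's workflow such flow fields would then feed the Kalman filter.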

  13. Heli/SITAN: A Terrain Referenced Navigation algorithm for helicopters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollowell, J.

    1990-01-01

    Heli/SITAN is a Terrain Referenced Navigation (TRN) algorithm that utilizes radar altimeter ground clearance measurements in combination with a conventional navigation system and a stored digital terrain elevation map to accurately estimate a helicopter's position. Multiple Model Adaptive Estimation (MMAE) techniques are employed using a bank of single-state Kalman filters to ensure that reliable position estimates are obtained even in the face of large initial position errors. A real-time implementation of the algorithm was tested aboard a US Army UH-1 helicopter equipped with a Singer-Kearfott Doppler Velocity Sensor (DVS) and a Litton LR-80 strapdown Attitude and Heading Reference System (AHRS). The median radial error of the position fixes provided in real-time by this implementation was less than 50 m for a variety of mission profiles. 6 refs., 7 figs.

  14. 3-D FDTD simulation of shear waves for evaluation of complex modulus imaging.

    PubMed

    Orescanin, Marko; Wang, Yue; Insana, Michael

    2011-02-01

    The Navier equation describing shear wave propagation in 3-D viscoelastic media is solved numerically with a finite differences time domain (FDTD) method. Solutions are formed in terms of transverse scatterer velocity waves and then verified via comparison to measured wave fields in heterogeneous hydrogel phantoms. The numerical algorithm is used as a tool to study the effects on complex shear modulus estimation from wave propagation in heterogeneous viscoelastic media. We used an algebraic Helmholtz inversion (AHI) technique to solve for the complex shear modulus from simulated and experimental velocity data acquired in 2-D and 3-D. Although 3-D velocity estimates are required in general, there are object geometries for which 2-D inversions provide accurate estimations of the material properties. Through simulations and experiments, we explored artifacts generated in elastic and dynamic-viscous shear modulus images related to the shear wavelength and average viscosity.

  15. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual–pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
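
    The circular-statistics idea can be sketched on a single ray: velocities are mapped to phases φ = πv/v_N, where values differing by the full unambiguous interval 2v_N collapse onto the same angle, so a neighbourhood reference can be formed without prior dealiasing. The window size, threshold, and test profile are assumptions for the sketch, not the authors' exact procedure.

```python
import numpy as np

def correct_outliers(v, v_ny, window=2, max_dist=0.4 * np.pi):
    """Replace dual-PRF-style outliers by the circular mean of neighbours."""
    phi = np.pi * v / v_ny
    out = v.copy()
    for i in range(window, len(v) - window):
        nbr = np.r_[phi[i - window:i], phi[i + 1:i + 1 + window]]
        ref = np.angle(np.mean(np.exp(1j * nbr)))            # circular mean
        dist = np.abs(np.angle(np.exp(1j * (phi[i] - ref))))  # circular distance
        if dist > max_dist:
            out[i] = v_ny * ref / np.pi
    return out

v_ny = 16.0                              # extended Nyquist velocity, m/s (assumed)
v_true = np.linspace(-10.0, 10.0, 50)    # smooth radial-velocity ray
v_meas = v_true.copy()
v_meas[20] += 12.0                       # injected dual-PRF-style outlier
v_corr = correct_outliers(v_meas, v_ny)
```

    Because aliased values map onto the same phase, the reference remains valid even inside aliased regions, which is the property that removes the need for a prior dealiasing pass.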

  16. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE PAGES

    Altube, Patricia; Bech, Joan; Argemí, Oriol; ...

    2017-07-18

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual–pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.

  17. Evolution of semilocal string networks. II. Velocity estimators

    NASA Astrophysics Data System (ADS)

    Lopez-Eiguren, A.; Urrestilla, J.; Achúcarro, A.; Avgoustidis, A.; Martins, C. J. A. P.

    2017-07-01

    We continue a comprehensive numerical study of semilocal string networks and their cosmological evolution. These can be thought of as hybrid networks comprised of (nontopological) string segments, whose core structure is similar to that of Abelian Higgs vortices, and whose ends have long-range interactions and behavior similar to that of global monopoles. Our study provides further evidence of a linear scaling regime, already reported in previous studies, for the typical length scale and velocity of the network. We introduce a new algorithm to identify the position of the segment cores. This allows us to determine the length and velocity of each individual segment and follow their evolution in time. We study the statistical distribution of segment lengths and velocities for radiation- and matter-dominated evolution in the regime where the strings are stable. Our segment detection algorithm gives higher length values than previous studies based on indirect detection methods. The statistical distribution shows no evidence of (anti)correlation between the speed and the length of the segments.

  18. State-Estimation Algorithm Based on Computer Vision

    NASA Technical Reports Server (NTRS)

    Bayard, David; Brugarolas, Paul

    2007-01-01

    An algorithm and software to implement the algorithm are being developed as a means to estimate the state (that is, the position and velocity) of an autonomous vehicle, relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.

  19. Measurement of fluid rotation, dilation, and displacement in particle image velocimetry using a Fourier–Mellin cross-correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giarra, Matthew N.; Charonko, John J.; Vlachos, Pavlos P.

    Traditional particle image velocimetry (PIV) uses discrete Cartesian cross correlations (CCs) to estimate the displacements of groups of tracer particles within small subregions of sequentially captured images. However, these CCs fail in regions with large velocity gradients or high rates of rotation. In this paper, we propose a new PIV correlation method based on the Fourier–Mellin transformation (FMT) that enables direct measurement of the rotation and dilation of particle image patterns. In previously unresolvable regions of large rotation, our algorithm significantly improves the velocity estimates compared to traditional correlations by aligning the rotated and stretched particle patterns prior to performing Cartesian correlations to estimate their displacements. Furthermore, our algorithm, which we term Fourier–Mellin correlation (FMC), reliably measures particle pattern displacement between pairs of interrogation regions with up to ±180° of angular misalignment, compared to 6–8° for traditional correlations, and dilation/compression factors of 0.5–2.0, compared to 0.9–1.1 for a single iteration of traditional correlations.

  20. Measurement of fluid rotation, dilation, and displacement in particle image velocimetry using a Fourier–Mellin cross-correlation

    DOE PAGES

    Giarra, Matthew N.; Charonko, John J.; Vlachos, Pavlos P.

    2015-02-05

    Traditional particle image velocimetry (PIV) uses discrete Cartesian cross correlations (CCs) to estimate the displacements of groups of tracer particles within small subregions of sequentially captured images. However, these CCs fail in regions with large velocity gradients or high rates of rotation. In this paper, we propose a new PIV correlation method based on the Fourier–Mellin transformation (FMT) that enables direct measurement of the rotation and dilation of particle image patterns. In previously unresolvable regions of large rotation, our algorithm significantly improves the velocity estimates compared to traditional correlations by aligning the rotated and stretched particle patterns prior to performing Cartesian correlations to estimate their displacements. Furthermore, our algorithm, which we term Fourier–Mellin correlation (FMC), reliably measures particle pattern displacement between pairs of interrogation regions with up to ±180° of angular misalignment, compared to 6–8° for traditional correlations, and dilation/compression factors of 0.5–2.0, compared to 0.9–1.1 for a single iteration of traditional correlations.

  1. Control algorithms for aerobraking in the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Ward, Donald T.; Shipley, Buford W., Jr.

    1991-01-01

    The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.

  2. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy.

    PubMed

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-09-18

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms.
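
    The two dominant behaviours the study describes can be reproduced in a single-axis simulation: a constant bias integrates into a linearly growing orientation error, while white noise produces a random-walk error growing as the square root of time. One axis cannot show the 3D-distribution effect reported in the paper; all numbers below are assumed.

```python
import numpy as np

dt, t_end = 0.01, 100.0
t = np.arange(0.0, t_end, dt)
omega_true = 0.5 * np.sin(2.0 * np.pi * 0.5 * t)   # walking-like oscillation

rng = np.random.default_rng(1)
bias = 0.01                                   # rad/s constant bias (assumed)
noise = 0.1 * rng.standard_normal(t.size)     # rad/s white noise (assumed)

# Numerical integration of the corrupted rate signals.
theta_true = np.cumsum(omega_true) * dt
err_bias = np.cumsum(omega_true + bias) * dt - theta_true    # = bias * t
err_noise = np.cumsum(omega_true + noise) * dt - theta_true  # angle random walk
```

    Note that the white-noise error is independent of the underlying angular velocity, matching the study's finding, whereas bias and scale-factor errors scale with the signal they corrupt.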

  3. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy

    PubMed Central

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-01-01

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms. PMID:26393606

  4. Shear Wave Velocity Imaging Using Transient Electrode Perturbation: Phantom and ex vivo Validation

    PubMed Central

    Varghese, Tomy; Madsen, Ernest L.

    2011-01-01

    This paper presents a new shear wave velocity imaging technique to monitor radio-frequency and microwave ablation procedures, coined electrode vibration elastography. A piezoelectric actuator attached to an ablation needle is transiently vibrated to generate shear waves that are tracked at high frame rates. The time-to-peak algorithm is used to reconstruct the shear wave velocity and thereby the shear modulus variations. The feasibility of electrode vibration elastography is demonstrated using finite element models and ultrasound simulations, tissue-mimicking phantoms simulating fully (phantom 1) and partially ablated (phantom 2) regions, and an ex vivo bovine liver ablation experiment. In phantom experiments, good boundary delineation was observed. Shear wave velocity estimates were within 7% of mechanical measurements in phantom 1 and within 17% in phantom 2. Good boundary delineation was also demonstrated in the ex vivo experiment. The shear wave velocity estimates inside the ablated region were higher than mechanical testing estimates, but estimates in the untreated tissue were within 20% of mechanical measurements. A comparison of electrode vibration elastography and electrode displacement elastography showed the complementary information that they can provide. Electrode vibration elastography shows promise as an imaging modality that provides ablation boundary delineation and quantitative information during ablation procedures. PMID:21075719
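
    The time-to-peak reconstruction can be sketched as follows: track the arrival time of the displacement peak at each lateral position and take the inverse slope of the arrival-time-versus-position fit as the shear wave speed. The frame rate, geometry, and wave speed are illustrative assumptions, not the paper's experimental values.

```python
import numpy as np

fs = 10e3                                # high-frame-rate tracking (assumed)
t = np.arange(0.0, 0.02, 1.0 / fs)
x = np.arange(0.0, 10.5e-3, 0.5e-3)      # lateral positions from the needle (m)
c_true = 2.5                             # m/s shear wave speed (assumed)

# Displacement traces: the same pulse arriving later at larger x.
disp = np.exp(-((t[None, :] - 2e-3 - x[:, None] / c_true) ** 2)
              / (2.0 * (0.5e-3) ** 2))

t_peak = t[np.argmax(disp, axis=1)]      # time-to-peak at each position
slope = np.polyfit(x, t_peak, 1)[0]      # d(t_peak)/dx, s/m
c_est = 1.0 / slope                      # shear wave speed
```

    The shear modulus then follows as μ = ρ c² for an elastic medium, which is how the velocity map is converted to the modulus variations the paper images.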

  5. GPS vertical axis performance enhancement for helicopter precision landing approach

    NASA Technical Reports Server (NTRS)

    Denaro, Robert P.; Beser, Jacques

    1986-01-01

    Several areas were investigated for improving vertical accuracy for a rotorcraft using the differential Global Positioning System (GPS) during a landing approach. Continuous deltaranging was studied, and the potential improvement from estimating acceleration was evaluated by comparing the performance of several filters on a constant-acceleration turn and a rough landing profile: a position-velocity (PV) filter, a position-velocity-constant acceleration (PVAC) filter, and a position-velocity-turning acceleration (PVAT) filter. In overall statistics, the PVAC filter was found to be most efficient, with the more complex PVAT performing equally well. Vertical performance was not significantly different among the filters. Satellite selection algorithms based on vertical errors only (vertical dilution of precision or VDOP) and even-weighted cross-track and vertical errors (XVDOP) were tested. The inclusion of an altimeter was studied by modifying the PVAC filter to include a baro bias estimate. Improved vertical accuracy during degraded DOP conditions resulted. Flight test results for raw differential results excluding filter effects indicated that the differential performance significantly improved overall navigation accuracy. A landing glidepath steering algorithm was devised which exploits the flexibility of GPS in determining precise relative position. A method for propagating the steering command over the GPS update interval was implemented.
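
    The VDOP and combined cross-track/vertical metrics used for satellite selection can be sketched from the standard geometry matrix; here north stands in for the cross-track axis, and the constellation is an assumed example, not flight-test geometry.

```python
import numpy as np

def dop(az_el_deg):
    """DOP terms from satellite azimuth/elevation in an east-north-up frame."""
    H = []
    for az, el in az_el_deg:
        a, e = np.radians(az), np.radians(el)
        los = np.array([np.cos(e) * np.sin(a),   # east
                        np.cos(e) * np.cos(a),   # north
                        np.sin(e)])              # up
        H.append(np.r_[-los, 1.0])               # unknowns: x, y, z, clock bias
    Hm = np.array(H)
    Q = np.linalg.inv(Hm.T @ Hm)
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    # Even-weighted "cross-track + vertical" metric in the spirit of XVDOP;
    # treating north as cross-track is an assumption of this sketch.
    xvdop = np.sqrt(Q[1, 1] + Q[2, 2])
    return hdop, vdop, xvdop

base = [(0, 20), (90, 30), (180, 40), (270, 50)]   # az, el in degrees
_, vdop4, _ = dop(base)
_, vdop5, _ = dop(base + [(0, 90)])                # add a high-elevation satellite
```

    Adding a high-elevation satellite shrinks the vertical term of the covariance geometry, which is the behaviour a VDOP-driven selection algorithm exploits.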

  6. Feasibility of pulse wave velocity estimation from low frame rate US sequences in vivo

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; Bruce, Matthew; Hippke, Michelle; Schwartz, Alan; O'Donnell, Matthew

    2017-03-01

    The pulse wave velocity (PWV) is considered one of the most important clinical parameters to evaluate CV risk, vascular adaptation, etc. There has been substantial work attempting to measure the PWV in peripheral vessels using ultrasound (US). This paper presents a fully automatic algorithm for PWV estimation from the human carotid using US sequences acquired with a Logic E9 scanner (modified for RF data capture) and a 9L probe. Our algorithm samples the pressure wave in time by tracking wall displacements over the sequence, and estimates the PWV by calculating the temporal shift between two sampled waves at two distinct locations. Several recent studies have utilized similar ideas along with speckle tracking tools and high frame rate (above 1 kHz) sequences to estimate the PWV. To explore PWV estimation in a more typical clinical setting, we used focused-beam scanning, which yields relatively low frame rates and small fields of view (e.g., 200 Hz for a 16.7 mm field of view). For our application, a 200 Hz frame rate is low. In particular, the sub-frame temporal accuracy required for PWV estimation between locations 16.7 mm apart ranges from 0.82 of a frame for 4 m/s to 0.33 for 10 m/s. When the distance is further reduced (to 0.28 mm between two beams), the sub-frame precision is in parts per thousand (ppt) of the frame (5 ppt for 10 m/s). As such, the contributions of our algorithm and this paper are: 1. Ability to work with a low frame rate (about 200 Hz) and a decreased lateral field of view. 2. Fully automatic segmentation of the wall intima (using raw RF images). 3. Collaborative speckle tracking of 2D axial and lateral carotid wall motion. 4. Outlier-robust PWV calculation from multiple votes using RANSAC. 5. Algorithm evaluation on volunteers of different ages and health conditions.
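
    The temporal-shift step can be sketched with a cross-correlation plus parabolic sub-sample peak interpolation, one common way to obtain the sub-frame accuracy discussed above (the paper's actual estimator may differ). The waveforms and the 5 m/s wave speed are synthetic assumptions.

```python
import numpy as np

def subframe_delay(w1, w2, fs):
    """Delay of w2 relative to w1 via cross-correlation with parabolic
    sub-sample refinement of the peak location."""
    a = w1 - w1.mean()
    b = w2 - w2.mean()
    c = np.correlate(b, a, mode="full")
    i = int(np.argmax(c))
    # Parabolic interpolation through the three samples around the peak.
    denom = c[i - 1] - 2.0 * c[i] + c[i + 1]
    frac = 0.5 * (c[i - 1] - c[i + 1]) / denom
    lag = (i - (len(a) - 1)) + frac
    return lag / fs

fs = 200.0                         # frame rate (Hz), as in the paper's setup
d = 16.7e-3                        # distance between the two wall sites (m)
pwv_true = 5.0                     # assumed wave speed (m/s)

t = np.arange(0.0, 1.0, 1.0 / fs)

def pulse(tt):
    return np.exp(-((tt - 0.3) ** 2) / (2.0 * 0.03 ** 2))

w1 = pulse(t)
w2 = pulse(t - d / pwv_true)       # same wave arriving ~0.67 frames later
pwv_est = d / subframe_delay(w1, w2, fs)
```

    The sub-frame refinement is what makes a 200 Hz sequence usable at all here: the true transit time is well under one frame interval, so the integer-lag peak alone would be far too coarse.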

  7. Estimating Thruster Impulses From IMU and Doppler Data

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E.; Kruizinga, Gerhard L.

    2009-01-01

    A computer program implements a thrust impulse measurement (TIM) filter, which processes data on changes in velocity and attitude of a spacecraft to estimate the small impulsive forces and torques exerted by the thrusters of the spacecraft reaction control system (RCS). The velocity-change data are obtained from line-of-sight velocity data derived from Doppler measurements made from the Earth. The attitude-change data are telemetered from an inertial measurement unit (IMU) aboard the spacecraft. The TIM filter estimates the three-axis thrust vector for each RCS thruster, thereby enabling reduction of cumulative navigation error attributable to inaccurate prediction of thrust vectors. The filter has been augmented with a simple mathematical model to compensate for large temperature fluctuations in the spacecraft thruster catalyst bed in order to estimate thrust more accurately at deadbanding cold-firing levels. Also, rigorous consider-covariance estimation is applied in the TIM to account for the expected uncertainty in the moment of inertia and the location of the center of gravity of the spacecraft. The TIM filter was built with, and depends upon, a sigma-point consider-filter algorithm implemented in a Python-language computer program.

  8. Lagrangian water mass tracing from pseudo-Argo, model-derived salinity, tracer and velocity data: An application to Antarctic Intermediate Water in the South Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Blanke, Bruno; Speich, Sabrina; Rusciano, Emanuela

    2015-01-01

    We use the tracer and velocity fields of a climatological ocean model to investigate the ability of Argo-like data to accurately estimate water mass movements and transformations, in the style of analyses commonly applied to the output of ocean general circulation models. To this end, we introduce an algorithm for the reconstruction of a fully non-divergent three-dimensional velocity field from the simple knowledge of the model vertical density profiles and 1000-m horizontal velocity components. The validation of the technique consists of comparing the resulting pathways for Antarctic Intermediate Water in the South Atlantic Ocean to equivalent reference results based on the full model information available for velocity and tracers. We show that the inclusion of a wind-induced Ekman pumping and of a well-thought-out expression for the vertical velocity at the level of the intermediate waters is essential for the reliable reproduction of quantitative Lagrangian analyses. Neglecting the seasonal variability of the velocity and tracer fields is not a significant source of errors, at least well below the permanent thermocline. These results give us confidence in the success of the adaptation of the algorithm to true gridded Argo data for investigating the dynamics of flows in the ocean interior.

  9. Predictability of the Lagrangian Motion in the Upper Ocean

    NASA Astrophysics Data System (ADS)

    Piterbarg, L. I.; Griffa, A.; Mariano, A. J.; Ozgokmen, T. M.; Ryan, E. H.

    2001-12-01

    The complex non-linear dynamics of the upper ocean leads to chaotic behavior of drifter trajectories in the ocean. Our study is focused on estimating the predictability limit for the position of an individual Lagrangian particle or a particle cluster based on the knowledge of mean currents and observations of nearby particles (predictors). The Lagrangian prediction problem, besides being a fundamental scientific problem, is also of great importance for practical applications such as search and rescue operations and for modeling the spread of fish larvae. A stochastic multi-particle model for the Lagrangian motion has been rigorously formulated and is a generalization of the well known "random flight" model for a single particle. Our model is mathematically consistent and includes a few easily interpreted parameters, such as the Lagrangian velocity decorrelation time scale, the turbulent velocity variance, and the velocity decorrelation radius, that can be estimated from data. The top Lyapunov exponent for an isotropic version of the model is explicitly expressed as a function of these parameters enabling us to approximate the predictability limit to first order. Lagrangian prediction errors for two new prediction algorithms are evaluated against simple algorithms and each other and are used to test the predictability limits of the stochastic model for isotropic turbulence. The first algorithm is based on a Kalman filter and uses the developed stochastic model. Its implementation for drifter clusters in both the Tropical Pacific and Adriatic Sea, showed good prediction skill over a period of 1-2 weeks. The prediction error is primarily a function of the data density, defined as the number of predictors within a velocity decorrelation spatial scale from the particle to be predicted. The second algorithm is model independent and is based on spatial regression considerations. 
Preliminary results, based on simulated as well as real data, indicate that it performs better than the Kalman-based algorithm in strong shear flows. An important component of our research is the optimal predictor location problem: where should floats be launched in order to minimize the Lagrangian prediction error? Preliminary Lagrangian sampling results for different flow scenarios will be presented.
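    The "random flight" model mentioned above treats the turbulent velocity fluctuation as an Ornstein-Uhlenbeck process. A minimal one-dimensional sketch, with parameter names assumed for illustration (T_L is the Lagrangian velocity decorrelation time, sigma2 the turbulent velocity variance), might look like:

```python
import numpy as np

def simulate_random_flight(n_steps, dt, T_L, sigma2, u_mean=0.0, seed=0):
    """Integrate dx = (u_mean + u') dt with the fluctuation following
    du' = -u'/T_L dt + sqrt(2 sigma2 / T_L) dW (Euler-Maruyama)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps + 1)
    u = 0.0  # turbulent velocity fluctuation
    for k in range(n_steps):
        u += -u / T_L * dt + np.sqrt(2.0 * sigma2 / T_L * dt) * rng.standard_normal()
        x[k + 1] = x[k] + (u_mean + u) * dt
    return x
```

Setting sigma2 to zero reduces the model to pure advection by the mean current, which is a convenient sanity check.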

  10. Development of a Closed-Loop Strap Down Attitude System for an Ultrahigh Altitude Flight Experiment

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Fife, Mike; Brashear, Logan

    1997-01-01

    A low-cost attitude system has been developed for an ultrahigh altitude flight experiment. The experiment uses a remotely piloted sailplane, with the wings modified for flight at altitudes greater than 100,000 ft. Mission requirements deem it necessary to measure the aircraft pitch and bank angles with accuracy better than 1.0 deg and heading with accuracy better than 5.0 deg. Vehicle cost restrictions and gross weight limits make installing a commercial inertial navigation system unfeasible. Instead, a low-cost attitude system was developed using strap down components. Monte Carlo analyses verified that two vector measurements, magnetic field and velocity, are required to completely stabilize the error equations. In the estimating algorithm, body-axis observations of the airspeed vector and the magnetic field are compared against the inertial velocity vector and a magnetic-field reference model. Residuals are fed back to stabilize integration of rate gyros. The effectiveness of the estimating algorithm was demonstrated using data from the NASA Dryden Flight Research Center Systems Research Aircraft (SRA) flight tests. The algorithm was applied with good results to pitch and bank angles of up to 10 deg. Effects of wind shears were evaluated and, for most cases, can be safely ignored.
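    The feedback structure described, open-loop gyro integration stabilized by residuals against an absolute reference, can be illustrated with a single-axis complementary filter. The gain k and all signal names below are illustrative, not the flight implementation:

```python
def complementary_heading(gyro_rates, mag_headings, dt, k=0.1, psi0=0.0):
    """Integrate a rate gyro and feed back the residual against an
    absolute heading reference (e.g. from a magnetic-field model)."""
    psi = psi0
    out = []
    for r, m in zip(gyro_rates, mag_headings):
        psi += r * dt          # open-loop gyro integration (drifts alone)
        psi += k * (m - psi)   # residual feedback bounds the drift
        out.append(psi)
    return out
```

With k = 0 a constant gyro bias integrates into unbounded heading drift; with k > 0 the error settles to a small bounded offset, which is the stabilization role the residuals play in the abstract.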

  11. A random-walk algorithm for modeling lithospheric density and the role of body forces in the evolution of the Midcontinent Rift

    USGS Publications Warehouse

    Levandowski, William Brower; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.

    2015-01-01

    We test this algorithm on the Proterozoic Midcontinent Rift (MCR), north-central U.S. The MCR provides a challenge because it hosts a gravity high overlying low shear-wave velocity crust in a generally flat region. Our initial density estimates are derived from a seismic velocity/crustal thickness model based on joint inversion of surface-wave dispersion and receiver functions. By adjusting these estimates to reproduce gravity and topography, we generate a lithospheric-scale model that reveals dense middle crust and eclogitized lowermost crust within the rift. Mantle lithospheric density beneath the MCR is not anomalous, consistent with geochemical evidence that lithospheric mantle was not the primary source of rift-related magmas and suggesting that extension occurred in response to far-field stress rather than a hot mantle plume. Similarly, the subsequent inversion of normal faults resulted from changing far-field stress that exploited not only warm, recently faulted crust but also a gravitational potential energy low in the MCR. The success of this density modeling algorithm in the face of such apparently contradictory geophysical properties suggests that it may be applicable to a variety of tectonic and geodynamic problems. 

  12. Impulse excitation scanning acoustic microscopy for local quantification of Rayleigh surface wave velocity using B-scan analysis

    NASA Astrophysics Data System (ADS)

    Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.

    2018-01-01

    A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on B-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the B-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the sample. A new behavior was observed with the B-scan analysis technique, in which the amplitude of the surface wave decayed dramatically for certain crystallographic orientations. The new technique was also compared with previous results and was found to be much more reliable and to have higher contrast than previously possible with impulse excitation.

  13. Precession feature extraction of ballistic missile warhead with high velocity

    NASA Astrophysics Data System (ADS)

    Sun, Huixia

    2018-04-01

    This paper establishes the precession model of ballistic missile warhead, and derives the formulas of micro-Doppler frequency induced by the target with precession. In order to obtain micro-Doppler feature of ballistic missile warhead with precession, micro-Doppler bandwidth estimation algorithm, which avoids velocity compensation, is presented based on high-resolution time-frequency transform. The results of computer simulations confirm the effectiveness of the proposed method even with low signal-to-noise ratio.

  14. A single frequency component-based re-estimated MUSIC algorithm for impact localization on complex composite structures

    NASA Astrophysics Data System (ADS)

    Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng

    2015-10-01

    The growing use of composite materials on aircraft structures has attracted much attention for impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit wide-band behavior, making it difficult to obtain the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircraft further accentuates this anisotropy, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Forty-one impacts across three typical categories are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impacts on complex composite structures with obviously improved accuracy.
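    MUSIC-based localization scans candidate directions against the noise subspace of the array covariance matrix. A minimal narrowband sketch for a uniform linear array (illustrative only; not the SFCBR-MUSIC variant, and with assumed geometry and names) is:

```python
import numpy as np

def music_spectrum(X, n_sources, scan_angles_deg, d_over_lambda=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    X: complex snapshot matrix (n_sensors, n_snapshots)."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    w, V = np.linalg.eigh(R)                 # eigenvalues ascending
    En = V[:, : X.shape[0] - n_sources]      # noise subspace
    m = np.arange(X.shape[0])
    spec = []
    for th in np.deg2rad(scan_angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * m * np.sin(th))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)
```

The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace, which is the "directional scanning" the abstract refers to.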

  15. Spacecraft alignment estimation. [for onboard sensors

    NASA Technical Reports Server (NTRS)

    Shuster, Malcolm D.; Bierman, Gerald J.

    1988-01-01

    A numerically well-behaved factorized methodology is developed for estimating spacecraft sensor alignments from prelaunch and inflight data without the need to compute the spacecraft attitude or angular velocity. Such a methodology permits the estimation of sensor alignments (or other biases) in a framework free of unknown dynamical variables. In actual mission implementation such an algorithm is usually better behaved than one that must compute sensor alignments simultaneously with the spacecraft attitude, for example by means of a Kalman filter. In particular, such a methodology is less sensitive to data dropouts of long duration, and the derived measurement used in the attitude-independent algorithm usually makes data checking and editing of outliers much simpler than would be the case in the filter.

  16. An alternative to FASTSIM for tangential solution of the wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Sichani, Matin Sh.; Enblom, Roger; Berg, Mats

    2016-06-01

    In most rail vehicle dynamics simulation packages, the tangential solution of the wheel-rail contact is obtained by means of Kalker's FASTSIM algorithm. While 5-25% error is expected for creep force estimation, the errors of the shear stress distribution, needed for wheel-rail damage analysis, may rise above 30% due to the parabolic traction bound. Therefore, a novel algorithm named FaStrip is proposed as an alternative to FASTSIM. It is based on the strip theory, which extends the two-dimensional rolling contact solution to three-dimensional contacts. To form FaStrip, the original strip theory is amended to obtain accurate estimations for any contact ellipse size, and it is combined with a numerical algorithm to handle spin. The comparison between the two algorithms shows that FaStrip improves the accuracy of both the estimated shear stress distribution and the creep force estimation in all studied cases. In combined lateral creepage and spin cases, for instance, the error in force estimation reduces from 18% to less than 2%. The estimation of the slip velocities in the slip zone, needed for wear analysis, is also studied. Since FaStrip is as fast as FASTSIM, it can serve as an alternative for the tangential solution of the wheel-rail contact in simulation packages.

  17. Artificial Intelligence Estimation of Carotid-Femoral Pulse Wave Velocity using Carotid Waveform.

    PubMed

    Tavallali, Peyman; Razavi, Marianne; Pahlevan, Niema M

    2018-01-17

    In this article, we offer an artificial intelligence method to estimate the carotid-femoral Pulse Wave Velocity (PWV) non-invasively from one uncalibrated carotid waveform measured by tonometry and a few routine clinical variables. Since the signal processing inputs to this machine learning algorithm are sensor agnostic, the presented method can accompany any medical instrument that provides a calibrated or uncalibrated carotid pressure waveform. Our results show that, for an unseen held-back test set population in the age range of 20 to 69, our model can estimate PWV with a Root-Mean-Square Error (RMSE) of 1.12 m/sec compared to the reference method. These results indicate that the model is a reliable surrogate of PWV. Our study also showed that estimated PWV was significantly associated with an increased risk of CVDs.

  18. A concept for a fuel efficient flight planning aid for general aviation

    NASA Technical Reports Server (NTRS)

    Collins, B. P.; Haines, A. L.; Wales, C. J.

    1982-01-01

    A core equation for the estimation of fuel burn from path profile data was developed. This equation was used as a necessary ingredient in a dynamic program that defines a fuel efficient flight path. The resultant algorithm is oriented toward use by general aviation. The pilot provides a description of the desired ground track, standard aircraft parameters, and weather at selected waypoints. The algorithm then derives the fuel efficient altitudes and velocities at the waypoints.
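    The waypoint optimization lends itself to a standard dynamic program: choose one altitude (or altitude/velocity pair) per waypoint so that the summed leg fuel is minimal. In the hedged sketch below, leg_fuel is a stand-in for the paper's fuel-burn core equation, which is not reproduced here; all names are illustrative.

```python
def best_profile(altitudes, n_waypoints, leg_fuel):
    """Dynamic program over per-waypoint altitude choices.
    leg_fuel(a, b): fuel burned flying one leg from altitude a to b."""
    cost = {a: 0.0 for a in altitudes}   # min fuel to reach waypoint 1 at a
    back = []
    for _ in range(n_waypoints - 1):
        nxt, choice = {}, {}
        for b in altitudes:
            best_a = min(altitudes, key=lambda a: cost[a] + leg_fuel(a, b))
            nxt[b] = cost[best_a] + leg_fuel(best_a, b)
            choice[b] = best_a
        back.append(choice)
        cost = nxt
    end = min(altitudes, key=lambda a: cost[a])
    path = [end]
    for choice in reversed(back):        # backtrack the optimal profile
        path.append(choice[path[-1]])
    return path[::-1], cost[end]
```

The table-per-waypoint structure keeps the search linear in the number of waypoints instead of exponential in the number of altitude combinations.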

  19. SU-E-J-92: Validating Dose Uncertainty Estimates Produced by AUTODIRECT, An Automated Program to Evaluate Deformable Image Registration Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires four inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). AUTODIRECT then uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently four). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student's t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions, within 1-6% for the 10 virtual phantoms and 9% for the physical phantom. For one of the cases, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.

  20. Satellite Angular Rate Estimation From Vector Measurements

    NASA Technical Reports Server (NTRS)

    Azor, Ruth; Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    1996-01-01

    This paper presents an algorithm for estimating the angular rate vector of a satellite based on the time derivatives of vector measurements expressed in reference and body coordinates. The computed derivatives are fed into a special Kalman filter which yields an estimate of the spacecraft angular velocity. The filter, named the Extended Interlaced Kalman Filter (EIKF), is an extension of the Kalman filter which, although being linear, estimates the state of a nonlinear dynamic system. It consists of two or three parallel Kalman filters whose individual estimates are fed to one another and are considered as known inputs by the other parallel filter(s). The nonlinear dynamics stem from the nonlinear differential equation that describes the rotation of a three-dimensional body. Initial results, using simulated data and real Rossi X-ray Timing Explorer (RXTE) data, indicate that the algorithm is efficient and robust.
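    The underlying observability idea, that body-frame derivatives of inertially fixed reference vectors encode the angular rate through db/dt = b × ω (= -ω × b), can be shown with a plain least-squares sketch. This is illustrative only, not the EIKF itself, and all names are assumptions:

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def angular_rate_from_vectors(vecs, vec_rates):
    """Solve db_i/dt = b_i x omega for omega by least squares over all
    measured reference vectors b_i and their body-frame derivatives."""
    A = np.vstack([skew(b) for b in vecs])
    y = np.hstack(vec_rates)
    omega, *_ = np.linalg.lstsq(A, y, rcond=None)
    return omega
```

A single vector leaves the rate component along that vector unobservable (skew(b) is rank 2), which is why at least two non-parallel vector measurements are needed.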

  1. A Muscle Fibre Conduction Velocity Tracking ASIC for Local Fatigue Monitoring.

    PubMed

    Koutsos, Ermis; Cretu, Vlad; Georgiou, Pantelis

    2016-12-01

    Electromyography analysis can provide information about a muscle's fatigue state by estimating Muscle Fibre Conduction Velocity (MFCV), a measure of the travelling speed of Motor Unit Action Potentials (MUAPs) in muscle tissue. MFCV better represents the physical manifestations of muscle fatigue, compared to the progressive compression of the myoelectric Power Spectral Density, hence it is more suitable for a muscle fatigue tracking system. This paper presents a novel algorithm for the estimation of MFCV using single-threshold bit-stream conversion and a dedicated application-specific integrated circuit (ASIC) for its implementation, suitable for a compact, wearable and easy-to-use muscle fatigue monitor. The presented ASIC is implemented in a commercially available AMS 0.35 μm CMOS technology and utilizes a bit-stream cross-correlator that estimates the conduction velocity of the myoelectric signal in real time. A test group of 20 subjects was used to evaluate the performance of the developed ASIC, achieving good accuracy with an error of only 3.2% compared to Matlab.
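    Cross-correlation-based MFCV estimation reduces to finding the inter-electrode delay that maximizes the cross-correlation of two EMG channels; velocity is then electrode spacing divided by that delay. A floating-point sketch of the same idea (the ASIC operates on single-threshold 1-bit streams instead; all names are illustrative) is:

```python
import numpy as np

def mfcv_estimate(ch1, ch2, fs, electrode_dist):
    """Estimate conduction velocity (m/s) from the lag that maximizes the
    cross-correlation of two EMG channels sampled at fs (Hz), separated
    by electrode_dist (m) along the fibre direction."""
    xc = np.correlate(ch2 - ch2.mean(), ch1 - ch1.mean(), mode="full")
    lag = np.argmax(xc) - (len(ch1) - 1)   # samples by which ch2 lags ch1
    delay = lag / fs                        # seconds
    return electrode_dist / delay
```

The integer-sample lag limits resolution; practical systems interpolate around the correlation peak, a refinement omitted here.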

  2. Analytical estimates of the PP-algorithm at low number of Doppler periods per pulse length

    NASA Technical Reports Server (NTRS)

    Angelova, M. D.; Stoykova, E. V.; Stoyanov, D. V.

    1992-01-01

    When discussing Doppler velocity estimators, it is of significant interest to analyze their behavior at a low number of Doppler periods, n_D = 2 v_r t_s / λ ≈ 1, within the resolution cell t_s (v_r is the radial velocity, λ is the wavelength). Obviously, for n_D ≲ 1 the velocity error is essentially increased. The problem of low n_D arises in the planetary boundary layer (PBL), where higher resolutions are usually required but the signal-to-noise ratio (SNR) is relatively high. In this work an analytical expression for the relative root mean square (RMS) error of the PP Doppler estimator at a low number of periods is obtained for a narrowband Doppler signal and an arbitrary model of the noise correlation function. The results are valid at relatively high SNR. The analysis is supported by computer simulations at various SNRs.
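    For reference, the pulse-pair (PP) estimator recovers radial velocity from the phase of the lag-one autocorrelation of the complex signal, v = λ · arg R(t_s) / (4π t_s) under the usual convention f_d = 2 v_r / λ. A minimal sketch (illustrative names; not the paper's error analysis):

```python
import numpy as np

def pulse_pair_velocity(z, t_s, wavelength):
    """Pulse-pair radial-velocity estimate from complex samples z taken
    at interval t_s: v = wavelength * arg(R(t_s)) / (4 pi t_s)."""
    R = np.mean(np.conj(z[:-1]) * z[1:])   # lag-one autocorrelation
    return wavelength * np.angle(R) / (4.0 * np.pi * t_s)
```

The arg() in the estimator is only unambiguous for phase increments inside (-π, π], which is exactly why performance degrades as n_D approaches and falls below one.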

  3. Incompressible Deformation Estimation Algorithm (IDEA) from Tagged MR Images

    PubMed Central

    Liu, Xiaofeng; Abd-Elmoniem, Khaled Z.; Stone, Maureen; Murano, Emi Z.; Zhuo, Jiachen; Gullapalli, Rao P.; Prince, Jerry L.

    2013-01-01

    Measuring the three-dimensional motion of muscular tissues, e.g., the heart or the tongue, using magnetic resonance (MR) tagging is typically carried out by interpolating the two-dimensional motion information measured on orthogonal stacks of images. The incompressibility of muscle tissue is an important constraint on the reconstructed motion field and can significantly help to counter the sparsity and incompleteness of the available motion information. Previous methods utilizing this fact produced incompressible motions with limited accuracy. In this paper, we present an incompressible deformation estimation algorithm (IDEA) that reconstructs a dense representation of the three-dimensional displacement field from tagged MR images and the estimated motion field is incompressible to high precision. At each imaged time frame, the tagged images are first processed to determine components of the displacement vector at each pixel relative to the reference time. IDEA then applies a smoothing, divergence-free, vector spline to interpolate velocity fields at intermediate discrete times such that the collection of velocity fields integrate over time to match the observed displacement components. Through this process, IDEA yields a dense estimate of a three-dimensional displacement field that matches our observations and also corresponds to an incompressible motion. The method was validated with both numerical simulation and in vivo human experiments on the heart and the tongue. PMID:21937342

  4. Human arm stiffness and equilibrium-point trajectory during multi-joint movement.

    PubMed

    Gomi, H; Kawato, M

    1997-03-01

    By using a newly designed high-performance manipulandum and a new estimation algorithm, we measured human multi-joint arm stiffness parameters during multi-joint point-to-point movements on a horizontal plane. This manipulandum allows us to apply a sufficient perturbation to the subject's arm within a brief period during movement. Arm stiffness parameters were reliably estimated using a new algorithm in which all unknown structural parameters could be estimated independent of arm posture (i.e., as constant values under any arm posture). Arm stiffness during transverse movement was considerably greater than during the corresponding posture, but not during longitudinal movement. Although the ratios of elbow, shoulder, and double-joint stiffness varied in time, the orientation of the stiffness ellipses did not change much during the movement. Equilibrium-point trajectories predicted from the measured stiffness parameters and actual trajectories were slightly sinusoidally curved in Cartesian space, and their velocity profiles were quite different from those of the actual hand trajectories. This result contradicts the hypothesis that the brain does not take arm dynamics into account in movement control, relying instead on the neuromuscular servo mechanism; rather, it implies that the brain needs to acquire internal models of the controlled objects.

  5. Study of the mode of angular velocity damping for a spacecraft at non-standard situation

    NASA Astrophysics Data System (ADS)

    Davydov, A. A.; Sazonov, V. V.

    2012-07-01

    Non-standard situation on a spacecraft (Earth's satellite) is considered, when there are no measurements of the spacecraft's angular velocity component relative to one of its body axes. Angular velocity measurements are used in controlling spacecraft's attitude motion by means of flywheels. The arising problem is to study the operation of standard control algorithms in the absence of some necessary measurements. In this work this problem is solved for the algorithm ensuring the damping of spacecraft's angular velocity. Such a damping is shown to be possible not for all initial conditions of motion. In the general case one of two possible final modes is realized, each described by stable steady-state solutions of the equations of motion. In one of them, the spacecraft's angular velocity component relative to the axis, for which the measurements are absent, is nonzero. The estimates of the regions of attraction are obtained for these steady-state solutions by numerical calculations. A simple technique is suggested that allows one to eliminate the initial conditions of the angular velocity damping mode from the attraction region of an undesirable solution. Several realizations of this mode that have taken place are reconstructed. This reconstruction was carried out using approximations of telemetry values of the angular velocity components and the total angular momentum of flywheels, obtained at the non-standard situation, by solutions of the equations of spacecraft's rotational motion.

  6. MODELING FLUX PATHWAYS TO VEGETATION FOR VOLATILE AND SEMI-VOLATILE ORGANIC COMPOUNDS IN A MULTIMEDIA ENVIRONMENT

    EPA Science Inventory

    This study evaluates the treatment of gas-phase atmospheric deposition in a screening level model of the multimedia environmental distribution of toxics (MEND-TOX). Recent algorithmic additions to MEND-TOX for the estimation of gas-phase deposition velocity over vegetated surf...

  7. Characterization of Moving Dust Particles

    NASA Technical Reports Server (NTRS)

    Bos, Brent J.; Antonille, Scott R.; Memarsadeghi, Nargess

    2010-01-01

    A large depth-of-field Particle Image Velocimeter (PIV) has been developed at NASA GSFC to characterize dynamic dust environments on planetary surfaces. This instrument detects and measures lofted dust particles. We have been developing an autonomous image analysis algorithm architecture for the PIV instrument to greatly reduce the amount of data that it has to store and downlink. The algorithm analyzes PIV images and reduces the image information down to only the particle measurement data we are interested in receiving on the ground, typically reducing the amount of data to be handled by more than two orders of magnitude. We give a general description of PIV algorithms and describe only the algorithm for estimating the velocity of the traveling particles.
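    A common building block for PIV velocity estimation is FFT-based cross-correlation of interrogation windows from successive frames; the correlation-peak offset is the particle-pattern displacement, and dividing by the inter-frame time gives velocity. A minimal integer-pixel sketch (illustrative only, not the GSFC flight algorithm):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of the particle pattern from window
    win_a to win_b via FFT cross-correlation (circular)."""
    fa = np.fft.rfft2(win_a - win_a.mean())
    fb = np.fft.rfft2(win_b - win_b.mean())
    corr = np.fft.irfft2(fa.conj() * fb, s=win_a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts beyond half the window into negative displacements
    if dy > win_a.shape[0] // 2:
        dy -= win_a.shape[0]
    if dx > win_a.shape[1] // 2:
        dx -= win_a.shape[1]
    return int(dy), int(dx)
```

Real implementations add sub-pixel peak interpolation and reject windows with weak correlation peaks; both refinements are omitted here.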

  8. An enhanced inertial navigation system based on a low-cost IMU and laser scanner

    NASA Astrophysics Data System (ADS)

    Kim, Hyung-Soon; Baeg, Seung-Ho; Yang, Kwang-Woong; Cho, Kuk; Park, Sangdeok

    2012-06-01

    This paper describes an enhanced fusion method for an Inertial Navigation System (INS) based on a 3-axis accelerometer, a 3-axis gyroscope and a laser scanner. In GPS-denied environments, indoors or in dense forests, pure INS odometry is available for estimating the trajectory of a human or robot. However, it has a critical implementation problem: drift errors in velocity, position and heading angles. Commonly the problem is solved by fusing visual landmarks, a magnetometer or radio beacons. These methods are not robust in diverse environments: darkness, fog or sunlight, an unstable magnetic field, and environmental obstacles. We propose to overcome the drift problem using an Iterative Closest Point (ICP) scan matching algorithm with a laser scanner. This system consists of three parts. The first is the INS. It estimates attitude, velocity and position based on a 6-axis Inertial Measurement Unit (IMU) with both 'Heuristic Reduction of Gyro Drift' (HRGD) and 'Heuristic Reduction of Velocity Drift' (HRVD) methods. The second is a frame-to-frame ICP matching algorithm for estimating position and attitude from laser scan data. The third is an extended Kalman filter for multi-sensor data fusion: INS and Laser Range Finder (LRF). The proposed method is simple and robust in diverse environments, so the drift error could be reduced efficiently. We confirm the result by comparing the odometry of the experimental results with the ICP and LRF-aided INS in a long corridor.
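    The frame-to-frame scan matching stage can be sketched as a minimal point-to-point ICP in 2-D: alternately match nearest neighbours and solve the best rigid transform with an SVD (Kabsch) step. This is illustrative only; practical implementations use k-d trees, outlier rejection and convergence tests.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Align point set src (N,2) to dst (M,2); returns accumulated R, t
    such that dst ~= src @ R.T + t."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[np.argmin(d2, axis=1)]
        # Kabsch: best rotation between centred point sets
        mc, mn = cur.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((cur - mc).T @ (nn - mn))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:   # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mn - Ri @ mc
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti  # compose incremental transform
    return R, t
```

When successive scans overlap well and the motion between frames is small relative to point spacing, the nearest-neighbour matches are correct and the solve converges in very few iterations.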

  9. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    USGS Publications Warehouse

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distribution in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distribution in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to effectively find the optimal smoothness criterion in applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.
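    The regularized inversion step can be sketched as a damped linearized least-squares update. The example below uses zeroth-order Tikhonov damping on the model update, a deliberate simplification of the smoothness (higher-order Tikhonov) regularization discussed in the abstract; G would be the ray-path sensitivity matrix and d the travel-time data.

```python
import numpy as np

def tikhonov_step(G, d, m0, alpha):
    """One linearized update: minimize ||G dm - r||^2 + alpha ||dm||^2
    with r = d - G m0 the current data residual."""
    r = d - G @ m0
    A = G.T @ G + alpha * np.eye(G.shape[1])
    dm = np.linalg.solve(A, G.T @ r)
    return m0 + dm
```

For a nonlinear problem, G is rebuilt from updated ray paths each iteration; the choice of alpha is exactly the smoothness-criterion question the abstract flags as open.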

  10. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors

    PubMed Central

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-01-01

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and night-sky experiment are performed to validate the efficiency and reliability of the proposed method. PMID:28825684

  11. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.

    PubMed

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-08-21

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and night-sky experiment are performed to validate the efficiency and reliability of the proposed method.
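    The prediction step of such a filter propagates the attitude quaternion with the estimated angular velocity. A closed-form sketch for a constant rate over one step (scalar-first quaternion convention; illustrative names, not the paper's filter) is:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product, scalar-first convention [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def propagate_quaternion(q, omega, dt):
    """Propagate attitude over dt assuming the body rate omega (rad/s)
    is constant across the step: q <- q * exp(omega * dt / 2)."""
    th = np.linalg.norm(omega) * dt
    if th < 1e-12:
        return np.asarray(q, dtype=float)
    axis = np.asarray(omega) / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(th / 2.0)], np.sin(th / 2.0) * axis))
    return quat_mul(q, dq)
```

An EKF would wrap this propagation with a covariance prediction and a measurement update from the predicted versus measured star positions; only the kinematics are shown here.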

  12. A comparison of methods to estimate seismic phase delays--Numerical examples for coda wave interferometry

    USGS Publications Warehouse

    Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.

    2015-01-01

    Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depends on this time shift. Windowed cross correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here, we implement Dynamic Time Warping (DTW) to estimate the warping function, a vector of local time shifts that globally minimizes the misfit between two seismic traces. We illustrate the differences among the three methods using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has better time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements. DTW is a new tool that may find new applications in seismology and other geophysical methods (e.g., as a waveform inversion misfit function).
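    DTW itself is a short dynamic program over a local misfit matrix, followed by a backtrack to recover the warping path. A minimal implementation for two 1-D traces (an L1 local misfit is assumed for illustration):

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic time warping between 1-D traces a and b.
    Returns (total misfit, warping path of index pairs)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(a[i - 1] - b[j - 1])            # local misfit
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the globally optimal alignment
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```

The per-sample difference between the path's two indices is the vector of local time shifts referred to in the abstract.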

  13. A processing work-flow for measuring erythrocytes velocity in extended vascular networks from wide field high-resolution optical imaging data.

    PubMed

    Deneux, Thomas; Takerkart, Sylvain; Grinvald, Amiram; Masson, Guillaume S; Vanzetta, Ivo

    2012-02-01

    Comprehensive information on the spatio-temporal dynamics of the vascular response is needed to underpin the signals used in hemodynamics-based functional imaging. It has recently been shown that red blood cells (RBCs) velocity and its changes can be extracted from wide-field optical imaging recordings of intrinsic absorption changes in cortex. Here, we describe a complete processing work-flow for reliable RBC velocity estimation in cortical networks. Several pre-processing steps are implemented: image co-registration, necessary to correct for small movements of the vasculature, semi-automatic image segmentation for fast and reproducible vessel selection, reconstruction of RBC trajectories patterns for each micro-vessel, and spatio-temporal filtering to enhance the desired data characteristics. The main analysis step is composed of two robust algorithms for estimating the RBCs' velocity field. Vessel diameter and its changes are also estimated, as well as local changes in backscattered light intensity. This full processing chain is implemented with a software suite that is freely distributed. The software uses efficient data management for handling the very large data sets obtained with in vivo optical imaging. It offers a complete and user-friendly graphical user interface with visualization tools for displaying and exploring data and results. A full data simulation framework is also provided in order to optimize the performances of the algorithm with respect to several characteristics of the data. We illustrate the performance of our method in three different cases of in vivo data. We first document the massive RBC speed response evoked by a spreading depression in anesthetized rat somato-sensory cortex. Second, we show the velocity response elicited by a visual stimulation in anesthetized cat visual cortex. Finally, we report, for the first time, visually-evoked RBC speed responses in an extended vascular network in awake monkey extrastriate cortex. 
Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme

    NASA Astrophysics Data System (ADS)

    Hsin, Cheng-Ho; Inigo, Rafael M.

    1990-03-01

    The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed, derived using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image are constant at times t and t+L. The scheme has two stages: spatio-temporal filtering and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters designed using a Gaussian derivative model. The velocity is then estimated from these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over traditional ones is that an infinite number of motion constraint equations is derived instead of only one; it therefore solves the aperture problem without requiring any additional assumptions and remains a purely local process. The second advantage is that, because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided, so the error in gradient measurement is reduced significantly. The third advantage is that during motion detection and estimation, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by the parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a wide range of velocities can be detected. The scheme has been tested on both synthetic and real images, and the results of the simulations are very satisfactory.
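    The filter-then-gradient idea can be illustrated with a classic brightness-constancy estimator on a 1-D signal (a simplified relative of the scheme above, not the authors' method; the function name and parameters are ours, and SciPy's Gaussian-derivative filter stands in for the oriented filter bank):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gradient_velocity(frame0, frame1, sigma=2.0):
    """Least-squares gradient (brightness-constancy) velocity estimate for
    a 1-D signal translating between two frames: solve I_x * v + I_t = 0
    over the whole signal. Gaussian-derivative filtering replaces direct
    differencing of raw intensities, as in the spatio-temporal filtering stage."""
    ix = gaussian_filter1d(0.5 * (frame0 + frame1), sigma, order=1)  # spatial gradient
    it = gaussian_filter1d(frame1 - frame0, sigma, order=0)          # temporal gradient
    return -np.sum(ix * it) / np.sum(ix * ix)                        # pixels per frame
```

    For a Gaussian bump translated by 1.5 pixels between frames, the estimate comes out close to 1.5; a multi-channel mechanism would repeat this with different filter widths sigma and combine the channels.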

  15. Phase Helps Find Geometrically Optimal Gaits

    NASA Astrophysics Data System (ADS)

    Revzen, Shai; Hatton, Ross

    Geometric motion planning describes motions of animals and machines governed by ġ = g A(q) q̇, where the connection A(·) relates the shape q and shape velocity q̇ to the body-frame velocity g⁻¹ġ ∈ se(3). Measuring the entire connection over a multidimensional q is often unfeasible with current experimental methods. We show how using a phase estimator can make tractable measuring the local structure of the connection surrounding a periodic motion q(φ) driven by a phase φ ∈ S¹. This approach reduces the complexity of the estimation problem by a factor of dim q. The results suggest that phase estimation can be combined with geometric optimization into an iterative gait optimization algorithm usable on experimental systems, or alternatively, to allow the geometric optimality of an observed gait to be detected. ARO W911NF-14-1-0573, NSF 1462555.

  16. Stochastic inversion of cross-borehole radar data from metalliferous vein detection

    NASA Astrophysics Data System (ADS)

    Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui

    2017-12-01

    In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods (least-squares inversion, LSQR) recover only indirect parameters (permittivity, resistivity, or velocity) to estimate the target structure, and cannot accurately reflect the media properties of the metalliferous veins. In order to obtain the intrinsic geological parameters and internal distribution, in this paper we build a metalliferous vein model based on stochastic effective medium theory and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution permittivity and velocity models of the target body, allowing a more accurate estimate of the distribution of anomalies and of the target's internal parameters. This provides a new approach for evaluating the properties of complex target media.
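    The Monte Carlo sampling step can be sketched with a Metropolis sampler on a deliberately simplified forward model (a homogeneous medium with straight-ray travel times; the function names, starting model, and noise level are our own assumptions, not the paper's parameterization):

```python
import numpy as np

C = 0.3  # electromagnetic wave speed in vacuum, m/ns

def metropolis_permittivity(distances, times, n_iter=20000,
                            sigma_n=0.5, step=0.2, seed=0):
    """Metropolis sampler for the relative permittivity of a homogeneous
    medium from cross-borehole travel times t = d * sqrt(eps) / C.
    Returns posterior samples rather than a single least-squares model."""
    rng = np.random.default_rng(seed)

    def misfit(eps):
        return np.sum((times - distances * np.sqrt(eps) / C) ** 2)

    eps = 5.0            # starting model
    m = misfit(eps)
    samples = []
    for _ in range(n_iter):
        prop = eps + step * rng.standard_normal()
        if prop > 1.0:   # physical bound on relative permittivity
            mp = misfit(prop)
            # accept downhill moves always, uphill moves with Boltzmann probability
            if mp < m or rng.random() < np.exp((m - mp) / (2 * sigma_n ** 2)):
                eps, m = prop, mp
        samples.append(eps)
    return np.array(samples[n_iter // 2:])  # discard burn-in
```

    The spread of the returned samples gives an uncertainty estimate that a single LSQR model does not provide.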

  17. Rapid determination of particle velocity from space-time images using the Radon transform

    PubMed Central

    Drew, Patrick J.; Blinder, Pablo; Cauwenberghs, Gert; Shih, Andy Y.; Kleinfeld, David

    2016-01-01

    Laser-scanning methods are a means to observe streaming particles, such as the flow of red blood cells in a blood vessel. Typically, particle velocity is extracted from images formed from cyclically repeated line-scan data that is obtained along the center-line of the vessel; motion leads to streaks whose angle is a function of the velocity. Past methods made use of shearing or rotation of the images and a Singular Value Decomposition (SVD) to automatically estimate the average velocity in a temporal window of data. Here we present an alternative method that makes use of the Radon transform to calculate the velocity of streaming particles. We show that this method is over an order of magnitude faster than the SVD-based algorithm and is more robust to noise. PMID:19459038
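    The Radon-transform idea (project the space-time image at many angles and keep the angle whose projection is sharpest) can be sketched via brute-force rotation (an illustrative stand-in using SciPy, not the authors' code; the function name and angle grid are ours):

```python
import numpy as np
from scipy.ndimage import rotate

def streak_angle(img, angles=np.arange(-89.0, 90.0, 1.0)):
    """Radon-style streak-angle estimate: project the space-time image at
    each trial angle and keep the angle whose projection has maximum
    variance; there the projection integrates along the streaks, so the
    profile is sharpest. Velocity then follows from the tangent of this
    angle and the line-scan parameters."""
    best, best_var = 0.0, -np.inf
    for a in angles:
        proj = rotate(img, a, reshape=False, order=1).sum(axis=0)
        v = proj.var()
        if v > best_var:
            best, best_var = a, v
    return best
```

    Maximizing projection variance over angle is a global, windowed operation, which is what makes the approach robust to noise compared with per-streak fitting.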

  18. Feasibility of waveform inversion of Rayleigh waves for shallow shear-wave velocity using a genetic algorithm

    USGS Publications Warehouse

    Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.

    2011-01-01

    Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and hence does not have such a limitation. Waveforms of Rayleigh waves are closely related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA, and final solutions are found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully with errors of no more than 10% for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of the initial models. For earth models containing lateral heterogeneity, for which conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
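    The offspring-generation loop of a GA waveform inversion can be sketched as follows. Everything here is a hedged stand-in: the toy `forward` function (a decaying oscillation) replaces the finite-difference elastic solver, and the operator choices (tournament selection, arithmetic crossover, Gaussian mutation, elitism) are ours, not necessarily the authors':

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(m):
    """Toy stand-in for the finite-difference forward model: a decaying
    oscillation whose frequency and decay act as the model parameters."""
    t = np.linspace(0.0, 1.0, 200)
    return np.sin(2 * np.pi * m[0] * t) * np.exp(-m[1] * t)

def ga_invert(observed, lo, hi, pop=40, gens=60):
    """Minimal genetic algorithm: tournament selection, arithmetic
    crossover, Gaussian mutation and elitism; fitness = -waveform misfit."""
    X = rng.uniform(lo, hi, size=(pop, len(lo)))

    def fitness(x):
        return -np.sum((forward(x) - observed) ** 2)

    for _ in range(gens):
        F = np.array([fitness(x) for x in X])
        new = [X[F.argmax()].copy()]                        # elitism: keep the best model
        while len(new) < pop:
            i, j = rng.integers(pop, size=2)
            p1 = X[i] if F[i] > F[j] else X[j]              # tournament parent 1
            i, j = rng.integers(pop, size=2)
            p2 = X[i] if F[i] > F[j] else X[j]              # tournament parent 2
            w = rng.random()
            child = w * p1 + (1 - w) * p2                   # arithmetic crossover
            child = child + rng.normal(0.0, 0.02 * (hi - lo))  # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        X = np.array(new)
    F = np.array([fitness(x) for x in X])
    return X[F.argmax()]
```

    Because fitness is evaluated only through the forward model, no initial model or partial derivatives are needed, which is the point made in the abstract.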

  19. Smoothing-Based Relative Navigation and Coded Aperture Imaging

    NASA Technical Reports Server (NTRS)

    Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher

    2017-01-01

    This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that approaches an exact solution, as opposed to one with the linearization approximations typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.

  20. Neural network fusion capabilities for efficient implementation of tracking algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Amoozegar, Farid

    1997-03-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While the development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and the implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target-tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.
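    The Kalman-filter component of such an architecture (without the neural fusion front end) is the standard constant-velocity tracker sketched below; the function name and noise parameters are our own illustrative choices:

```python
import numpy as np

def track_cv(zs, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter estimating [position, velocity] of a
    target from noisy position measurements zs, i.e., the classical
    tracking filter that a data-fusion front end would feed."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # state transition
    H = np.array([[1.0, 0.0]])                            # position-only measurement
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                   # process noise covariance
    R = np.array([[r]])                                   # measurement noise covariance
    x = np.array([zs[0], 0.0])
    P = 10.0 * np.eye(2)
    out = []
    for z in zs:
        x = F @ x                                         # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)               # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

    The velocity component of the state converges to the target's true speed even though only positions are measured, which is what makes position/velocity estimation the natural benchmark in the text above.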

  1. A method and implementation for incorporating heuristic knowledge into a state estimator through the use of a fuzzy model

    NASA Astrophysics Data System (ADS)

    Swanson, Steven Roy

    The objective of the dissertation is to improve state estimation performance, as compared to a Kalman filter, when non-constant, or changing, biases exist in the measurement data. The state estimation performance increase will come from the use of a fuzzy model to determine the position and velocity gains of a state estimator. A method is proposed for incorporating heuristic knowledge into a state estimator through the use of a fuzzy model. This method consists of using a fuzzy model to determine the gains of the state estimator, converting the heuristic knowledge into the fuzzy model, and then optimizing the fuzzy model with a genetic algorithm. This method is applied to the problem of state estimation of a cascaded global positioning system (GPS)/inertial reference unit (IRU) navigation system. The GPS position data contains two major sources for position bias. The first bias is due to satellite errors and the second is due to the time delay or lag from when the GPS position is calculated until it is used in the state estimator. When a change in the bias of the measurement data occurs, a state estimator will converge on the new measurement data solution. This will introduce errors into a Kalman filter's estimated state velocities, which in turn will cause a position overshoot as it converges. By using a fuzzy model to determine the gains of a state estimator, the velocity errors and their associated deficiencies can be reduced.

  2. Lagrangian analysis by clustering. An example in the Nordic Seas.

    NASA Astrophysics Data System (ADS)

    Koszalka, Inga; Lacasce, Joseph H.

    2010-05-01

    We propose a new method for obtaining average velocities and eddy diffusivities from Lagrangian data. Rather than grouping the drifter-derived velocities in uniform geographical bins, as is commonly done, we group a specified number of nearest-neighbor velocities. This is done via a clustering algorithm operating on the instantaneous positions of the drifters. Thus it is the data distribution itself which determines the positions of the averages and the areal extent of the clusters. A major advantage is that because the number of members is essentially the same for all clusters, the statistical accuracy is more uniform than with geographical bins. We illustrate the technique using synthetic data from a stochastic model, employing a realistic mean flow. The latter is an accurate representation of the surface currents in the Nordic Seas and is strongly inhomogeneous in space. We use the clustering algorithm to extract the mean velocities and diffusivities (both of which are known from the stochastic model). We also compare the results to those obtained with fixed geographical bins. Clustering is more successful at capturing spatial variability of the mean flow and also improves convergence in the eddy diffusivity estimates. We discuss both the future prospects and shortcomings of the new method.

  3. Nonlinear stability of traffic models and the use of Lyapunov vectors for estimating the traffic state

    NASA Astrophysics Data System (ADS)

    Palatella, Luigi; Trevisan, Anna; Rambaldi, Sandro

    2013-08-01

    Valuable information for estimating the traffic flow is obtained with current GPS technology by monitoring position and velocity of vehicles. In this paper, we present a proof of concept study that shows how the traffic state can be estimated using only partial and noisy data by assimilating them in a dynamical model. Our approach is based on a data assimilation algorithm, developed by the authors for chaotic geophysical models, designed to be equivalent but computationally much less demanding than the traditional extended Kalman filter. Here we show that the algorithm is even more efficient if the system is not chaotic and demonstrate by numerical experiments that an accurate reconstruction of the complete traffic state can be obtained at a very low computational cost by monitoring only a small percentage of vehicles.

  4. Fast beampattern evaluation by polynomial rooting

    NASA Astrophysics Data System (ADS)

    Häcker, P.; Uhlich, S.; Yang, B.

    2011-07-01

    Current automotive radar systems measure the distance, relative velocity and direction of objects in their environment. This information enables the car to support the driver. The direction estimation capability of a sensor array depends on its beampattern. To find the array configuration leading to the best angle estimation with a global optimization algorithm, a huge number of beampatterns has to be calculated and their maxima detected. In this paper, a novel algorithm is proposed to find all maxima of an array's beampattern quickly and reliably, leading to accelerated array optimization. The algorithm works for arrays having their sensors on a uniformly spaced grid. We use a generalized version of the gcd (greatest common divisor) function to write the problem as a polynomial, then differentiate and root the polynomial to obtain the extrema of the beampattern. In addition, we show a method to reduce the computational burden further by decreasing the order of the polynomial.
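    The differentiate-and-root idea can be sketched for a uniform linear array. This is our own small formulation, not the paper's gcd-based construction: the autocorrelation of the weights gives the trigonometric polynomial |B(ψ)|², and rooting its derivative on the unit circle yields every extremum at once, with no grid search:

```python
import numpy as np

def beampattern_extrema(w):
    """All extrema of the ULA power beampattern |B(psi)|^2, where
    B(psi) = sum_n w[n] e^{j n psi}, by polynomial rooting: |B|^2 is a
    trigonometric polynomial whose coefficients are the autocorrelation
    of the weights; its derivative, mapped onto z = e^{j psi}, becomes an
    ordinary polynomial whose unit-circle roots are the extremum angles."""
    N = len(w)
    c = np.correlate(w, w, mode='full')   # autocorrelation c_k, k = -(N-1)..(N-1)
    k = np.arange(-(N - 1), N)
    dcoef = 1j * k * c                    # d|B|^2/dpsi, coefficient of z^(k+N-1)
    roots = np.roots(dcoef[::-1])         # np.roots expects highest power first
    on_circle = roots[np.isclose(np.abs(roots), 1.0, atol=1e-4)]
    return np.sort(np.angle(on_circle))   # extremum angles psi in (-pi, pi]
```

    Evaluating |B|² only at these few angles replaces the dense angle sweep that a global array optimization would otherwise repeat for every candidate configuration.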

  5. VeLoc: Finding Your Car in Indoor Parking Structures.

    PubMed

    Gao, Ruipeng; He, Fangpu; Li, Teng

    2018-05-02

    While WiFi-based indoor localization is attractive, there are many indoor places without WiFi coverage that still have a strong demand for localization capability. This paper describes a system and associated algorithms to address the indoor vehicle localization problem without the installation of additional infrastructure. We propose VeLoc, which uses the sensor data of smartphones in the vehicle together with the floor map of the parking structure to track the vehicle in real time. VeLoc simultaneously harnesses constraints imposed by the map and by environment sensing. All these cues are codified into a novel augmented particle filtering framework to estimate the position of the vehicle. Experimental results show that VeLoc performs well even when the initial position and initial heading direction of the vehicle are completely unknown.
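    The map-constrained particle filtering idea can be sketched on a toy floor map (the L-shaped corridor, noise levels, and function names below are our own assumptions, not VeLoc's actual map model or sensing pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def drivable(p):
    """Hypothetical L-shaped parking-garage corridor used as the floor map."""
    x, y = p[..., 0], p[..., 1]
    return ((0 <= x) & (x <= 10) & (0 <= y) & (y <= 2)) | \
           ((8 <= x) & (x <= 10) & (0 <= y) & (y <= 10))

def particle_track(moves, n=500):
    """Map-constrained particle filter: propagate particles with noisy
    dead-reckoned moves, zero the weight of particles that leave the
    drivable area, resample, and report the particle-cloud mean."""
    pts = np.array([1.0, 1.0]) + rng.normal(0.0, 0.3, (n, 2))  # roughly known start
    estimates = []
    for mv in moves:
        pts = pts + mv + rng.normal(0.0, 0.15, pts.shape)  # odometry step + noise
        w = drivable(pts).astype(float)                    # map constraint as weight
        w /= w.sum()
        pts = pts[rng.choice(n, n, p=w)]                   # multinomial resampling
        estimates.append(pts.mean(axis=0))
    return np.array(estimates)
```

    Driving along the corridor and then turning into the vertical arm keeps the surviving particles pinned to the drivable area, so the cloud mean tracks the vehicle despite the noisy odometry.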

  6. Frequentist and Bayesian Orbital Parameter Estimation from Radial Velocity Data Using RVLIN, BOOTTRAN, and RUN DMC

    NASA Astrophysics Data System (ADS)

    Nelson, Benjamin Earl; Wright, Jason Thomas; Wang, Sharon

    2015-08-01

    For this hack session, we will present three tools used in analyses of radial velocity exoplanet systems. RVLIN is a set of IDL routines used to quickly fit an arbitrary number of Keplerian curves to radial velocity data to find adequate parameter point estimates. BOOTTRAN is an IDL-based extension of RVLIN to provide orbital parameter uncertainties using bootstrap based on a Keplerian model. RUN DMC is a highly parallelized Markov chain Monte Carlo algorithm that employs an n-body model, primarily used for dynamically complex or poorly constrained exoplanet systems. We will compare the performance of these tools and their applications to various exoplanet systems.

  7. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field.

    PubMed

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-09-09

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors apply zero-velocity updates (ZUPTs) to reduce drift in the navigation solution and to estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, the heading error is not observable, so the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of additional motion constraints on the pedestrian gait and of any other available heading information. In this paper, we exploit two further motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called a "virtual sensor"), though considerably reducing drift in the PNS, still need an absolute heading reference. One common absolute heading sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed to incorporate only healthy magnetometer data in the EKF update step, reducing drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.
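    The core ZUPT mechanism can be sketched in one dimension: a sliding-window variance test detects stance phases, and a zero-velocity reset there bounds the drift that a constant sensor bias would otherwise accumulate. The thresholds and function name are ours; the paper's full system applies the same idea as a pseudo-measurement in an EKF rather than a hard reset:

```python
import numpy as np

def integrate_with_zupt(acc, dt=0.01, var_thresh=1e-3, win=25):
    """Integrate a (biased) 1-D acceleration to velocity, applying a
    zero-velocity update whenever a sliding-window variance test flags
    the foot as stationary. The reset bounds bias-driven drift to at
    most one stride's worth."""
    v = np.zeros(len(acc))
    for i in range(1, len(acc)):
        v[i] = v[i - 1] + acc[i] * dt
        if i >= win and np.var(acc[i - win + 1:i + 1]) < var_thresh:
            v[i] = 0.0  # stance phase detected: true foot velocity is zero
    return v
```

    With a constant accelerometer bias, plain integration drifts by roughly bias × time, while the ZUPT-corrected velocity returns to zero at every stance.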

  8. Field assessment of alternative bed-load transport estimators

    USGS Publications Warehouse

    Gaeuman, G.; Jacobson, R.B.

    2007-01-01

    Measurement of near-bed sediment velocities with acoustic Doppler current profilers (ADCPs) is an emerging approach for quantifying bed-load sediment fluxes in rivers. Previous investigations of the technique have relied on conventional physical bed-load sampling to provide reference transport information with which to validate the ADCP measurements. However, physical samples are subject to substantial errors, especially under the field conditions in which surrogate methods are most needed. Comparisons of ADCP bed velocity measurements with bed-load transport rates estimated from bed-form migration rates in the lower Missouri River show a strong correlation between the two surrogate measures over a wide range of mild to moderately intense sediment-transporting conditions. The correlation between the ADCP measurements and physical bed-load samples is comparatively poor, suggesting that physical bed-load sampling is ineffective for ground-truthing alternative techniques in large sand-bed rivers. Bed velocities measured in this study became more variable with increasing bed-form wavelength at higher shear stresses. Under these conditions, bed-form dimensions greatly exceed the region of the bed ensonified by the ADCP, and the magnitude of the acoustic measurements depends on the instrument location with respect to bed-form crests and troughs. Alternative algorithms for estimating bed-load transport from paired longitudinal profiles of bed topography were evaluated. An algorithm based on the routing of local erosion and deposition volumes, which eliminates the need to identify individual bed forms, was found to give results similar to those of more conventional dune-tracking methods. This method is particularly useful in cases where complex bed-form morphology makes delineation of individual bed forms difficult. © 2007 ASCE.

  9. Cross-correlation least-squares reverse time migration in the pseudo-time domain

    NASA Astrophysics Data System (ADS)

    Li, Qingyang; Huang, Jianping; Li, Zhenchun

    2017-08-01

    The least-squares reverse time migration (LSRTM) method, with its higher image resolution and amplitude fidelity, is becoming increasingly popular. However, LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, its large computational cost, and the mismatch of amplitudes between synthetic and observed data. To overcome these shortcomings of conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in the pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguity, but also reduces the effect of velocity errors on the imaging results, relaxing the accuracy requirements that least-squares migration (LSM) places on the migration velocity model. The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction; it can thus reduce the vertical grid points and the memory used during computation, which makes our method more computationally efficient than the standard implementation. Moreover, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement of strong amplitude matching in LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is sensitive only to the similarity between the predicted and observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability to complex models.

  10. A new method for ultrasound detection of interfacial position in gas-liquid two-phase flow.

    PubMed

    Coutinho, Fábio Rizental; Ofuchi, César Yutaka; de Arruda, Lúcia Valéria Ramos; Neves, Flávio; Morales, Rigoberto E M

    2014-05-22

    Ultrasonic measurement techniques for velocity estimation are currently widely used in fluid flow studies and applications. An accurate determination of the interfacial position in gas-liquid two-phase flows is still an open problem. The quality of this information directly reflects on the accuracy of void fraction measurement, and it provides a means of discriminating the velocity information of the two phases. The algorithm known as Velocity Matched Spectrum (VM Spectrum) is a velocity estimator that stands out from other methods by returning a spectrum of velocities for each interrogated volume sample. Interface detection of free-rising bubbles in quiescent liquid presents some difficulties due to abrupt changes in interface inclination. In this work a method based on the shape of the velocity spectrum curve is used to generate a spatio-temporal mapping which, after spatial filtering, yields an accurate contour of the air-water interface. It is shown that the proposed technique yields an RMS error between 1.71 and 3.39 and a probability of detection failure and false detection between 0.89% and 11.9% in determining the spatio-temporal gas-liquid interface position in the flow of free-rising bubbles in stagnant liquid. This result holds both for a free path and with the transducer emitting through a metallic plate or a Plexiglas pipe.

  11. A New Method for Ultrasound Detection of Interfacial Position in Gas-Liquid Two-Phase Flow

    PubMed Central

    Coutinho, Fábio Rizental; Ofuchi, César Yutaka; de Arruda, Lúcia Valéria Ramos; Neves, Flávio, Jr.; Morales, Rigoberto E. M.

    2014-01-01

    Ultrasonic measurement techniques for velocity estimation are currently widely used in fluid flow studies and applications. An accurate determination of the interfacial position in gas-liquid two-phase flows is still an open problem. The quality of this information directly reflects on the accuracy of void fraction measurement, and it provides a means of discriminating the velocity information of the two phases. The algorithm known as Velocity Matched Spectrum (VM Spectrum) is a velocity estimator that stands out from other methods by returning a spectrum of velocities for each interrogated volume sample. Interface detection of free-rising bubbles in quiescent liquid presents some difficulties due to abrupt changes in interface inclination. In this work a method based on the shape of the velocity spectrum curve is used to generate a spatio-temporal mapping which, after spatial filtering, yields an accurate contour of the air-water interface. It is shown that the proposed technique yields an RMS error between 1.71 and 3.39 and a probability of detection failure and false detection between 0.89% and 11.9% in determining the spatio-temporal gas-liquid interface position in the flow of free-rising bubbles in stagnant liquid. This result holds both for a free path and with the transducer emitting through a metallic plate or a Plexiglas pipe. PMID:24858961

  12. Coherent Lidar Design and Performance Verification

    NASA Technical Reports Server (NTRS)

    Frehlich, Rod

    1996-01-01

    This final report summarizes the investigative results from the 3 complete years of funding and corresponding publications are listed. The first year saw the verification of beam alignment for coherent Doppler lidar in space by using the surface return. The second year saw the analysis and computerized simulation of using heterodyne efficiency as an absolute measure of performance of coherent Doppler lidar. A new method was proposed to determine the estimation error for Doppler lidar wind measurements without the need for an independent wind measurement. Coherent Doppler lidar signal covariance, including wind shear and turbulence, was derived and calculated for typical atmospheric conditions. The effects of wind turbulence defined by Kolmogorov spatial statistics were investigated theoretically and with simulations. The third year saw the performance of coherent Doppler lidar in the weak signal regime determined by computer simulations using the best velocity estimators. Improved algorithms for extracting the performance of velocity estimators with wind turbulence included were also produced.

  13. Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah

    2017-04-01

    Inversion of surface wave dispersion curves, with its highly nonlinear nature, poses difficulties for traditional linear inverse methods: a strong dependence on the initial model, the possibility of trapping in local minima, and the need to evaluate partial derivatives. Modern global optimization methods such as the genetic algorithm (GA) and particle swarm optimization (PSO) overcome these difficulties in surface wave analysis. GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks or fish schools. The utility of these methods requires a plausible convergence rate, acceptable relative error and reasonable computation cost, all of which are important for modelling studies. Although PSO and GA appear similar, PSO has no crossover operation, and mutation in GA is a stochastic change of the genes within chromosomes. Unlike in GA, the particles in the PSO algorithm change their positions with velocities updated according to each particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate S-wave velocities and thicknesses of a layered earth model using the Rayleigh wave dispersion curve, compared the results with GA, and emphasize the advantages of the PSO algorithm for geophysical modelling studies: rapid convergence, low misfit error and low computation cost.
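    The PSO velocity-update rule described above (inertia plus attraction to the personal and global bests) can be sketched as follows, applied to a toy dispersion-like curve. The toy forward model and all names are our own stand-ins for a real Rayleigh-wave dispersion solver:

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_dispersion(m, freqs):
    """Toy stand-in for a dispersion curve of a layer over a half-space:
    phase velocity decays from the half-space Vs toward the layer Vs with
    frequency, at a rate set by a thickness-like parameter h."""
    vs_layer, vs_half, h = m
    return vs_layer + (vs_half - vs_layer) * np.exp(-freqs * h)

def pso(misfit, lo, hi, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: each particle's velocity
    combines inertia (w), attraction to its personal best (c1) and to
    the swarm's global best (c2); no crossover or mutation is used."""
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([misfit(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g
```

    As in the GA, the forward model is treated as a black box, so no initial model or partial derivatives are required; only the swarm bookkeeping differs.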

  14. Seismic modeling of multidimensional heterogeneity scales of Mallik gas hydrate reservoirs, Northwest Territories of Canada

    NASA Astrophysics Data System (ADS)

    Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd

    2009-07-01

    In hydrate-bearing sediments, the velocity and attenuation of compressional and shear waves depend primarily on the spatial distribution of hydrates in the pore space of the subsurface lithologies. Recent characterizations of gas hydrate accumulations based on seismic velocity and attenuation generally assume homogeneous sedimentary layers and neglect effects from large- and small-scale heterogeneities of hydrate-bearing sediments. We present an algorithm, based on stochastic medium theory, to construct heterogeneous multivariable models that mimic heterogeneities of hydrate-bearing sediments at the level of detail provided by borehole logging data. Using this algorithm, we model some key petrophysical properties of gas hydrates within heterogeneous sediments near the Mallik well site, Northwest Territories, Canada. The modeled density and P and S wave velocities, used in combination with a modified Biot-Gassmann theory, provide a first-order estimate of the in situ volume of gas hydrate near the Mallik 5L-38 borehole. Our results suggest a range of 528 to 768 × 10⁶ m³/km² of natural gas trapped within hydrates, nearly an order of magnitude lower than earlier estimates which did not include effects of small-scale heterogeneities. Further, the petrophysical models are combined with a 3-D finite difference modeling algorithm to study seismic attenuation due to scattering and leaky mode propagation. Simulations of a near-offset vertical seismic profile and cross-borehole numerical surveys demonstrate that attenuation of seismic energy may not be directly related to the intrinsic attenuation of hydrate-bearing sediments but, instead, may be largely attributed to scattering from small-scale heterogeneities and to highly attenuated leaky mode propagation of seismic waves through larger-scale heterogeneities in sediments.
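    The stochastic-medium construction can be illustrated in one dimension: white noise is filtered in the wavenumber domain so that the result has a prescribed autocorrelation. The exponential autocorrelation, correlation length, and perturbation level below are assumptions for illustration, not the Mallik model values.

```python
import numpy as np

rng = np.random.default_rng(4)

def stochastic_medium_1d(n, dz, a, sigma):
    """1-D stochastic velocity perturbation with an exponential
    autocorrelation (correlation length a, standard deviation sigma),
    generated by spectral filtering of white noise."""
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dz)
    # power spectrum of an exponential ACF in 1-D: P(k) = 2a / (1 + k^2 a^2)
    amp = np.sqrt(2.0 * a / (1.0 + (k * a) ** 2))
    noise = np.fft.rfft(rng.standard_normal(n))
    field = np.fft.irfft(noise * amp, n)
    # normalize to the requested perturbation level
    return sigma * field / field.std()

# Hypothetical hydrate-bearing interval: a 2 km profile sampled at 0.5 m,
# with 5% velocity heterogeneity and a 2 m correlation length.
dv = stochastic_medium_1d(n=4096, dz=0.5, a=2.0, sigma=0.05)
```

The 3-D multivariable case extends this idea by filtering correlated noise fields for density, Vp, and Vs with anisotropic correlation lengths.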

  15. Estimating the Instantaneous Drag-Wind Relationship for a Horizontally Homogeneous Canopy

    NASA Astrophysics Data System (ADS)

    Pan, Ying; Chamecki, Marcelo; Nepf, Heidi M.

    2016-07-01

    The mean drag-wind relationship is usually investigated assuming that field data are representative of spatially-averaged metrics of statistically stationary flow within and above a horizontally homogeneous canopy. Even if these conditions are satisfied, large-eddy simulation (LES) data suggest two major issues in the analysis of observational data. Firstly, the streamwise mean pressure gradient is usually neglected in the analysis of data from terrestrial canopies, which compromises the estimates of mean canopy drag and provides misleading information for the dependence of local mean drag coefficients on local velocity scales. Secondly, no standard approach has been proposed to investigate the instantaneous drag-wind relationship, a critical component of canopy representation in LES. Here, a practical approach is proposed to fit the streamwise mean pressure gradient using observed profiles of the mean vertical momentum flux within the canopy. Inclusion of the fitted mean pressure gradient enables reliable estimates of the mean drag-wind relationship. LES data show that a local mean drag coefficient that characterizes the relationship between mean canopy drag and the velocity scale associated with total kinetic energy can be used to identify the dependence of the local instantaneous drag coefficient on instantaneous velocity. Iterative approaches are proposed to fit specific models of velocity-dependent instantaneous drag coefficients that represent the effects of viscous drag and the reconfiguration of flexible canopy elements. LES data are used to verify the assumptions and algorithms employed by these new approaches. The relationship between mean canopy drag and mean velocity, which is needed in models based on the Reynolds-averaged Navier-Stokes equations, is parametrized to account for both the dependence on velocity and the contribution from velocity variances. 
Finally, velocity-dependent drag coefficients lead to significant variations of the calculated displacement height and roughness length with wind speed.

  16. Anisotropic S-wave velocity structure from joint inversion of surface wave group velocity dispersion: A case study from India

    NASA Astrophysics Data System (ADS)

    Mitra, S.; Dey, S.; Siddartha, G.; Bhattacharya, S.

    2016-12-01

    We estimate 1-dimensional path-average fundamental-mode group velocity dispersion curves from regional Rayleigh and Love waves sampling the Indian subcontinent. The path-average measurements are combined through a tomographic inversion to obtain 2-dimensional group velocity variation maps between periods of 10 and 80 s. The region of study is parametrised as triangular grids with 1° sides for the tomographic inversion. Rayleigh and Love wave dispersion curves from each node point are subsequently extracted and jointly inverted to obtain a radially anisotropic shear wave velocity model through global optimisation using a Genetic Algorithm. The model space is parametrised using three crustal layers and four mantle layers over a half-space with varying VpH, VsV and VsH. The anisotropic parameter (η) is calculated from empirical relations and the densities of the layers are taken from PREM. The model misfit is calculated as an error-weighted sum over the dispersion curves. The 1-dimensional anisotropic shear wave velocity models at each node point are combined using linear interpolation to obtain the 3-dimensional structure beneath the region. Synthetic tests are performed to estimate the resolution of the tomographic maps, which will be presented with our results. We plan to extend this to a larger dataset in the near future to obtain high-resolution anisotropic shear wave velocity structure beneath India, the Himalaya and Tibet.

  17. Estimating cosmic velocity fields from density fields and tidal tensors

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-10-01

    In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory, based upon an estimate of the linear component of the non-linear density field, significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density but also, and more dramatically, in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter free; it is independent of statistical-geometrical optimization; and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h⁻¹ Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field are extremely well recovered, showing good agreement with the true one from N-body simulations. The typical errors of about 10 km s⁻¹ (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h⁻¹ Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior to the lognormal model in the low-density regime.
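    The linear-theory baseline that this work improves upon follows from the linearized continuity equation, v(k) = i·faH·δ(k)·k/k². A minimal FFT implementation, checked against a single plane-wave density mode, might look like the sketch below; the grid size, box size, and faH value are arbitrary test numbers, not the paper's setup.

```python
import numpy as np

def linear_velocity(delta, boxsize, faH):
    """Linear-theory peculiar velocity from the density contrast:
    v(k) = i * faH * delta(k) * k / k^2 (linearized continuity equation)."""
    n = delta.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                  # the k = 0 mode carries no flow
    dk = np.fft.fftn(delta)
    vx = np.fft.ifftn(1j * faH * kx / k2 * dk).real
    vy = np.fft.ifftn(1j * faH * ky / k2 * dk).real
    vz = np.fft.ifftn(1j * faH * kz / k2 * dk).real
    return vx, vy, vz

# Check against a single plane wave delta = A cos(k0 x), for which
# vx = -(faH * A / k0) * sin(k0 x) and vy = vz = 0.
n, L, faH = 32, 100.0, 100.0           # grid, box length, f*a*H (test values)
x = np.arange(n) * L / n
k0 = 2.0 * np.pi / L
A = 0.1
delta = A * np.cos(k0 * x)[:, None, None] * np.ones((n, n, n))
vx, vy, vz = linear_velocity(delta, L, faH)
```

The paper's second-order Lagrangian scheme replaces this purely local-in-k relation with tidal-tensor terms evaluated on an estimated linear density component.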

  18. Shear wave prediction using committee fuzzy model constrained by lithofacies, Zagros basin, SW Iran

    NASA Astrophysics Data System (ADS)

    Shiroodi, Sadjad Kazem; Ghafoori, Mohammad; Ansari, Hamid Reza; Lashkaripour, Golamreza; Ghanadian, Mostafa

    2017-02-01

    The main purpose of this study is to introduce the geological controlling factors in improving an intelligence-based model to estimate shear wave velocity from seismic attributes. The proposed method includes three main steps in the framework of geological events in a complex sedimentary succession located in the Persian Gulf. First, the best attributes were selected from extracted seismic data. Second, these attributes were transformed into shear wave velocity using fuzzy inference systems (FIS) such as Sugeno's fuzzy inference (SFIS), adaptive neuro-fuzzy inference (ANFIS) and optimized fuzzy inference (OFIS). Finally, a committee fuzzy machine (CFM) based on bat-inspired algorithm (BA) optimization was applied to combine the previous predictions into an enhanced solution. In order to show the geological effect on improving the prediction, the main classes of predominant lithofacies in the reservoir of interest, including shale, sand, and carbonate, were selected, and the proposed algorithm was performed with and without the lithofacies constraint. The results showed good agreement between real and predicted shear wave velocity for the lithofacies-constrained model compared to the model without lithofacies, especially in sand and carbonate.

  19. Neural network fusion capabilities for efficient implementation of tracking algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Amoozegar, Farid

    1996-05-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision-making is one of the major capabilities of trained multilayer neural networks that has been recognized in recent times. While development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. In this paper we describe the capabilities and functionality of neural network algorithms for data fusion and implementation of nonlinear tracking filters. For a discussion of details, and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. Such an approach results in an overall nonlinear tracking filter which has several advantages over the popular efforts at designing nonlinear estimation algorithms for tracking applications, the principal one being the reduction of mathematical and computational complexities. A system architecture that efficiently integrates the processing capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described in this paper.
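    The Kalman filter that the neural architecture is integrated with can be illustrated by a generic constant-velocity tracker; this is a textbook baseline sketched under assumed noise levels, not the paper's network or filter design.

```python
import numpy as np

def kalman_cv_track(zs, dt, q=0.01, r=0.25):
    """Constant-velocity Kalman filter: estimates [position, velocity]
    from noisy position measurements zs."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # position is measured
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])            # process noise
    R = np.array([[r]])                              # measurement noise
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2) * 10.0
    estimates = []
    for z in zs:
        x = F @ x                                    # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x                  # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        x = x + K @ y                                # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.ravel().copy())
    return np.array(estimates)

# Target moving at a constant 3 m/s, position measured with 0.5 m noise.
rng = np.random.default_rng(1)
dt, v_true = 0.1, 3.0
t = np.arange(200) * dt
zs = v_true * t + rng.normal(0.0, 0.5, t.size)
est = kalman_cv_track(zs, dt)
```

In the paper's architecture, the neural network supplies maneuver information extracted from multi-sensor features, which a filter of this kind alone cannot infer from position measurements.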

  20. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm input data are the online-arriving concentrations of the released substance registered by the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probabilistic distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the contamination source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of release (ts) and its duration (td). The online-arriving new concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
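    The Sequential ABC algorithm itself is not reproduced here, but the core ABC idea — accept parameter draws whose simulated observations lie close to the measured ones — can be sketched with a toy one-dimensional plume forward model; all numbers are hypothetical, not OLAD data or SCIPUFF output.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(q, x0, sensors, sigma=1.5):
    """Toy Gaussian-plume forward model: concentration at each sensor
    for release rate q and source position x0 (stand-in for SCIPUFF)."""
    return q * np.exp(-(sensors - x0) ** 2 / (2.0 * sigma ** 2))

sensors = np.linspace(-5.0, 5.0, 11)
q_true, x0_true = 5.0, 2.0
observed = forward(q_true, x0_true, sensors)

# ABC rejection: draw from the priors, simulate, and keep the draws whose
# simulated concentrations are closest to the observations.
n_draws = 20000
q_s = rng.uniform(0.0, 10.0, n_draws)
x0_s = rng.uniform(-5.0, 5.0, n_draws)
dist = np.array([np.linalg.norm(forward(q, x0, sensors) - observed)
                 for q, x0 in zip(q_s, x0_s)])
eps = np.quantile(dist, 0.01)        # keep the best 1% of draws
accept = dist <= eps
q_post, x0_post = q_s[accept], x0_s[accept]
```

The sequential variant used in the paper replaces this one-shot rejection with a sequence of shrinking tolerances, re-weighting the accepted population as each new concentration arrives.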

  1. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.

  2. Determination of elastic moduli from measured acoustic velocities.

    PubMed

    Brown, J Michael

    2018-06-01

    Methods are evaluated for solution of the inverse problem associated with determination of elastic moduli for crystals of arbitrary symmetry from elastic wave velocities measured in many crystallographic directions. A package of MATLAB functions provides a robust and flexible environment for analysis of ultrasonic, Brillouin, or Impulsive Stimulated Light Scattering datasets. Three inverse algorithms are considered: the gradient-based methods of Levenberg-Marquardt and Backus-Gilbert, and a non-gradient-based (Nelder-Mead) simplex approach. Several data types are considered: body wave velocities alone, surface wave velocities plus a side constraint on X-ray-diffraction-based axis compressibilities, or joint body and surface wave velocities. The numerical algorithms are validated through comparisons with prior published results and through analysis of synthetic datasets. Although all approaches succeed in finding low-misfit solutions, the Levenberg-Marquardt method consistently demonstrates effectiveness and computational efficiency. However, linearized gradient-based methods, when applied to a strongly non-linear problem, may not adequately converge to the global minimum. The simplex method, while slower, is less susceptible to being trapped in local misfit minima. A "multi-start" strategy (initiating searches from more than one initial guess) provides better assurance that global minima have been located. Numerical estimates of parameter uncertainties based on Monte Carlo simulations are compared to formal uncertainties based on covariance calculations. Copyright © 2018 Elsevier B.V. All rights reserved.
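    The value of the "multi-start" strategy is easy to demonstrate on any misfit with competing local minima; the sketch below uses SciPy's Nelder-Mead simplex on a toy one-dimensional double-well misfit, not the elastic-moduli problem itself.

```python
import numpy as np
from scipy.optimize import minimize

def misfit(m):
    """Toy 1-D misfit with two local minima; the global one is near x = -1.04."""
    x = m[0]
    return (x ** 2 - 1.0) ** 2 + 0.3 * x

# A single simplex search started at x = 0.9 settles into the local
# minimum near x = +0.96 and never sees the global one.
single = minimize(misfit, [0.9], method="Nelder-Mead")

# Multi-start: launch the same search from several initial guesses and
# keep the lowest-misfit run.
starts = [-2.0, -0.5, 0.5, 2.0]
runs = [minimize(misfit, [s], method="Nelder-Mead") for s in starts]
best = min(runs, key=lambda r: r.fun)
```

For the real moduli inversion the search space is higher-dimensional, but the principle is identical: several cheap restarts buy insurance against a linearized or single-start search converging to the wrong basin.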

  3. An Integrated Processing Strategy for Mountain Glacier Motion Monitoring Based on SAR Images

    NASA Astrophysics Data System (ADS)

    Ruan, Z.; Yan, S.; Liu, G.; LV, M.

    2017-12-01

    Mountain glacier dynamic variables are important parameters in studies of environmental and climate change in High Mountain Asia. Owing to the increasing number of abnormal glacier-related hazard events, research on monitoring glacier movement has attracted growing interest in recent years. Glacier velocities are sensitive and change rapidly under the complex conditions of high mountain regions, which implies that analysis of glacier dynamic changes requires comprehensive and frequent observations with relatively high accuracy. Synthetic aperture radar (SAR) has been successfully exploited to detect glacier motion in a number of previous studies, usually with pixel-tracking and interferometry methods. However, the traditional algorithms applied to mountain glacier regions are constrained by the complex terrain and diverse types of glacial motion. Interferometry techniques are prone to fail on mountain glaciers because of their narrow extent and the steep terrain, while the pixel-tracking algorithm, which is more robust in high mountain areas, is subject to accuracy loss. In order to derive glacier velocities continuously and efficiently, we propose a modified strategy to exploit SAR data for mountain glaciers. In our approach, we integrate a set of algorithms for compensating non-glacial-motion-related signals that exist in the offset values retrieved by sub-pixel cross-correlation of SAR image pairs. We exploit a modified elastic deformation model to remove the offsets associated with orbit and sensor attitude, and for the topographic residual offset we utilize a set of operations including a DEM-assisted compensation algorithm and a wavelet-based algorithm. In the last step of the flow, an integrated algorithm combining phase and intensity information of SAR images is used to improve regional motion results that failed in cross-correlation-related processing.
The proposed strategy is applied to the West Kunlun Mountain and Muztagh Ata regions in western China using ALOS/PALSAR data. The results show that the strategy can effectively improve the accuracy of velocity estimation, reducing the mean and standard deviation values from 0.32 m and 0.4 m to 0.16 m. The approach proves highly appropriate for monitoring glacier motion over a widely varying range of ice velocities with relatively high accuracy.
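    The sub-pixel cross-correlation step at the heart of offset tracking can be sketched as FFT cross-correlation followed by a parabolic fit around the correlation peak. This is a common refinement scheme; the specific estimator used by the authors is not stated, and the Gaussian "feature" below is a synthetic test image, not SAR data.

```python
import numpy as np

def subpixel_offset(ref, search):
    """Shift of `search` relative to `ref`: FFT cross-correlation, then a
    parabolic fit around the integer peak for sub-pixel precision."""
    cc = np.fft.ifft2(np.fft.fft2(search) * np.conj(np.fft.fft2(ref))).real
    ny, nx = cc.shape
    iy, ix = np.unravel_index(np.argmax(cc), cc.shape)

    def parabolic(c_minus, c_0, c_plus):
        # vertex of the parabola through three samples around the peak
        denom = c_minus - 2.0 * c_0 + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    fy = parabolic(cc[(iy - 1) % ny, ix], cc[iy, ix], cc[(iy + 1) % ny, ix])
    fx = parabolic(cc[iy, (ix - 1) % nx], cc[iy, ix], cc[iy, (ix + 1) % nx])
    # wrap integer offsets into signed values
    off_y = iy + fy if iy <= ny // 2 else iy + fy - ny
    off_x = ix + fx if ix <= nx // 2 else ix + fx - nx
    return off_y, off_x

# Synthetic pair: a Gaussian feature shifted by (+1.3, -2.6) pixels.
y, x = np.mgrid[0:64, 0:64]
def blob(cy, cx):
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * 3.0 ** 2))
ref = blob(32.0, 32.0)
search = blob(33.3, 29.4)
dy, dx = subpixel_offset(ref, search)
```

In the proposed strategy, offsets estimated this way still contain orbit, attitude, and topographic contributions, which the subsequent compensation algorithms remove.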

  4. Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System

    NASA Technical Reports Server (NTRS)

    Karlgaard, Chris; Schoenenberger, Mark

    2017-01-01

    This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
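    At its simplest, density estimation from sensed deceleration inverts the axial aerodynamic force equation. The numbers below are hypothetical entry-capsule values, not MSL data, and the actual Kalman-Schmidt filter additionally estimates pressure and winds and uses the full aerodynamic model.

```python
import numpy as np

def density_from_drag(mass, accel_axial, c_a, area, v_rel):
    """Invert the axial aerodynamic force equation for freestream density:
    m * a = 0.5 * rho * V^2 * C_A * A  ->  rho = 2 m a / (C_A * A * V^2)."""
    return 2.0 * mass * accel_axial / (c_a * area * v_rel ** 2)

# Hypothetical entry-capsule numbers for a round-trip check.
mass, c_a, area = 2400.0, 1.4, 15.9     # kg, axial coefficient, m^2
v_rel = 4000.0                          # m/s, velocity relative to atmosphere
rho_true = 3.0e-4                       # kg/m^3
accel = 0.5 * rho_true * v_rel ** 2 * c_a * area / mass   # sensed deceleration
rho_est = density_from_drag(mass, accel, c_a, area, v_rel)
```

The relative velocity itself comes from the navigation state, which is why the estimator needs the full position/velocity/attitude solution rather than accelerations alone.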

  5. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach

    PubMed Central

    Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz

    2017-01-01

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857

  6. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach.

    PubMed

    Girrbach, Fabian; Hol, Jeroen D; Bellusci, Giovanni; Diehl, Moritz

    2017-05-19

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem.

  7. On Target to Mars

    NASA Technical Reports Server (NTRS)

    Cheng, Yang

    2007-01-01

    This viewgraph presentation reviews the use of the Descent Image Motion Estimation System (DIMES) for the descent of a spacecraft onto the surface of Mars. In the past this system was used to assist in the landing of the MER spacecraft. The overall algorithm is reviewed, and views of the hardware and views from Spirit's descent are shown. On Spirit, had DIMES not been used, the impact velocity would have been at the limit of the airbag capability and Spirit may have bounced into Endurance Crater. By using DIMES, the velocity was reduced to well within the bounds of the airbag performance and Spirit arrived safely at Mars. Views from Opportunity's descent are also shown. The system to avoid and detect hazards is reviewed next. Landmark-Based Spacecraft Pinpoint Landing is also reviewed. A cartoon version of a pinpoint landing and the various points is shown. Mars's surface has a large number of craters, which are ideal landmarks. According to the literature on Martian cratering, 60% of the Martian surface is heavily cratered. The ideal crater landmarks for pinpoint landing are between 50 and 1000 meters in diameter, and the ideal altitude for position estimation should be greater than 2 km above the ground. The algorithms used to detect and match craters are reviewed.

  8. Sensor-less force-reflecting macro-micro telemanipulation systems by piezoelectric actuators.

    PubMed

    Amini, H; Farzaneh, B; Azimifar, F; Sarhan, A A D

    2016-09-01

    This paper establishes a novel control strategy for a nonlinear bilateral macro-micro teleoperation system with time delay. Besides position and velocity signals, force signals are additionally utilized in the control scheme. This modification significantly improves the poor transparency during contact with the environment. To eliminate external force measurement, a force estimation algorithm is proposed for the master and slave robots. The closed-loop stability of the nonlinear macro-micro teleoperation system with the proposed control scheme is investigated employing Lyapunov theory. The experimental results verify the efficiency of the new control scheme in free motion and during collision between the slave robot and the environment, as well as the efficiency of the force estimation algorithm. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  9. A Two-Radius Circular Array Method: Extracting Independent Information on Phase Velocities of Love Waves From Microtremor Records From a Simple Seismic Array

    NASA Astrophysics Data System (ADS)

    Tada, T.; Cho, I.; Shinozaki, Y.

    2005-12-01

    We have invented a Two-Radius (TR) circular array method of microtremor exploration, an algorithm that enables estimation of phase velocities of Love waves by analyzing horizontal-component records of microtremors obtained with an array of seismic sensors placed around circumferences of two different radii. The data recording may be done either simultaneously around the two circles or in two separate sessions with sensors distributed around each circle. Both Rayleigh and Love waves are present in the horizontal components of microtremors, but in the data processing of our TR method, all information on the Rayleigh waves ends up cancelled out, and information on the Love waves alone is left to be analyzed. Also, unlike the popularly used frequency-wavenumber spectral (F-K) method, our TR method does not resolve individual plane-wave components arriving from different directions and analyze their "vector" phase velocities, but instead directly evaluates their "scalar" phase velocities --- phase velocities that contain no information on the arrival direction of waves --- through a mathematical procedure which involves azimuthal averaging. The latter feature leads us to expect that, with our TR method, it is possible to conduct phase velocity analysis with smaller numbers of sensors, with higher stability, and up to longer-wavelength ranges than with the F-K method. With a view to investigating the capabilities and limitations of our TR method in practical implementation on real data, we have deployed circular seismic arrays of different sizes at a test site in Japan where the underground structure is well documented through geophysical exploration.
Ten seismic sensors were placed equidistantly around two circumferences, five around each circle, with varying combinations of radii ranging from several meters to several tens of meters, and simultaneous records of microtremors around circles of two different radii were analyzed with our TR method to produce estimates for the phase velocities of Love waves. The estimates were then checked against "model" phase velocities derived from theoretical calculations. We have also conducted a check of the estimated spectral ratios against the "model" spectral ratios, where by "spectral ratio" we mean an intermediary quantity that is calculated from observed records prior to the estimation of the phase velocity in the data analysis procedure of our TR method. In most cases, the estimated phase velocities coincided well with the model phase velocities within a wavelength range extending roughly from 3r to 6r (r: array radius). It was found that, outside the upper and lower resolution limits of the TR method, the discrepancy between the estimated and model phase velocities, as well as the discrepancy between the estimated and model spectral ratios, was accounted for satisfactorily by theoretical consideration of three factors: the presence of higher surface-wave modes, directional aliasing effects related to the finite number of sensors in the seismic array, and the presence of incoherent noise.

  10. Consistent and efficient processing of ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements regardless of the ADCP used to collect the data.
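    One computation such a manufacturer-independent program must standardize is the measured-region discharge itself; for a moving-boat ADCP this is commonly done with the cross-product method, sketched here on a synthetic uniform-flow transect (edge discharge, unmeasured top/bottom zones, and invalid-data handling are omitted).

```python
import numpy as np

def cross_product_discharge(u_w, v_w, u_b, v_b, cell_height, dt):
    """Measured-region discharge from a moving-boat ADCP transect via the
    cross-product method: dQ = (u_w * v_b - v_w * u_b) * dz * dt, summed
    over depth cells (rows) and ensembles (columns)."""
    dq = (u_w * v_b - v_w * u_b) * cell_height * dt
    return dq.sum()

# Synthetic transect: the boat crosses northward at 1 m/s for 100 s while
# a uniform 0.5 m/s eastward flow fills 20 depth cells of 0.25 m each.
n_cells, n_ens = 20, 100
u_w = np.full((n_cells, n_ens), 0.5)   # east water velocity, m/s
v_w = np.zeros((n_cells, n_ens))       # north water velocity, m/s
u_b = np.zeros(n_ens)                  # east boat velocity, m/s
v_b = np.full(n_ens, 1.0)              # north boat velocity, m/s
Q = cross_product_discharge(u_w, v_w, u_b, v_b, cell_height=0.25, dt=1.0)
```

For this 100 m wide, 5 m deep measured region with 0.5 m/s flow, the sum reproduces the expected 250 m³/s; the cross product makes the result independent of the boat's path across the section.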

  11. The design of digital-adaptive controllers for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Broussard, J. R.; Berry, P. W.

    1976-01-01

    Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.
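    A non-zero-set-point digital regulator of the kind described can be sketched with a steady-state discrete LQR gain obtained by iterating the Riccati recursion; the double-integrator plant and weights below are illustrative stand-ins, not the VALT helicopter model.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Steady-state discrete LQR gain via the Riccati recursion."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Double-integrator plant (position, velocity), sampled at dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# Non-zero set point: drive the state to x_ss = [1, 0] using
# u = u_ss - K (x - x_ss); this equilibrium needs u_ss = 0.
x = np.array([0.0, 0.0])
x_ss, u_ss = np.array([1.0, 0.0]), 0.0
for _ in range(400):
    u = u_ss - (K @ (x - x_ss))[0]
    x = A @ x + B.ravel() * u
```

A velocity-command outer loop of the kind used for the VALT aircraft would supply a time-varying set point (x_ss, u_ss) to a regulator of this structure.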

  12. Evaluating the Real-time and Offline Performance of the Virtual Seismologist Earthquake Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Cua, G.; Fischer, M.; Heaton, T.; Wiemer, S.

    2009-04-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to regional, network-based earthquake early warning (EEW). Bayes' theorem as applied in the VS algorithm states that the most probable source estimate at any given time is a combination of contributions from relatively static prior information that does not change over the timescale of earthquake rupture and a likelihood function that evolves with time to take into account incoming pick and amplitude observations from the ongoing earthquake. Potentially useful types of prior information include network topology or station health status, regional hazard maps, earthquake forecasts, and the Gutenberg-Richter magnitude-frequency relationship. The VS codes provide magnitude and location estimates once picks are available at 4 stations; these source estimates are subsequently updated each second. The algorithm predicts the geographical distribution of peak ground acceleration and velocity using the estimated magnitude and location and appropriate ground motion prediction equations; the peak ground motion estimates are also updated each second. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS method is one of three EEW algorithms whose real-time performance is being evaluated and tested by the California Integrated Seismic Network (CISN) EEW project. A crucial component of operational EEW algorithms is the ability to distinguish between noise and earthquake-related signals in real time. We discuss various empirical approaches that allow the VS algorithm to operate in the presence of noise. Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008. On average, the VS algorithm provides initial magnitude, location, origin time, and ground motion distribution estimates within 17 seconds of the earthquake origin time.
These initial estimate times are dominated by the time required for 4 acceptable picks to become available, and thus are heavily influenced by the station density in a given region; they also include the effects of telemetry delay, which ranges between 6 and 15 seconds at the SCSN, and processing time (~1 second). Other relevant performance statistics include: 95% of initial real-time location estimates are within 20 km of the actual epicenter; 97% of initial real-time magnitude estimates are within one magnitude unit of the network magnitude. Extension of real-time VS operations to networks in Northern California is an ongoing effort. In Switzerland, the VS codes have been run on offline waveform data from over 125 earthquakes recorded by the Swiss Digital Seismic Network (SDSN) and the Swiss Strong Motion Network (SSMS). We discuss the performance of the VS algorithm on these datasets in terms of magnitude, location, and ground motion estimation.
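The prior-times-likelihood update at the heart of the VS approach can be sketched on a one-dimensional magnitude grid. This is an illustrative toy, not the VS implementation: the Gaussian amplitude likelihood, its width, and the observed-magnitude proxy are assumptions.

```python
import numpy as np

def gr_prior(mags, b=1.0):
    """Gutenberg-Richter prior: P(M) proportional to 10^(-b*M), grid-normalized."""
    p = 10.0 ** (-b * mags)
    return p / p.sum()

def amplitude_likelihood(mags, observed_mag, sigma=0.4):
    """Toy likelihood: a magnitude proxy observed with Gaussian error."""
    return np.exp(-0.5 * ((mags - observed_mag) / sigma) ** 2)

def posterior(mags, observed_mag, b=1.0, sigma=0.4):
    """Bayes' theorem: posterior proportional to prior times likelihood."""
    post = gr_prior(mags, b) * amplitude_likelihood(mags, observed_mag, sigma)
    return post / post.sum()

mags = np.arange(2.0, 8.01, 0.01)
post = posterior(mags, observed_mag=5.5)
map_mag = mags[np.argmax(post)]
# The Gutenberg-Richter prior pulls the posterior peak below the raw
# observation, since smaller earthquakes are a priori more frequent.
```

As more picks and amplitudes arrive, the likelihood sharpens and progressively dominates the static prior.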

  13. A new contrast-assisted method in microcirculation volumetric flow assessment

    NASA Astrophysics Data System (ADS)

    Lu, Sheng-Yi; Chen, Yung-Sheng; Yeh, Chih-Kuang

    2007-03-01

Microcirculation volumetric flow rate is a significant index in the diagnosis and treatment of diseases such as diabetes and cancer. In this study, we propose an integrated algorithm to assess microcirculation volumetric flow rate, including estimation of the blood-perfused area and the corresponding flow velocity maps, based on a high-frequency destruction/contrast-replenishment imaging technique. The perfused area indicates the blood flow regions, including capillaries, arterioles, and venules. Because the echo variance changes between the two images acquired before and after destruction of ultrasonic contrast agents (UCAs), the perfused area can be estimated by a correlation-based approach. The flow velocity distribution within the perfused area can be estimated from the refilling time-intensity curves (TICs) after UCA destruction. Most studies fit the TICs with the rising exponential model proposed by Wei (1998). Nevertheless, we found that the TIC profile closely resembles a sigmoid function in both simulations and in vitro experiments. The good fitting correlation indicates that the sigmoid model more faithfully describes the destruction/contrast-replenishment phenomenon. We show that the saddle point of the sigmoid model is proportional to blood flow velocity. A strong linear relationship (R = 0.97) between the actual flow velocities (0.4-2.1 mm/s) and the estimated saddle constants was found in M-mode and B-mode flow phantom experiments. Potential applications of this technique include high-resolution volumetric flow rate assessment in small-animal tumors and the evaluation of superficial vasculature in clinical studies.
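The sigmoid TIC model and its saddle (inflection) point can be illustrated with a small fit. The coarse grid search, parameter ranges, and synthetic curve below are assumptions for the sketch, not the authors' estimator:

```python
import numpy as np

def sigmoid(t, A, k, t0):
    """Sigmoid TIC model I(t) = A / (1 + exp(-k*(t - t0)));
    the inflection ("saddle") point sits at t = t0."""
    return A / (1.0 + np.exp(-k * (t - t0)))

def fit_sigmoid(t, intensity):
    """Coarse grid search over (k, t0); amplitude A taken from the plateau."""
    A = intensity.max()
    best = (np.inf, None, None)
    for k in np.linspace(0.1, 5.0, 50):
        for t0 in np.linspace(t.min(), t.max(), 200):
            sse = np.sum((sigmoid(t, A, k, t0) - intensity) ** 2)
            if sse < best[0]:
                best = (sse, k, t0)
    return A, best[1], best[2]

# Synthetic replenishment curve with additive noise
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 100)
noisy = sigmoid(t, A=1.0, k=1.5, t0=4.0) + rng.normal(0, 0.01, t.size)
A_hat, k_hat, t0_hat = fit_sigmoid(t, noisy)
# t0_hat locates the saddle point, the quantity the abstract relates
# linearly to flow velocity.
```

In practice a nonlinear least-squares routine would replace the grid search; the sketch only shows how the saddle parameter is extracted from a refilling curve.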

  14. Estimating crustal heterogeneity from double-difference tomography

    USGS Publications Warehouse

    Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.

    2006-01-01

Seismic velocity parameters in limited but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. Monteiller et al. (2005) devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using Monteiller et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf improves the double-difference tomographic result when using picking-difference data. We complete our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
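The benefit of a hyperbolic secant pdf over a Gaussian pdf for outlier-prone picking differences can be seen in a one-parameter toy inversion: the Gaussian misfit grows quadratically with residual size, while the -log sech misfit grows only linearly, so gross outliers are downweighted. The data, grids, and one-dimensional setting below are invented for illustration and are not the paper's inversion code:

```python
import numpy as np

def gaussian_misfit(r):
    """Negative log-likelihood of a unit Gaussian pdf (quadratic in r)."""
    return 0.5 * r ** 2

def sech_misfit(r):
    """-log(sech r) = log(cosh r), written in a form stable for large |r|."""
    return np.logaddexp(r, -r) - np.log(2.0)

def location_estimate(data, misfit, grid):
    """1-D 'inversion': pick the model value minimizing the total misfit."""
    costs = [np.sum(misfit(data - m)) for m in grid]
    return grid[int(np.argmin(costs))]

data = np.array([0.0, 0.1, -0.1, 0.05, -0.05, 8.0])  # one gross outlier
grid = np.linspace(-1, 3, 4001)
m_gauss = location_estimate(data, gaussian_misfit, grid)
m_sech = location_estimate(data, sech_misfit, grid)
# The Gaussian estimate is the sample mean (~1.33), dragged toward the
# outlier; the sech-based estimate stays near the cluster of inliers.
```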

  15. An efficient algorithm for double-difference tomography and location in heterogeneous media, with an application to the Kilauea volcano

    USGS Publications Warehouse

    Monteiller, V.; Got, J.-L.; Virieux, J.; Okubo, P.

    2005-01-01

Improving our understanding of crustal processes requires a better knowledge of the geometry and position of geological bodies. In this study we have designed a method based upon double-difference relocation and tomography to image, as accurately as possible, a heterogeneous medium containing seismogenic objects. Our approach consisted not only of incorporating double differences in tomography but also of partly revisiting tomographic schemes to choose accurate and stable numerical strategies adapted to the use of cross-spectral time delays. We used a finite-difference solution to the eikonal equation for travel-time computation and a Tarantola-Valette approach for both the classical and double-difference three-dimensional tomographic inversions to find accurate earthquake locations and seismic velocity estimates. We efficiently estimated the square root of the inverse model covariance matrix in the case of a Gaussian correlation function, which allows correlation-length and a priori model variance criteria to be used to determine the optimal solution. Double-difference relocation of similar earthquakes is performed in the optimal velocity model, making absolute and relative locations less biased by the velocity model. Double-difference tomography is achieved by using high-accuracy time-delay measurements. These algorithms have been applied to earthquake data recorded in the vicinity of the Kilauea and Mauna Loa volcanoes to image the volcanic structures. Stable and detailed velocity models are obtained: the regional tomography unambiguously highlights the structure of the island of Hawaii, and the double-difference tomography shows a detailed image of the southern Kilauea caldera-upper east rift zone magmatic complex. Copyright 2005 by the American Geophysical Union.
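A minimal sketch of the covariance construction referenced above, under assumed forms: a model covariance built from a Gaussian correlation function with length L, and its symmetric square root via eigendecomposition. This is not the paper's code; it only shows the objects involved.

```python
import numpy as np

def gaussian_covariance(x, sigma, L):
    """C_ij = sigma^2 * exp(-d_ij^2 / (2 L^2)) on a 1-D grid of nodes x."""
    d = x[:, None] - x[None, :]
    return sigma ** 2 * np.exp(-d ** 2 / (2.0 * L ** 2))

def symmetric_sqrt(C):
    """Symmetric matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    w = np.clip(w, 0.0, None)          # guard against tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.T

x = np.linspace(0.0, 10.0, 50)         # 1-D grid of model nodes (illustrative)
C = gaussian_covariance(x, sigma=0.1, L=2.0)
S = symmetric_sqrt(C)
# S @ S reproduces C; the square root (or its inverse) is what enters
# the regularized Tarantola-Valette-style inversion.
```

The correlation length L and prior variance sigma^2 are exactly the tuning criteria the abstract mentions for selecting the optimal solution.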

  16. Numerical simulation and experimental validation of Lamb wave propagation behavior in composite plates

    NASA Astrophysics Data System (ADS)

    Kim, Sungwon; Uprety, Bibhisha; Mathews, V. John; Adams, Daniel O.

    2015-03-01

    Structural Health Monitoring (SHM) based on Acoustic Emission (AE) is dependent on both the sensors to detect an impact event as well as an algorithm to determine the impact location. The propagation of Lamb waves produced by an impact event in thin composite structures is affected by several unique aspects including material anisotropy, ply orientations, and geometric discontinuities within the structure. The development of accurate numerical models of Lamb wave propagation has important benefits towards the development of AE-based SHM systems for impact location estimation. Currently, many impact location algorithms utilize the time of arrival or velocities of Lamb waves. Therefore the numerical prediction of characteristic wave velocities is of great interest. Additionally, the propagation of the initial symmetric (S0) and asymmetric (A0) wave modes is important, as these wave modes are used for time of arrival estimation. In this investigation, finite element analyses were performed to investigate aspects of Lamb wave propagation in composite plates with active signal excitation. A comparative evaluation of two three-dimensional modeling approaches was performed, with emphasis placed on the propagation and velocity of both the S0 and A0 wave modes. Results from numerical simulations are compared to experimental results obtained from active AE testing. Of particular interest is the directional dependence of Lamb waves in quasi-isotropic carbon/epoxy composite plates. Numerical and experimental results suggest that although a quasi-isotropic composite plate may have the same effective elastic modulus in all in-plane directions, the Lamb wave velocity may have some directional dependence. Further numerical analyses were performed to investigate Lamb wave propagation associated with circular cutouts in composite plates.
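A hedged sketch of the time-of-arrival impact location idea the abstract references: given arrival times at several sensors and a wave velocity, a grid search minimizes arrival-time residuals (the unknown impact time is eliminated by mean-differencing). It assumes an isotropic plate velocity; the directional dependence noted above would require an angle-dependent velocity. Sensor layout and values are invented.

```python
import numpy as np

def arrival_times(src, sensors, c, t0=0.0):
    """Arrival time at each sensor for a source at `src` and velocity c."""
    return t0 + np.linalg.norm(sensors - src, axis=1) / c

def locate_impact(sensors, times, c, grid_pts):
    """Grid search minimizing mean-differenced arrival-time residuals."""
    best, best_xy = np.inf, None
    for xy in grid_pts:
        t = np.linalg.norm(sensors - xy, axis=1) / c
        r = (times - times.mean()) - (t - t.mean())   # removes unknown t0
        cost = np.sum(r ** 2)
        if cost < best:
            best, best_xy = cost, xy
    return best_xy

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m
c = 5.0                                     # assumed S0 velocity, m/ms
true_src = np.array([0.3, 0.7])
times = arrival_times(true_src, sensors, c, t0=0.123)
xs = np.linspace(0, 1, 101)
grid_pts = np.array([[x, y] for x in xs for y in xs])
est = locate_impact(sensors, times, c, grid_pts)
```

This is why the numerically predicted S0/A0 velocities matter: any bias in c maps directly into a location bias.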

  17. Time-Resolved Particle Image Velocimetry Measurements with Wall Shear Stress and Uncertainty Quantification for the FDA Nozzle Model.

    PubMed

    Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2016-03-01

We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical-device-mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data were processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which are typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. 
This study provides advancements in the PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.
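The Monte Carlo propagation of velocity uncertainty into wall shear stress can be sketched in a few lines. This simplified toy is not the study's model: it uses a one-sided difference tau = mu * u(y1) / y1 (with u = 0 at the wall), and the viscosity, wall offset, and uncertainty values are assumed.

```python
import numpy as np

rng = np.random.default_rng(42)
mu = 3.5e-3        # dynamic viscosity, Pa*s (blood-analog value, assumed)
y1 = 1.0e-4        # first velocity measurement point off the wall, m
u1 = 0.05          # measured near-wall velocity at y1, m/s
sigma_u = 0.002    # PIV velocity uncertainty, m/s (assumed)

# Draw velocity samples from the measurement distribution and push each
# through the wall-shear-stress estimator.
samples = rng.normal(u1, sigma_u, 100_000)
tau_samples = mu * samples / y1
tau_mean = tau_samples.mean()
tau_std = tau_samples.std()
# Because the estimator is linear in u here, the Monte Carlo spread
# should match the analytic propagation sigma_tau = mu * sigma_u / y1.
```

The same sampling scheme extends to nonlinear estimators (e.g., radial-basis-function gradients), where an analytic propagation is no longer available; that is where the Monte Carlo approach earns its keep.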

  18. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field

    PubMed Central

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, heading error is not observable. Hence, position estimates tend to drift even when cyclic ZUPTs are applied in the update step of the Extended Kalman Filter (EKF). This motivates the use of other motion constraints of pedestrian gait, and of any other valuable heading information that is available. In this paper, we exploit two more motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called “virtual sensors”), though considerably reducing drift in a PNS, still need an absolute heading reference. One common absolute heading sensor is the magnetometer, which senses the Earth’s magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed to incorporate only healthy magnetometer data in the EKF update step, reducing drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms. PMID:27618056
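One simple gate in the spirit of the MAD step: accept a magnetometer sample for the EKF heading update only when its field magnitude stays close to the local Earth-field reference. The reference magnitude, tolerance, and samples below are assumptions for the sketch, not the paper's algorithm.

```python
import numpy as np

EARTH_FIELD_UT = 50.0      # local reference field magnitude, microtesla (assumed)
TOLERANCE_UT = 5.0         # acceptance band around the reference (assumed)

def is_healthy(mag_xyz):
    """True if the 3-axis sample's magnitude lies within the acceptance band."""
    return abs(np.linalg.norm(mag_xyz) - EARTH_FIELD_UT) <= TOLERANCE_UT

samples = np.array([
    [30.0, 20.0, 30.0],    # |B| ~ 46.9 uT -> healthy
    [60.0, 40.0, 30.0],    # |B| ~ 78.1 uT -> distorted (e.g., near steel)
    [28.0, 28.0, 28.0],    # |B| ~ 48.5 uT -> healthy
])
healthy = [is_healthy(s) for s in samples]
# Only samples flagged healthy would be passed to the EKF heading update.
```

A fielded detector would typically also check the dip angle and temporal consistency, not magnitude alone.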

  19. Evolutionary design of a generalized polynomial neural network for modelling sediment transport in clean pipes

    NASA Astrophysics Data System (ADS)

    Ebtehaj, Isa; Bonakdari, Hossein; Khoshbin, Fatemeh

    2016-10-01

To determine the minimum velocity required to prevent sedimentation, six different models were proposed to estimate the densimetric Froude number (Fr). The dimensionless parameters of the models were applied along with a combination of the group method of data handling (GMDH) and a multi-target genetic algorithm. An evolutionary design of the generalized GMDH was thus developed using a genetic algorithm with a specific coding scheme so as not to restrict connectivity configurations to abutting layers only. In addition, a new preserving mechanism in the multi-target genetic algorithm was utilized for the Pareto optimization of the GMDH. The results indicated that the most accurate model was the one that used the volumetric concentration of sediment (CV), relative hydraulic radius (d/R), dimensionless particle number (Dgr) and overall sediment friction factor (λs) to estimate Fr. Furthermore, a comparison between the proposed method and traditional equations indicated that GMDH is more accurate than the existing equations.
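The target quantity of these models is the standard densimetric Froude number, Fr = V / sqrt(g (s - 1) d), with V the flow velocity, s the sediment relative density, and d the particle diameter; inverting it for V gives the minimum self-cleansing velocity. The example values below are assumed, not taken from the paper.

```python
import math

def densimetric_froude(V, d, s=2.65, g=9.81):
    """Fr = V / sqrt(g*(s-1)*d); s=2.65 is typical quartz sand."""
    return V / math.sqrt(g * (s - 1.0) * d)

def limiting_velocity(Fr, d, s=2.65, g=9.81):
    """Invert Fr for the minimum velocity that prevents sedimentation."""
    return Fr * math.sqrt(g * (s - 1.0) * d)

Fr = densimetric_froude(V=0.6, d=0.5e-3)   # 0.6 m/s flow, 0.5 mm sand
V_min = limiting_velocity(Fr, d=0.5e-3)    # recovers 0.6 m/s
```

The GMDH models in the paper predict Fr from (CV, d/R, Dgr, λs); once Fr is predicted, the conversion to a design velocity is the one-liner above.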

  20. Improving estimations of greenhouse gas transfer velocities by atmosphere-ocean couplers in Earth-System and regional models

    NASA Astrophysics Data System (ADS)

    Vieira, V. M. N. C. S.; Sahlée, E.; Jurus, P.; Clementi, E.; Pettersson, H.; Mateus, M.

    2015-09-01

Earth-System and regional models forecasting climate change and its impacts simulate atmosphere-ocean gas exchanges using classical yet overly simple generalizations that rely on wind speed as the sole mediator, neglecting factors such as sea-surface agitation, atmospheric stability, current drag with the bottom, rain and surfactants. These factors have been shown to be fundamental for accurate estimates, particularly in the coastal ocean, where a significant part of the atmosphere-ocean greenhouse gas exchange occurs. We include several of these factors in a customizable algorithm proposed as the basis of novel couplers of the atmospheric and oceanographic model components. We tested its performance with measured and simulated data from the European coastal ocean, finding that our algorithm forecasts greenhouse gas exchanges largely different from those forecast by the generalization currently in use. Our algorithm allows calculus vectorization and parallel processing, improving computational speed by roughly 12× on a single CPU core, an essential feature for Earth-System model applications.
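The contrast the abstract draws can be sketched as a wind-only transfer-velocity kernel versus the same kernel modulated by additional factors. The wind-only form below is the widely used Wanninkhof-type quadratic k660 = a*U10^2 (k in cm/h, a ≈ 0.251, Schmidt-number scaled); the extra multiplicative factors are hypothetical placeholders, not the authors' coefficients.

```python
def k_wind_only(u10, sc=660.0, a=0.251):
    """Classical wind-speed-only parameterization, Schmidt-number scaled.
    u10: 10-m wind speed (m/s); returns transfer velocity in cm/h."""
    return a * u10 ** 2 * (sc / 660.0) ** -0.5

def k_extended(u10, sc=660.0, a=0.251,
               f_stability=1.0, f_surfactant=1.0, f_agitation=1.0):
    """Wind-only kernel modulated by placeholder factors standing in for
    atmospheric stability, surfactant suppression, and surface agitation."""
    return k_wind_only(u10, sc, a) * f_stability * f_surfactant * f_agitation

k0 = k_wind_only(7.0)                                     # open-ocean style
k1 = k_extended(7.0, f_surfactant=0.8, f_agitation=1.3)   # coastal-style tweak
# Even modest factors change the coastal estimate appreciably, which is
# the abstract's point about the wind-only generalization.
```

Both functions are trivially vectorizable over model grid cells, mirroring the vectorization/parallelization argument in the abstract.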

  1. Glacier surface velocity estimation in the West Kunlun Mountain range from L-band ALOS/PALSAR images using modified synthetic aperture radar offset-tracking procedure

    NASA Astrophysics Data System (ADS)

    Ruan, Zhixing; Guo, Huadong; Liu, Guang; Yan, Shiyong

    2014-01-01

Glacier movement is closely related to changes in climatic, hydrological, and geological factors. However, detecting glacier surface flow velocity with conventional ground surveys is challenging. Remote sensing techniques, especially synthetic aperture radar (SAR), provide regular observations covering larger-scale glacier regions. We estimate glacier surface flow velocity in the West Kunlun Mountains using modified offset-tracking techniques based on ALOS/PALSAR images. Three maps of glacier flow velocity for the period 2007 to 2010 are derived by a procedure of offset detection using cross correlation in the Fourier domain and global offset elimination using thin-plate smoothing splines. Our results indicate that, on average, winter glacier motion on the North Slope is 1 cm/day faster than on the South Slope, a result which corresponds well with the local topography. The performance of our method as regards the reliability of the extracted displacements and the robustness of the algorithm is discussed. The SAR-based offset tracking proves reliable and robust, making it possible to investigate comprehensive glacier movement and its response mechanism to environmental change.
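The Fourier-domain cross-correlation step can be illustrated with phase correlation on an image patch pair: an integer pixel offset between a reference and a shifted patch is recovered from the peak of the inverse-transformed normalized cross-spectrum. The random patch and offset below are synthetic; real offset tracking adds subpixel peak fitting and quality filtering.

```python
import numpy as np

def phase_correlation_offset(ref, moved):
    """Return the (row, col) integer shift that maps `ref` onto `moved`."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                       # stand-in SAR patch
moved = np.roll(ref, shift=(5, -3), axis=(0, 1)) # known "glacier" offset
offset = phase_correlation_offset(ref, moved)
```

Dividing the recovered offset by the acquisition time separation converts pixels to a surface velocity.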

  2. Site Classification using Multichannel Analysis of Surface Waves (MASW) Method on Soft and Hard Ground

    NASA Astrophysics Data System (ADS)

    Ashraf, M. A. M.; Kumar, N. S.; Yusoh, R.; Hazreek, Z. A. M.; Aziman, M.

    2018-04-01

Site classification based on the average shear wave velocity down to 30 m depth (Vs(30)) is a typical parameter. Numerous geophysical methods have been proposed for estimating shear wave velocity, utilizing an assortment of testing configurations, processing methods, and inversion algorithms. The Multichannel Analysis of Surface Waves (MASW) method is practiced by numerous specialists and professionals in geotechnical engineering for local site characterization and classification. This study aims to determine the site classification of soft and hard ground using the MASW method. The subsurface classification was made utilizing the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. Two sites were chosen for acquiring shear wave velocities: one in the state of Pulau Pinang for soft soil and one in Perlis for hard rock. The results suggest that the MASW technique can be used to map the spatial distribution of shear wave velocity (Vs(30)) in soil and rock to characterize site areas.
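Vs(30) is the travel-time-weighted average over the top 30 m, Vs30 = 30 / sum(h_i / v_i), and the NEHRP boundaries (m/s) are A > 1500, B 760-1500, C 360-760, D 180-360, E < 180. The layer models below are invented for illustration:

```python
def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s) pairs from the surface down,
    covering at least 30 m; deeper layers are truncated at 30 m depth."""
    total_h, travel_t = 0.0, 0.0
    for h, v in layers:
        h = min(h, 30.0 - total_h)
        if h <= 0:
            break
        travel_t += h / v
        total_h += h
    return total_h / travel_t

def nehrp_class(v):
    """NEHRP site class from Vs(30) in m/s."""
    if v > 1500: return "A"
    if v > 760:  return "B"
    if v > 360:  return "C"
    if v > 180:  return "D"
    return "E"

soft = vs30([(10, 150), (20, 250)])   # hypothetical soft-soil profile
hard = vs30([(5, 700), (40, 1600)])   # hypothetical shallow soil over rock
```

Note that the harmonic (travel-time) averaging weights slow shallow layers heavily, which is why a thin soft cap can pull a rock site down a class.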

  3. Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode.

    PubMed

    Shen, Shijian; Nie, Xin; Zhang, Xinggan

    2018-02-03

Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides a sliding spotlight mode for the first time. Sliding spotlight mode is a novel mode that realizes imaging with not only high resolution but also wide swath. Several key technologies for the sliding spotlight mode in high-resolution spaceborne SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated with simulations and measured data.

  4. A 1DVAR-based snowfall rate retrieval algorithm for passive microwave radiometers

    NASA Astrophysics Data System (ADS)

    Meng, Huan; Dong, Jun; Ferraro, Ralph; Yan, Banghua; Zhao, Limin; Kongoli, Cezar; Wang, Nai-Yu; Zavodsky, Bradley

    2017-06-01

    Snowfall rate retrieval from spaceborne passive microwave (PMW) radiometers has gained momentum in recent years. PMW can be so utilized because of its ability to sense in-cloud precipitation. A physically based, overland snowfall rate (SFR) algorithm has been developed using measurements from the Advanced Microwave Sounding Unit-A/Microwave Humidity Sounder sensor pair and the Advanced Technology Microwave Sounder. Currently, these instruments are aboard five polar-orbiting satellites, namely, NOAA-18, NOAA-19, Metop-A, Metop-B, and Suomi-NPP. The SFR algorithm relies on a separate snowfall detection algorithm that is composed of a satellite-based statistical model and a set of numerical weather prediction model-based filters. There are four components in the SFR algorithm itself: cloud properties retrieval, computation of ice particle terminal velocity, ice water content adjustment, and the determination of snowfall rate. The retrieval of cloud properties is the foundation of the algorithm and is accomplished using a one-dimensional variational (1DVAR) model. An existing model is adopted to derive ice particle terminal velocity. Since no measurement of cloud ice distribution is available when SFR is retrieved in near real time, such distribution is implicitly assumed by deriving an empirical function that adjusts retrieved SFR toward radar snowfall estimates. Finally, SFR is determined numerically from a complex integral. The algorithm has been validated against both radar and ground observations of snowfall events from the contiguous United States with satisfactory results. Currently, the SFR product is operationally generated at the National Oceanic and Atmospheric Administration and can be obtained from that organization.

  5. Testing the accuracy of redshift-space group-finding algorithms

    NASA Astrophysics Data System (ADS)

    Frederic, James J.

    1995-04-01

Using simulated redshift surveys generated from a high-resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find 'fingers of god,' while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm as well as the ideal algorithm parameters depends on the purpose for which groups are to be studied. The Huchra & Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius & White algorithm misses high velocity dispersion groups and members but is less likely to include interlopers in its group assignments. Adjusting the parameters of either algorithm results in a trade-off between group accuracy and completeness. In a companion paper we investigate the accuracy of virial mass estimates and clustering properties of groups identified using these algorithms.
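A bare-bones friends-of-friends sketch in the spirit of both schemes: two galaxies are linked when their transverse separation is below a distance link and their line-of-sight velocity difference is below a velocity link. The union-find grouping and toy catalog are illustrative, not either paper's implementation (which also scales the linking lengths with distance).

```python
import numpy as np

def fof_groups(xy, v_los, d_link, v_link):
    """Friends-of-friends with separate transverse (d_link) and
    line-of-sight velocity (v_link) linking lengths."""
    n = len(xy)
    parent = list(range(n))
    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if (np.hypot(*(xy[i] - xy[j])) < d_link and
                    abs(v_los[i] - v_los[j]) < v_link):
                parent[find(i)] = find(j)
    groups = {}
    for idx in range(n):
        groups.setdefault(find(idx), []).append(idx)
    return sorted(groups.values(), key=len, reverse=True)

# Toy catalog: a compact "finger of god" plus two field galaxies.
xy = np.array([[0.0, 0.0], [0.1, 0.1], [0.05, -0.1], [3.0, 3.0], [5.0, 0.0]])
v_los = np.array([7000.0, 7600.0, 6500.0, 7000.0, 12000.0])  # km/s
groups = fof_groups(xy, v_los, d_link=0.5, v_link=1500.0)
# A generous v_link (Huchra & Geller-style) keeps the whole finger in
# one group; a small v_link (Nolthenius & White-style) would fragment it.
```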

  6. Spacecraft angular velocity estimation algorithm for star tracker based on optical flow techniques

    NASA Astrophysics Data System (ADS)

    Tang, Yujie; Li, Jian; Wang, Gangyi

    2018-02-01

An integrated navigation system often uses a traditional gyro and a star tracker for high-precision navigation, with the shortcomings of large volume, heavy weight, and high cost. With the development of autonomous navigation for deep space and small spacecraft, the star tracker has gradually been used for attitude calculation and direct angular velocity measurement. At the same time, given the dynamic imaging requirements of remote sensing and other imaging satellites, measuring the angular velocity under dynamic conditions to improve the accuracy of the star tracker is a hotspot of future research. We propose an approach to measure the angular rate without a gyro and improve the dynamic performance of the star tracker. First, a star extraction algorithm based on morphology is used to extract the star regions, and the stars in the two images are matched by angular-distance voting. The displacement of the star image is then measured by an improved optical flow method. Finally, the triaxial angular velocity of the star tracker is calculated from the star vectors using the least squares method. The method has the advantages of fast matching speed, strong noise resistance, and good dynamic performance, and the triaxial angular velocity of the star tracker can be obtained accurately. The star tracker can thus achieve better tracking performance and dynamic attitude positioning accuracy, laying a good foundation for the wide application of various satellites and complex space missions.
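The final least-squares step can be sketched as follows. In the body frame, an inertially fixed star direction u evolves as du/dt = -omega x u; stacking the equivalent relation u x omega = du/dt for all matched stars gives an overdetermined linear system for the triaxial rate. The synthetic star field and first-order propagation below are assumptions for the sketch, not the paper's pipeline.

```python
import numpy as np

def skew(u):
    """Skew-symmetric matrix [u]_x such that skew(u) @ w == np.cross(u, w)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def angular_velocity(u1, u2, dt):
    """Least-squares omega from matched unit star vectors in two frames."""
    A = np.vstack([skew(u) for u in u1])    # rows encode u x omega
    b = ((u2 - u1) / dt).ravel()            # finite-difference du/dt
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

rng = np.random.default_rng(3)
omega_true = np.array([0.01, -0.02, 0.015])    # rad/s
dt = 0.05
u1 = rng.normal(size=(8, 3))
u1 /= np.linalg.norm(u1, axis=1, keepdims=True)
u2 = u1 + np.cross(-omega_true, u1) * dt       # first-order propagation
omega_est = angular_velocity(u1, u2, dt)
```

With eight matched stars the 24x3 system is heavily overdetermined, which is what makes the estimate robust to individual star-displacement noise.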

  7. Rover Slip Validation and Prediction Algorithm

    NASA Technical Reports Server (NTRS)

    Yen, Jeng

    2009-01-01

    A physical-based simulation has been developed for the Mars Exploration Rover (MER) mission that applies a slope-induced wheel-slippage to the rover location estimator. Using the digital elevation map from the stereo images, the computational method resolves the quasi-dynamic equations of motion that incorporate the actual wheel-terrain speed to estimate the gross velocity of the vehicle. Based on the empirical slippage measured by the Visual Odometry software of the rover, this algorithm computes two factors for the slip model by minimizing the distance of the predicted and actual vehicle location, and then uses the model to predict the next drives. This technique, which has been deployed to operate the MER rovers in the extended mission periods, can accurately predict the rover position and attitude, mitigating the risk and uncertainties in the path planning on high-slope areas.

  8. Anomaly Detection in Test Equipment via Sliding Mode Observers

    NASA Technical Reports Server (NTRS)

    Solano, Wanda M.; Drakunov, Sergey V.

    2012-01-01

    Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and actual system. 
The unique properties of sliding mode control allow not only steering of the model's internal states to the states of the real-life system, but also identification of the disturbance or anomaly that may occur.

  9. Developing Improved Water Velocity and Flux Estimation from AUVs - Results From Recent ASTEP Field Programs

    NASA Astrophysics Data System (ADS)

    Kinsey, J. C.; Yoerger, D. R.; Camilli, R.; German, C. R.

    2010-12-01

    Water velocity measurements are crucial to quantifying fluxes and better understanding water as a fundamental transport mechanism for marine chemical and biological processes. The importance of flux to understanding these processes makes it a crucial component of astrobiological exploration to moons possessing large bodies of water, such as Europa. Present technology allows us to obtain submerged water velocity measurements from stationary platforms; rarer are measurements from submerged vehicles which possess the ability to autonomously survey tens of kilometers over extended periods. Improving this capability would also allow us to obtain co-registered water velocity and other sensor data (e.g., mass spectrometers, temperature, oxygen, etc) and significantly enhance our ability to estimate fluxes. We report results from 4 recent expeditions in which we measured water velocities from autonomous underwater vehicles (AUVs) to help quantify flux in three different oceanographic contexts: hydrothermal vent plumes; an oil spill cruise responding to the 2010 Deepwater Horizon blowout; and two expeditions investigating naturally occurring methane seeps. On all of these cruises, we directly measured the water velocities with an acoustic Doppler current profiler (ADCP) mounted on the AUV. Vehicle motion was corrected for using bottom-lock Doppler tracks when available and, in the absence of bottom-lock, estimates of vehicle velocity based on dynamic models. In addition, on the methane seep cruises, we explored the potential of using acoustic mapping sonars, such as multi-beam and sub-bottom profiling systems, to localize plumes and indirectly quantify flux. Data obtained on these expeditions enhanced our scientific investigations and provides data for future development of algorithms for autonomously processing, identifying, and classifying water velocity and flux measurements. 
Such technology will be crucial in future astrobiology missions where highly constrained bandwidth will require robots to possess sufficient autonomy to process and react to data independent of human interpretation and interaction.

  10. Space Object Maneuver Detection Algorithms Using TLE Data

    NASA Astrophysics Data System (ADS)

    Pittelkau, M.

    2016-09-01

An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data are available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated: The first is a fading-memory Kalman filter, which is equivalent to a sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|² of change-in-velocity vectors (ΔV) computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order-statistics filters, which belong to the theory of robust statistics. The output of the median filter is practically insensitive to outliers, i.e., large maneuvers. The median of the |ΔV|² data is proportional to the variance of ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceed a constant times the estimated variance.
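A toy version of the third detector: a sliding median over the |ΔV|² series gives a robust background level that large maneuvers cannot inflate, and a maneuver is flagged when a sample exceeds a constant times that level. The window size, detection constant, and synthetic series are assumed for illustration, not taken from the paper.

```python
import numpy as np

def median_filter(x, window=11):
    """Sliding median with edge padding; robust to isolated spikes."""
    half = window // 2
    xp = np.pad(x, half, mode="edge")
    return np.array([np.median(xp[i:i + window]) for i in range(len(x))])

def detect_maneuvers(dv2, window=11, k=25.0):
    """Flag samples exceeding k times the local robust background level."""
    scale = median_filter(dv2, window)
    return np.flatnonzero(dv2 > k * scale)

rng = np.random.default_rng(7)
dv2 = rng.chisquare(3, size=200) * 1e-6   # noise-level |dV|^2 series
dv2[120] = 5e-3                            # injected maneuver
hits = detect_maneuvers(dv2)
# The spike at index 120 barely moves the local median (one outlier
# among 11 samples), so the threshold stays low and the maneuver stands out.
```

This insensitivity of the median to the maneuver itself is precisely the order-statistics robustness the abstract invokes.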

  11. A Damping Grid Strapdown Inertial Navigation System Based on a Kalman Filter for Ships in Polar Regions.

    PubMed

    Huang, Weiquan; Fang, Tao; Luo, Li; Zhao, Lin; Che, Fengzhu

    2017-07-03

    The grid strapdown inertial navigation system (SINS) used in polar navigation exhibits the same three kinds of periodic oscillation errors as a common SINS based on a geographic coordinate system. For ships that have external information available to conduct a system reset regularly, suppressing the Schuler periodic oscillation is an effective way to enhance navigation accuracy. In this paper, a Kalman filter based on the grid SINS error model for ships is established. The errors of the grid-level attitude angles can be accurately estimated even when the external velocity contains a constant error, and correcting those errors through feedback effectively damps the Schuler periodic oscillation. The simulation results show that, with the aid of an external reference velocity, the proposed external level damping algorithm based on the Kalman filter suppresses the Schuler periodic oscillation effectively. Compared with the traditional external level damping algorithm based on a damping network, the proposed algorithm reduces the overshoot errors that arise when the grid SINS switches from the non-damping state to the damping state, which effectively improves the navigation accuracy of the system.

  12. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C., K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After formulating the problem and specifying the equation of motion needed to reconstruct the aircraft point cloud from consecutive scans, three methods are investigated. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is applied between consecutive point clouds to determine the horizontal translation of the captured aircraft body. Three ICP variants were compared, namely the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP; the 2-DoF 3D ICP provided the best performance. Finally, the third algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane body. The three methods were compared using three test datasets distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, whereas the first and third methods give robust and accurate results at a 40 m object distance and at ~12 knots for a small Cessna airplane.
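    The translation-only ("2-DoF") flavour of ICP can be illustrated with a minimal sketch: brute-force nearest-neighbour correspondences and a mean-residual translation update. The paper's implementation details are not given, so this is an assumed simplification on synthetic data:

```python
import numpy as np

def icp_translation(src, dst, n_iter=20):
    """Translation-only ICP sketch: repeatedly match each source point to its
    nearest destination point and shift the source by the mean residual."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    t = np.zeros(2)
    for _ in range(n_iter):
        moved = src + t
        # Brute-force nearest-neighbour correspondences (fine for a sketch).
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nearest = dst[d2.argmin(axis=1)]
        t += (nearest - moved).mean(axis=0)
    return t

# Two consecutive "scans" of the same rigid body, offset by a known shift.
xs, ys = np.meshgrid(np.arange(0.0, 5.0, 0.5), np.arange(0.0, 5.0, 0.5))
scan_k = np.column_stack([xs.ravel(), ys.ravel()])
scan_k1 = scan_k + np.array([0.2, 0.05])
t = icp_translation(scan_k, scan_k1)
print(np.round(t, 3))   # recovers the applied [0.2, 0.05] translation
```

Dividing the recovered horizontal translation by the scan interval gives a velocity estimate, and its direction gives the heading.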

  13. Applying Workspace Limitations in a Velocity-Controlled Robotic Mechanism

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Platt, Robert J., Jr. (Inventor)

    2014-01-01

    A robotic system includes a robotic mechanism responsive to velocity control signals, and a permissible workspace defined by a convex-polygon boundary. A host machine determines a position of a reference point on the mechanism with respect to the boundary, and includes an algorithm for enforcing the boundary by automatically shaping the velocity control signals as a function of the position, thereby providing smooth and unperturbed operation of the mechanism along the edges and corners of the boundary. The algorithm is suited for application with higher speeds and/or external forces. A host machine includes an algorithm for enforcing the boundary by shaping the velocity control signals as a function of the reference point position, and a hardware module for executing the algorithm. A method for enforcing the convex-polygon boundary is also provided that shapes a velocity control signal via a host machine as a function of the reference point position.
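    The patent abstract does not give the shaping law itself; one simple way to realize such boundary enforcement is to cancel the velocity component directed out through any active edge of the convex polygon, which preserves sliding motion along edges and handles corners smoothly. A hedged sketch (all names and the shaping rule are illustrative):

```python
import numpy as np

def shape_velocity(p, v, vertices):
    """Remove the outward component of a commanded velocity v at point p for
    every edge of a convex polygon (counter-clockwise vertices) that the
    reference point is on or beyond. Motion along an edge is preserved;
    motion through it is cancelled, so corners are handled smoothly."""
    p, v = np.asarray(p, float), np.asarray(v, float)
    verts = np.asarray(vertices, float)
    for a, b in zip(verts, np.roll(verts, -1, axis=0)):
        edge = b - a
        n = np.array([edge[1], -edge[0]])        # outward normal (CCW polygon)
        n /= np.linalg.norm(n)
        on_boundary = np.dot(p - a, n) >= 0.0    # at or beyond this edge
        pushing_out = np.dot(v, n) > 0.0
        if on_boundary and pushing_out:
            v = v - np.dot(v, n) * n             # cancel outward component
    return v

square = [(0, 0), (1, 0), (1, 1), (0, 1)]        # CCW unit square workspace
print(shape_velocity([1.0, 0.5], [1.0, 1.0], square))  # [0. 1.] slides along edge
```

At a corner, two edges are active and both outward components are removed, so the commanded velocity degrades gracefully to zero rather than jittering.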

  14. Flow measurements in sewers based on image analysis: automatic flow velocity algorithm.

    PubMed

    Jeanbourquin, D; Sage, D; Nguyen, L; Schaeli, B; Kayal, S; Barry, D A; Rossi, L

    2011-01-01

    Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques. The image-based water velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and the image-processing algorithms in order to automatically calculate flow velocity online. Results of investigations conducted in a CSO are presented. The system was found to measure water velocities reliably, thereby providing the means to understand particular hydraulic behaviors.

  15. A novel JEAnS analysis of the Fornax dwarf using evolutionary algorithms: mass follows light with signs of an off-centre merger

    NASA Astrophysics Data System (ADS)

    Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.; Guglielmo, Magda; Kafle, Prajwal R.; Wilkinson, Mark I.; Power, Chris

    2017-09-01

    Dwarf galaxies, among the most dark matter dominated structures of our Universe, are excellent test-beds for dark matter theories. Unfortunately, mass modelling of these systems suffers from the well-documented mass-velocity anisotropy degeneracy. For the case of spherically symmetric systems, we describe a method for non-parametric modelling of the radial and tangential velocity moments. The method is a numerical velocity anisotropy 'inversion' with parametric mass models, where the radial velocity dispersion profile, σ_{rr}^2, is modelled as a B-spline, and the optimization is a three-step process that consists of (i) an evolutionary modelling to determine the mass model form and the best B-spline basis to represent σ_{rr}^2; (ii) an optimization of the smoothing parameters; and (iii) a Markov chain Monte Carlo analysis to determine the physical parameters. The mass-anisotropy degeneracy is reduced to mass model inference, irrespective of the kinematics. We test our method using synthetic data. Our algorithm constructs the best kinematic profile and discriminates between competing dark matter models. We apply our method to the Fornax dwarf spheroidal galaxy. Using a King brightness profile and testing various dark matter mass models, our model inference favours a simple mass-follows-light system. We find that the anisotropy profile of Fornax is tangential (β(r) < 0) and we estimate a total mass of M_{tot} = 1.613^{+0.050}_{-0.075} × 10^8 M_{⊙}, and a mass-to-light ratio of Υ_V = 8.93^{+0.32}_{-0.47} (M_{⊙}/L_{⊙}). The algorithm we present is a robust and computationally inexpensive method for non-parametric modelling of spherical clusters independent of the mass-anisotropy degeneracy.

  16. Hypocenter relocation of microseismic events using a 3-D velocity model of the shale-gas production site in the Horn River Basin

    NASA Astrophysics Data System (ADS)

    Woo, J. U.; Kim, J. H.; Rhie, J.; Kang, T. S.

    2016-12-01

    Microseismic monitoring is a crucial process for evaluating the efficiency of hydro-fracking and understanding the development of fracture networks. Consequently, it can provide valuable information for designing the post hydro-fracking stages and estimating the stimulated rock volumes. The fundamental information is a set of source parameters of microseismic events. The most important parameter is the hypocenter of each event, and thus accurate hypocenter determination is key to successful microseismic monitoring. The accuracy of hypocenters for a given dataset of seismic phase arrival times depends on the accuracy of the velocity model used in the seismic analysis. In this study, we evaluated how a 3-D model can affect the accuracy of hypocenters. We used auto-picked P- and S-wave travel-time data of about 8,000 events at a commercial shale gas production site in the Horn River Basin, Canada. The initial hypocenters of the events were determined using a single-difference linear inversion algorithm with a 1-D velocity model obtained from well-logging data. We then iteratively inverted the event travel times for 3-D velocity perturbations and relocated the hypocenters using a double-difference algorithm. A significant reduction of the errors in the final hypocenters was obtained. This result indicates that the 3-D model is useful for improving the performance of microseismic monitoring.

  17. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Hogg, David W.; Roweis, Sam T.

    2011-06-01

    We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
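    The E- and M-steps have a compact closed form in the one-dimensional, single-Gaussian special case, which already shows how known per-point noise variances are deconvolved away. The following sketch is that special case only, not the full d-dimensional mixture algorithm of the paper:

```python
import numpy as np

def extreme_deconvolution_1d(x, s2, n_iter=200):
    """EM for the K=1, 1-D special case of extreme deconvolution:
    each datum x[i] is a draw from N(mean, var) convolved with known
    noise N(0, s2[i]); recover the underlying mean and variance."""
    x, s2 = np.asarray(x, float), np.asarray(s2, float)
    mean, var = x.mean(), x.var()
    for _ in range(n_iter):
        # E-step: posterior moments of each latent (noise-free) value.
        gain = var / (var + s2)
        b = mean + gain * (x - mean)           # posterior means
        B = gain * s2                          # posterior variances
        # M-step: update the underlying (deconvolved) Gaussian.
        mean = b.mean()
        var = np.mean((b - mean) ** 2 + B)
    return mean, var

rng = np.random.default_rng(0)
true = rng.normal(0.0, 1.0, 5000)              # underlying N(0, 1)
s2 = rng.uniform(0.5, 2.0, 5000)               # heteroskedastic noise variances
x = true + rng.normal(0.0, np.sqrt(s2))
mean, var = extreme_deconvolution_1d(x, s2)
print(mean, var)   # var is close to the underlying 1.0, while x.var() is inflated
```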

  18. Engineering description of the ascent/descent bet product

    NASA Technical Reports Server (NTRS)

    Seacord, A. W., II

    1986-01-01

    The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file, which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) is defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.

  19. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra

    NASA Astrophysics Data System (ADS)

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-01

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI), intended for solving the vibrational Schrödinger equation, was developed. The main advantage of this approach is that it efficiently reduces the dimension of the active space generated in the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. This adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased with the use of an a posteriori error estimator (residue) to select the most relevant directions in which to expand the space. Two examples were selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface, with an active space reduced by about 90%.

  20. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra.

    PubMed

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-28

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI), intended for solving the vibrational Schrödinger equation, was developed. The main advantage of this approach is that it efficiently reduces the dimension of the active space generated in the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. This adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased with the use of an a posteriori error estimator (residue) to select the most relevant directions in which to expand the space. Two examples were selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface, with an active space reduced by about 90%.

  1. Experimental and theoretical studies of near-ground acoustic radiation propagation in the atmosphere

    NASA Astrophysics Data System (ADS)

    Belov, Vladimir V.; Burkatovskaya, Yuliya B.; Krasnenko, Nikolai P.; Rakov, Aleksandr S.; Rakov, Denis S.; Shamanaeva, Liudmila G.

    2017-11-01

    Results of experimental and theoretical studies of near-ground propagation of monochromatic acoustic radiation along atmospheric paths from a source to a receiver are presented. The analysis takes into account the contribution of multiple scattering from fluctuations of atmospheric temperature and wind velocity, the refraction of sound by wind velocity and temperature gradients, and its reflection by the underlying surface for different models of the atmosphere, depending on the sound frequency, the coefficient of reflection from the underlying surface, the propagation distance, and the source and receiver altitudes. Calculations were performed by the Monte Carlo method using the local estimation algorithm implemented in a computer program developed by the authors. Results of experimental investigations under controllable conditions are compared with theoretical estimates and with analytical calculations for the Delany-Bazley impedance model. Satisfactory agreement of the data confirms the correctness of the suggested computer program.

  2. Evaluation of the site effect with Heuristic Methods

    NASA Astrophysics Data System (ADS)

    Torres, N. N.; Ortiz-Aleman, C.

    2017-12-01

    The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, helping to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows source and path effects to be separated. Generalized inversion (Field and Jacob, 1995) is one of the alternative methods for estimating the local seismic response, and it involves solving a strongly non-linear multiparametric problem. In this work, the local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search compared to conventional linear methods. Using VEOX Network velocity records collected from August 2007 to March 2009, we estimate the source, path and site parameters corresponding to the S-wave amplitude spectra of the velocity records. The inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in the fit between observed and calculated spectra, but also when compared to previous work by several authors.

  3. Demonstration of UXO-PenDepth for the Estimation of Projectile Penetration Depth

    DTIC Science & Technology

    2010-08-01

    Effects (JTCG/ME) in August 2001. The accreditation process included verification and validation (V&V) by a subject matter expert (SME) other than... Within UXO-PenDepth, there are three sets of input parameters that are required: impact conditions (Fig. 1a), penetrator properties, and target... properties. The impact conditions that need to be defined are projectile orientation and impact velocity. The algorithm has been evaluated against

  4. Optimisation of the mean boat velocity in rowing.

    PubMed

    Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P

    2012-01-01

    In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.

  5. Investigation of Convection and Pressure Treatment with Splitting Techniques

    NASA Technical Reports Server (NTRS)

    Thakur, Siddharth; Shyy, Wei; Liou, Meng-Sing

    1995-01-01

    Treatment of convective and pressure fluxes in the Euler and Navier-Stokes equations using splitting formulas for convective velocity and pressure is investigated. Two schemes - controlled variation scheme (CVS) and advection upstream splitting method (AUSM) - are explored for their accuracy in resolving sharp gradients in flows involving moving or reflecting shock waves as well as a one-dimensional combusting flow with a strong heat release source term. For two-dimensional compressible flow computations, these two schemes are implemented in one of the pressure-based algorithms, whose very basis is the separate treatment of convective and pressure fluxes. For the convective fluxes in the momentum equations as well as the estimation of mass fluxes in the pressure correction equation (which is derived from the momentum and continuity equations) of the present algorithm, both first- and second-order (with minmod limiter) flux estimations are employed. Some issues resulting from the conventional use in pressure-based methods of a staggered grid, for the location of velocity components and pressure, are also addressed. Using the second-order fluxes, both CVS and AUSM type schemes exhibit sharp resolution. Overall, the combination of upwinding and splitting for the convective and pressure fluxes separately exhibits robust performance for a variety of flows and is particularly amenable for adoption in pressure-based methods.

  6. Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study.

    PubMed

    Shtark, Tomer; Gurfil, Pini

    2017-03-31

    Tracking a non-cooperative target is a challenge, because in unfamiliar environments most targets are unknown and unspecified. Stereovision is suited to this issue, because it allows passive scanning of large areas and estimation of the relative position, velocity and shape of objects. This research is an experimental effort aimed at developing, implementing and evaluating real-time non-cooperative target tracking methods using stereovision measurements only. A computer-vision feature detection and matching algorithm was developed in order to identify and locate the target in the captured images. Three different filters were designed for estimating the relative position and velocity, and their performance was compared. A line-of-sight control algorithm was used to keep the target within the field-of-view. Extensive analytical and numerical investigations were conducted on the multi-view stereo projection equations and their solutions, which were used to initialize the different filters. This research shows, using an experimental and numerical evaluation, the benefits of using the unscented Kalman filter and the total least squares technique in the stereovision-based tracking problem. These findings offer a general and more accurate method for solving the static and dynamic stereovision triangulation problems and the concomitant line-of-sight control.

  7. Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study

    PubMed Central

    Shtark, Tomer; Gurfil, Pini

    2017-01-01

    Tracking a non-cooperative target is a challenge, because in unfamiliar environments most targets are unknown and unspecified. Stereovision is suited to this issue, because it allows passive scanning of large areas and estimation of the relative position, velocity and shape of objects. This research is an experimental effort aimed at developing, implementing and evaluating real-time non-cooperative target tracking methods using stereovision measurements only. A computer-vision feature detection and matching algorithm was developed in order to identify and locate the target in the captured images. Three different filters were designed for estimating the relative position and velocity, and their performance was compared. A line-of-sight control algorithm was used to keep the target within the field-of-view. Extensive analytical and numerical investigations were conducted on the multi-view stereo projection equations and their solutions, which were used to initialize the different filters. This research shows, using an experimental and numerical evaluation, the benefits of using the unscented Kalman filter and the total least squares technique in the stereovision-based tracking problem. These findings offer a general and more accurate method for solving the static and dynamic stereovision triangulation problems and the concomitant line-of-sight control. PMID:28362338

  8. Rhythmic Extended Kalman Filter for Gait Rehabilitation Motion Estimation and Segmentation.

    PubMed

    Joukov, Vladimir; Bonnet, Vincent; Karg, Michelle; Venture, Gentiane; Kulic, Dana

    2018-02-01

    This paper proposes a method to enable the use of non-intrusive, small, wearable, and wireless sensors to estimate the pose of the lower body during gait and other periodic motions and to extract objective performance measures useful for physiotherapy. The Rhythmic Extended Kalman Filter (Rhythmic-EKF) algorithm is developed to estimate the pose, learn an individualized model of periodic movement over time, and use the learned model to improve pose estimation. The proposed approach learns a canonical dynamical system model of the movement during online observation, which is used to accurately model the acceleration during pose estimation. The canonical dynamical system models the motion as a periodic signal. The estimated phase and frequency of the motion also allow the proposed approach to segment the motion into repetitions and extract useful features, such as gait symmetry, step length, and mean joint movement and variance. The algorithm is shown to outperform the extended Kalman filter in simulation, on healthy participant data, and stroke patient data. For the healthy participant marching dataset, the Rhythmic-EKF improves joint acceleration and velocity estimates over regular EKF by 40% and 37%, respectively, estimates joint angles with 2.4° root mean squared error, and segments the motion into repetitions with 96% accuracy.

  9. A modified Holly-Preissmann scheme for simulating sharp concentration fronts in streams with steep velocity gradients using RIV1Q

    NASA Astrophysics Data System (ADS)

    Liu, Zhao-wei; Zhu, De-jun; Chen, Yong-can; Wang, Zhi-gang

    2014-12-01

    RIV1Q is the stand-alone water quality program of CE-QUAL-RIV1, a hydraulic and water quality model developed by the U.S. Army Corps of Engineers Waterways Experiment Station. It uses an operator-splitting algorithm, and the advection term in the governing equation is treated with the explicit two-point, fourth-order accurate Holly-Preissmann scheme in order to preserve numerical accuracy for advection of sharp concentration gradients. In the scheme, the spatial derivative of the transport equation, which includes the derivative of velocity, is introduced to update the first derivative of the dependent variable. In streams with large cross-sectional variation, steep velocity gradients are common and must be estimated correctly. In the original version of RIV1Q, however, the derivative of velocity is approximated by a first-order accurate finite difference. Its leading truncation error produces a numerical error in concentration that depends on the velocity and concentration gradients and increases with decreasing Courant number. The simulation may also become unstable when a sharp velocity drop occurs. In the present paper, the derivative of velocity is estimated with a modified second-order accurate scheme, and the corresponding numerical error in concentration decreases. Additionally, the stability of the simulation is improved. The modified scheme is verified with a hypothetical channel case, and the results demonstrate that satisfactory accuracy and stability can be achieved even when the Courant number is very low. Finally, the applicability of the modified scheme is discussed.
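    The accuracy gap between a first-order and a second-order estimate of the velocity derivative can be seen in a few lines (illustrative only; the actual RIV1Q stencils are not reproduced here):

```python
import numpy as np

# First-order one-sided vs second-order central estimates of du/dx,
# illustrating the accuracy gain the modified scheme exploits.
def du_dx_first_order(u, dx):
    return (u[1:-1] - u[:-2]) / dx            # backward difference, O(dx)

def du_dx_second_order(u, dx):
    return (u[2:] - u[:-2]) / (2 * dx)        # central difference, O(dx^2)

dx = 0.1
x = np.linspace(0.0, 1.0, 11)
u = x ** 2                                    # exact derivative is 2x
exact = 2 * x[1:-1]
err1 = np.abs(du_dx_first_order(u, dx) - exact).max()
err2 = np.abs(du_dx_second_order(u, dx) - exact).max()
print(err1, err2)   # the central difference is exact for a quadratic velocity
```

For this quadratic velocity profile the backward difference carries its full O(dx) truncation error, while the central difference is exact; on a general profile the central estimate's error instead shrinks quadratically with dx.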

  10. Propagation of the velocity model uncertainties to the seismic event location

    NASA Astrophysics Data System (ADS)

    Gesret, A.; Desassis, N.; Noble, M.; Romary, T.; Maisons, C.

    2015-01-01

    Earthquake hypocentre locations are crucial in many domains of application (academic and industrial), as seismic event location maps are commonly used to delineate faults or fractures. The interpretation of these maps depends on location accuracy and on the reliability of the associated uncertainties. The largest contribution to location and uncertainty errors is the fact that velocity model errors are usually not correctly taken into account. We propose a new Bayesian formulation that properly integrates knowledge of the velocity model into the probabilistic earthquake location. In this work, the velocity model uncertainties are first estimated with a Bayesian tomography of active shot data. We implement a Monte Carlo sampling algorithm to generate velocity models distributed according to the posterior distribution. In a second step, we propagate the velocity model uncertainties to the seismic event location in a probabilistic framework. This yields more reliable hypocentre locations, as well as associated uncertainties that account for both picking and velocity model uncertainties. We illustrate the tomography results and the gain in accuracy of earthquake location for two synthetic examples and one real data case study in the context of induced microseismicity.

  11. Multi-object tracking of human spermatozoa

    NASA Astrophysics Data System (ADS)

    Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen

    2008-03-01

    We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
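    The assignment step can be illustrated with a toy cost matrix. Here plain Euclidean distances stand in for the HMM-based plausibility costs, and a brute-force search stands in for the Hungarian algorithm; it solves the same rectangular problem (assuming at least as many tracks as detections), just less efficiently:

```python
import numpy as np
from itertools import permutations

def assign(cost):
    """Minimum-cost matching of tracks (rows) to detections (columns) on a
    rectangular cost matrix. Brute force over permutations stands in for the
    Hungarian algorithm; assumes cost has at least as many rows as columns."""
    n_tracks, n_det = cost.shape
    best, best_match = np.inf, None
    # Choose which track receives each detection (ordered selection).
    for trks in permutations(range(n_tracks), n_det):
        total = sum(cost[trk, det] for det, trk in enumerate(trks))
        if total < best:
            best = total
            best_match = [(trk, det) for det, trk in enumerate(trks)]
    return best_match

tracks = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])   # predicted positions
detections = np.array([[5.2, 4.9], [0.1, -0.2]])           # blobs in this frame
cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=-1)
print(sorted(assign(cost)))   # [(0, 1), (1, 0)]: track 2 gets no detection
```

Unmatched tracks (missing observations) and, with a transposed matrix, unmatched detections (spurious observations) fall out of the rectangular formulation naturally, as in the abstract.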

  12. Operational space trajectory tracking control of robot manipulators endowed with a primary controller of synthetic joint velocity.

    PubMed

    Moreno-Valenzuela, Javier; González-Hernández, Luis

    2011-01-01

    In this paper, a new control algorithm for operational space trajectory tracking control of robot arms is introduced. The new algorithm does not require velocity measurement and is based on (1) a primary controller which incorporates an algorithm to obtain synthesized velocity from joint position measurements and (2) a secondary controller which computes the desired joint acceleration and velocity required to achieve operational space motion control. The theory of singularly perturbed systems is crucial for the analysis of the closed-loop system trajectories. In addition, the practical viability of the proposed algorithm is explored through real-time experiments on a two-degree-of-freedom horizontal planar direct-drive arm. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  13. User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.

    1988-01-01

    Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates of aircraft position, velocity, attitude, and horizontal winds for use in guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed using a digital simulation of a commercial transport aircraft and tested with flight-recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements of a flight computer. For real-time operation, a multi-rate implementation of the FINDS algorithm was partitioned to execute on a dual parallel-processor configuration: one processor handling the translational dynamics and the other the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, flow charts for the key subprograms, the input and output files, the program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.

  14. A real-time algorithm for integrating differential satellite and inertial navigation information during helicopter approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hoang, TY

    1994-01-01

    A real-time, high-rate precision navigation Kalman filter algorithm is developed and analyzed. This Navigation algorithm blends various navigation data collected during terminal area approach of an instrumented helicopter. Navigation data collected include helicopter position and velocity from the Global Positioning System in differential mode (DGPS) as well as helicopter velocity and attitude from an inertial navigation system (INS). The goal of the Navigation algorithm is to increase the DGPS accuracy while producing navigational data at the 64 Hertz INS update rate. It is important to note that while the data were post-flight processed, the Navigation algorithm was designed for real-time analysis. The design of the Navigation algorithm resulted in a nine-state Kalman filter. The Kalman filter's state matrix contains position, velocity, and velocity bias components. The filter updates positional readings with DGPS position, INS velocity, and velocity bias information. In addition, the filter incorporates a sporadic data rejection scheme. This relatively simple model met and exceeded the ten-meter absolute positional requirement. The Navigation algorithm results were compared with truth data derived from a laser tracker. The helicopter flight profile included terminal glideslope angles of 3, 6, and 9 degrees. Two flight segments extracted during each terminal approach were used to evaluate the Navigation algorithm. The first segment recorded a small dynamic maneuver in the lateral plane, while the second recorded motion in the vertical plane. The longitudinal, lateral, and vertical averaged positional accuracies for all three glideslope approaches are as follows (mean plus or minus two standard deviations in meters): longitudinal (-0.03 plus or minus 1.41), lateral (-1.29 plus or minus 2.36), and vertical (-0.76 plus or minus 2.05).
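
    The core DGPS/INS blend can be sketched as a scalar Kalman filter along one axis: INS velocity drives the 64 Hz prediction and an occasional DGPS fix corrects the accumulated drift. All noise values below are illustrative, not from the thesis.

```python
# One-axis DGPS/INS blend: predict position with INS velocity at 64 Hz,
# correct with a DGPS position fix. Noise variances are assumed.
dt = 1.0 / 64.0       # INS update interval (64 Hz)
q = 1e-4              # process noise variance per step (assumed)
r = 4.0               # DGPS position measurement variance (assumed)

x, P = 0.0, 10.0      # position estimate (m) and its variance

def ins_predict(x, P, v_ins):
    """Propagate the position estimate with an INS velocity sample."""
    return x + v_ins * dt, P + q

def dgps_update(x, P, z):
    """Correct the predicted position with a DGPS fix."""
    K = P / (P + r)                    # Kalman gain
    return x + K * (z - x), (1 - K) * P

# One second of INS propagation at 2 m/s, then a single DGPS fix.
for _ in range(64):
    x, P = ins_predict(x, P, v_ins=2.0)
x, P = dgps_update(x, P, z=2.1)
print(round(x, 3), round(P, 3))  # position pulled toward the fix, variance reduced
```

    The thesis filter carries nine states (position, velocity, and velocity bias per axis), but each axis reduces to this same predict/update cycle.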

  15. Shear wave speed estimation by adaptive random sample consensus method.

    PubMed

    Lin, Haoming; Wang, Tianfu; Chen, Siping

    2014-01-01

    This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the metric used is finding a certain percentage of inliers according to the closest-distance criterion. To evaluate the method, simulation and phantom experiment results were compared against linear regression with all points (LRWAP) and the Radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimates are 20.00%, 4.67%, and 5.33% for LRWAP, ARANDSAC, and RS, respectively, in simulation, and 23.53%, 4.08%, and 1.08% in the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate in shear wave speed estimation.
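
    The robust-fit idea can be sketched with plain RANSAC: arrival time versus lateral position is fit through random two-point samples, the largest consensus set wins, and shear wave speed is the inverse slope of the refit line. This uses a fixed inlier tolerance and invented data, not the adaptive ARANDSAC variant.

```python
# RANSAC line fit of arrival time vs lateral position; speed = 1/slope.
import random

random.seed(0)
speed_true = 2.5                                    # m/s (assumed phantom value)
x = [i * 0.5e-3 for i in range(20)]                 # lateral positions (m)
t = [xi / speed_true for xi in x]                   # arrival times (s)
t[3] += 2e-3; t[11] -= 1.5e-3                       # inject two outliers

best_inliers = []
for _ in range(200):
    i, j = random.sample(range(len(x)), 2)          # minimal sample: two points
    slope = (t[j] - t[i]) / (x[j] - x[i])
    icpt = t[i] - slope * x[i]
    inl = [k for k in range(len(x)) if abs(t[k] - (slope * x[k] + icpt)) < 1e-4]
    if len(inl) > len(best_inliers):
        best_inliers = inl

# Least-squares refit on the consensus set, then invert slope for speed.
n = len(best_inliers)
sx = sum(x[k] for k in best_inliers); st = sum(t[k] for k in best_inliers)
sxx = sum(x[k] ** 2 for k in best_inliers); sxt = sum(x[k] * t[k] for k in best_inliers)
slope = (n * sxt - sx * st) / (n * sxx - sx ** 2)
print(round(1.0 / slope, 2))  # recovered shear wave speed, 2.5
```

    The adaptive variant replaces the fixed 1e-4 tolerance with a data-driven inlier percentage, which is what removes the preset threshold.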

  16. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Future developments in cueing algorithms proposed by the authors are also described. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also introduced.

  17. Improved pressure-velocity coupling algorithm based on minimization of global residual norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatwani, A.U.; Turan, A.

    1991-01-01

    In this paper an improved pressure-velocity coupling algorithm is proposed based on minimization of the global residual norm. The procedure is applied to the SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor that minimizes the global residual norm at each iteration level. Test computations for three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.

  18. Nonlinear calibration for petroleum water content measurement using PSO

    NASA Astrophysics Data System (ADS)

    Li, Mingbao; Zhang, Jiawei

    2008-10-01

    A new algorithm for strapdown inertial navigation system (SINS) state estimation based on neural networks is introduced. In the training strategy, an error vector and its delay are introduced; this error vector consists of the position and velocity differences between the system estimates and the GPS outputs. After state prediction and state update, the states of the system are estimated. After off-line training, the network can approximate the state transitions of the SINS, and after on-line training, the state estimation precision can be improved further by reducing network output errors. The convergence of the network is then discussed. Finally, several simulations with different noise levels are given. The results show that the neural network state estimator has lower noise sensitivity and better noise immunity than a Kalman filter.

  19. Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models

    NASA Astrophysics Data System (ADS)

    Shen, C.; Xia, J.; Mi, B.

    2016-12-01

    A successful inversion relies on exact forward modeling methods. Accurately calculating multi-mode dispersion curves of a given model is a key step in high-frequency surface-wave (Rayleigh-wave and Love-wave) methods. For normal models (shear (S)-wave velocity increasing with depth), the theoretical dispersion curves completely match the dispersion spectrum generated from the wave equation. For models containing a low-velocity layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g. the Thomson-Haskell algorithm, the Knopoff algorithm, the fast vector-transfer algorithm, and so on) fail to be consistent with the dispersion spectrum at high frequencies. When the corresponding wavelengths are short enough, they approach a value close to the surface-wave velocity of the low-velocity layer beneath the surface layer, rather than that of the surface layer. This phenomenon conflicts with the characteristics of surface waves and results in an erroneous inverted model. By comparing theoretical dispersion curves with simulated dispersion energy, we propose a direct and essential solution to accurately compute surface-wave phase velocities for low-velocity-layer models. Based on the proposed forward-modeling technique, we can achieve correct inversions for these types of models. Several synthetic data sets demonstrate the effectiveness of our method.

  20. A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows

    NASA Astrophysics Data System (ADS)

    Cardwell, Nicholas D.; Vlachos, Pavlos P.; Thole, Karen A.

    2011-10-01

    Multiphase flows (MPFs) offer a rich area of fundamental study with many practical applications. Examples of such flows range from the ingestion of foreign particulates in gas turbines to transport of particles within the human body. Experimental investigation of MPFs, however, is challenging, and requires techniques that simultaneously resolve both the carrier and discrete phases present in the flowfield. This paper presents a new multi-parametric particle-pairing algorithm for particle tracking velocimetry (MP3-PTV) in MPFs. MP3-PTV improves upon previous particle tracking algorithms by employing a novel variable pair-matching algorithm which utilizes displacement preconditioning in combination with estimated particle size and intensity to more effectively and accurately match particle pairs between successive images. To improve the method's efficiency, a new particle identification and segmentation routine was also developed. Validation of the new method was initially performed on two artificial data sets: a traditional single-phase flow published by the Visualization Society of Japan (VSJ) and an in-house generated MPF data set having a bi-modal distribution of particles diameters. Metrics of the measurement yield, reliability and overall tracking efficiency were used for method comparison. On the VSJ data set, the newly presented segmentation routine delivered a twofold improvement in identifying particles when compared to other published methods. For the simulated MPF data set, measurement efficiency of the carrier phases improved from 9% to 41% for MP3-PTV as compared to a traditional hybrid PTV. When employed on experimental data of a gas-solid flow, the MP3-PTV effectively identified the two particle populations and reported a vector efficiency and velocity measurement error comparable to measurements for the single-phase flow images. 
Simultaneous measurement of the dispersed particle and carrier flowfield velocities allowed for the calculation of instantaneous particle slip velocities, illustrating the algorithm's ability to robustly and accurately resolve polydisperse MPFs.
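
    A toy version of the multi-parametric pair matching: each first-frame particle is scored against second-frame candidates on displacement-preconditioned position together with size and intensity similarity. The weights, data, and greedy matcher are simplifications of MP3-PTV, invented for illustration.

```python
# Multi-parametric particle pairing: precondition by a predicted displacement,
# then score candidates on position residual, diameter, and intensity.
frame_a = [  # (x, y, diameter_px, intensity)
    (10.0, 10.0, 4.0, 200.0),
    (30.0, 12.0, 9.0, 120.0),
]
frame_b = [
    (33.1, 12.2, 9.1, 118.0),   # large particle, moved ~3 px
    (13.2, 10.1, 4.1, 197.0),   # small particle, moved ~3 px
]
predicted = (3.0, 0.0)  # displacement precondition (e.g. from a PIV pass)

def score(a, b):
    dx = b[0] - (a[0] + predicted[0])
    dy = b[1] - (a[1] + predicted[1])
    pos = dx * dx + dy * dy               # residual after preconditioning
    size = (b[2] - a[2]) ** 2             # diameter mismatch
    inten = ((b[3] - a[3]) / 255.0) ** 2  # normalized intensity mismatch
    return pos + size + inten

pairs = []
for i, a in enumerate(frame_a):
    j = min(range(len(frame_b)), key=lambda j: score(a, frame_b[j]))
    pairs.append((i, j))
print(pairs)  # [(0, 1), (1, 0)]
```

    Adding size and intensity to the cost is what lets the matcher separate overlapping particle populations that pure nearest-neighbor tracking would confuse.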

  1. Microseismic Velocity Imaging of the Fracturing Zone

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Chen, Y.

    2015-12-01

    Hydraulic fracturing of low-permeability reservoirs can induce microseismic events during fracture development. For this reason, microseismic monitoring using sensors on the surface or in boreholes has been widely used to delineate fracture spatial distribution and to understand fracturing mechanisms. It is often the case that the stimulated reservoir volume (SRV) is determined solely from microseismic locations. However, it is known that some fracture development stages are associated with long-period long-duration events rather than microseismic events. In addition, because microseismic events are inherently weak and different sources of noise exist during monitoring, some microseismic events cannot be detected and thus located. Therefore, the estimate of the SRV is biased if it is determined solely by microseismic locations. With the existence of fluids and fractures, the seismic velocity of reservoir layers decreases. Based on this fact, we have developed a near-real-time seismic velocity tomography method to characterize velocity changes associated with the fracturing process. The method is based on a double-difference seismic tomography algorithm and images the fracturing zone where microseismic events occur by using differential arrival times from microseismic event pairs. To take into account the varying data distribution across fracking stages, the method solves the velocity model in the wavelet domain so that different scales of model features can be obtained according to the data distribution. We have applied this real-time tomography method both to acoustic emission data from a lab experiment and to microseismic data from a downhole monitoring project for a shale gas hydraulic fracturing treatment. The tomography results from the lab data clearly show the velocity changes associated with different rock fracturing stages. For the field data application, microseismic events are located in low-velocity anomalies. By combining low-velocity anomalies with microseismic events, we can better estimate the SRV.

  2. Reliable two-dimensional phase unwrapping method using region growing and local linear estimation.

    PubMed

    Zhou, Kun; Zaitsev, Maxim; Bao, Shanglian

    2009-10-01

    In MRI, phase maps can provide useful information about parameters such as field inhomogeneity, velocity of blood flow, and the chemical shift between water and fat. As phase is defined in the (-pi,pi] range, however, phase wraps often occur, which complicates image analysis and interpretation. This work presents a two-dimensional phase unwrapping algorithm that uses quality-guided region growing and local linear estimation. The quality map employs the variance of the second-order partial derivatives of the phase as the quality criterion. Phase information from unwrapped neighboring pixels is used to predict the correct phase of the current pixel using a linear regression method. The algorithm was tested on both simulated and real data, and is shown to successfully unwrap phase images that are corrupted by noise and have rapidly changing phase. (c) 2009 Wiley-Liss, Inc.
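
    The core wrap-correction idea can be shown in one dimension: any neighbor-to-neighbor jump larger than pi is assumed to be a wrap and corrected by a multiple of 2*pi. The quality-guided ordering and local linear prediction of the 2-D method are omitted, and the signal is invented.

```python
# 1-D phase unwrapping: wrap each increment into (-pi, pi] before integrating.
import math

true_phase = [0.4 * i for i in range(12)]                        # smooth ramp
wrapped = [((p + math.pi) % (2 * math.pi)) - math.pi for p in true_phase]

unwrapped = [wrapped[0]]
for w in wrapped[1:]:
    d = w - unwrapped[-1]
    d -= 2 * math.pi * round(d / (2 * math.pi))  # remove the 2*pi wrap jumps
    unwrapped.append(unwrapped[-1] + d)

print(all(abs(u - p) < 1e-9 for u, p in zip(unwrapped, true_phase)))  # True
```

    In two dimensions the integration order matters, which is why the paper grows the unwrapped region from high-quality pixels first and predicts each new phase by local linear regression rather than from a single neighbor.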

  3. Spectral analysis of stellar light curves by means of neural networks

    NASA Astrophysics Data System (ADS)

    Tagliaferri, R.; Ciaramella, A.; Milano, L.; Barone, F.; Longo, G.

    1999-06-01

    Periodicity analysis of unevenly collected data is a relevant issue in several scientific fields. In astrophysics, for example, we have to find the fundamental period of light or radial velocity curves, which are unevenly sampled observations of stars. Classical spectral analysis methods are unsatisfactory for this problem. In this paper we present a neural-network-based estimator system which performs frequency extraction well on unevenly sampled signals. It uses an unsupervised Hebbian nonlinear neural algorithm to extract, from the interpolated signal, the principal components which, in turn, are used by the MUSIC frequency estimator algorithm to extract the frequencies. The neural network is tolerant to noise and also works well with few points in the sequence. We benchmark the system on synthetic and real signals against the periodogram and the Cramer-Rao lower bound. This work was partially supported by IIASS, by MURST (40%), and by the Italian Space Agency.

  4. Adaptive Formation Control of Electrically Driven Nonholonomic Mobile Robots With Limited Information.

    PubMed

    Bong Seok Park; Jin Bae Park; Yoon Ho Choi

    2011-08-01

    We present a leader-follower-based adaptive formation control method for electrically driven nonholonomic mobile robots with limited information. First, an adaptive observer is developed under the condition that velocity measurements are not available. With the proposed adaptive observer, the formation control part is designed to achieve the desired formation and guarantee collision avoidance. In addition, a neural network is employed to compensate for actuator saturation, and the projection algorithm is used to estimate the velocity information of the leader. It is shown, using Lyapunov theory, that all errors of the closed-loop system are uniformly ultimately bounded. Simulation results are presented to illustrate the performance of the proposed control system.

  5. A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.

    PubMed

    Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W

    2009-03-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.

  6. A Diffusion Tensor Imaging Tractography Algorithm Based on Navier-Stokes Fluid Mechanics

    PubMed Central

    Hageman, Nathan S.; Toga, Arthur W.; Narr, Katherine; Shattuck, David W.

    2009-01-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color (DEC) images of the DTI dataset. PMID:19244007

  7. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.

  8. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    PubMed

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with a certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to include acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. To demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test were carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.

  9. iss055e008318

    NASA Image and Video Library

    2018-04-02

    iss055e008318 (April 2, 2018) --- Expedition 55 Flight Engineer Drew Feustel works inside the Japanese Kibo laboratory module with tiny internal satellites known as SPHERES, or Synchronized Position Hold, Engage, Reorient, Experimental Satellites. Feustel was operating the SPHERES for the Smoothing-Based Relative Navigation (SmoothNav) experiment which is developing an algorithm to obtain the most probable estimate of the relative positions and velocities between all spacecraft using all available sensor information, including past measurements.

  10. An adaptive Bayesian inversion for upper-mantle structure using surface waves and scattered body waves

    NASA Astrophysics Data System (ADS)

    Eilon, Zachary; Fischer, Karen M.; Dalton, Colleen A.

    2018-07-01

    We present a methodology for 1-D imaging of upper-mantle structure using a Bayesian approach that incorporates a novel combination of seismic data types and an adaptive parametrization based on piecewise discontinuous splines. Our inversion algorithm lays the groundwork for improved seismic velocity models of the lithosphere and asthenosphere by harnessing the recent expansion of large seismic arrays and computational power alongside sophisticated data analysis. Careful processing of P- and S-wave arrivals isolates converted phases generated at velocity gradients between the mid-crust and 300 km depth. This data is allied with ambient noise and earthquake Rayleigh wave phase velocities to obtain detailed VS and VP velocity models. Synthetic tests demonstrate that converted phases are necessary to accurately constrain velocity gradients, and S-p phases are particularly important for resolving mantle structure, while surface waves are necessary for capturing absolute velocities. We apply the method to several stations in the northwest and north-central United States, finding that the imaged structure improves upon existing models by sharpening the vertical resolution of absolute velocity profiles, offering robust uncertainty estimates, and revealing mid-lithospheric velocity gradients indicative of thermochemical cratonic layering. This flexible method holds promise for increasingly detailed understanding of the upper mantle.

  11. An adaptive Bayesian inversion for upper mantle structure using surface waves and scattered body waves

    NASA Astrophysics Data System (ADS)

    Eilon, Zachary; Fischer, Karen M.; Dalton, Colleen A.

    2018-04-01

    We present a methodology for 1-D imaging of upper mantle structure using a Bayesian approach that incorporates a novel combination of seismic data types and an adaptive parameterisation based on piecewise discontinuous splines. Our inversion algorithm lays the groundwork for improved seismic velocity models of the lithosphere and asthenosphere by harnessing the recent expansion of large seismic arrays and computational power alongside sophisticated data analysis. Careful processing of P- and S-wave arrivals isolates converted phases generated at velocity gradients between the mid-crust and 300 km depth. This data is allied with ambient noise and earthquake Rayleigh wave phase velocities to obtain detailed VS and VP velocity models. Synthetic tests demonstrate that converted phases are necessary to accurately constrain velocity gradients, and S-p phases are particularly important for resolving mantle structure, while surface waves are necessary for capturing absolute velocities. We apply the method to several stations in the northwest and north-central United States, finding that the imaged structure improves upon existing models by sharpening the vertical resolution of absolute velocity profiles, offering robust uncertainty estimates, and revealing mid-lithospheric velocity gradients indicative of thermochemical cratonic layering. This flexible method holds promise for increasingly detailed understanding of the upper mantle.

  12. Experimental investigation of the velocity field in buoyant diffusion flames using PIV and TPIV algorithm

    Treesearch

    L. Sun; X. Zhou; S.M. Mahalingam; D.R. Weise

    2005-01-01

    We investigated a temporally and spatially resolved 2-D velocity field above a burning circular pan of alcohol using particle image velocimetry (PIV). The results obtained from PIV were used to assess a thermal particle image velocimetry (TPIV) algorithm previously developed to approximate the velocity field using the temperature field, simultaneously...

  13. Error Analysis System for Spacecraft Navigation Using the Global Positioning System (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, S. H.; Hart, R. C.; Hartman, K. R.; Tomcsik, T. L.; Searl, J. E.; Bernstein, A.

    1997-01-01

    The Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) is currently developing improved space-navigation filtering algorithms to use the Global Positioning System (GPS) for autonomous real-time onboard orbit determination. In connection with a GPS technology demonstration on the Small Satellite Technology Initiative (SSTI)/Lewis spacecraft, FDD analysts and programmers have teamed with the GSFC Guidance, Navigation, and Control Branch to develop the GPS Enhanced Orbit Determination Experiment (GEODE) system. The GEODE system consists of a Kalman filter operating as a navigation tool for estimating the position, velocity, and additional states required to accurately navigate the orbiting Lewis spacecraft by using astrodynamic modeling and GPS measurements from the receiver. A parallel effort at the FDD is the development of a GPS Error Analysis System (GEAS) that will be used to analyze and improve navigation filtering algorithms during development phases and during in-flight calibration. For GEAS, the Kalman filter theory is extended to estimate the errors in position, velocity, and other error states of interest. The estimation of errors in physical variables at regular intervals will allow the time, cause, and effect of navigation system weaknesses to be identified. In addition, by modeling a sufficient set of navigation system errors, a system failure that causes an observed error anomaly can be traced and accounted for. The GEAS software is formulated using Object Oriented Design (OOD) techniques implemented in the C++ programming language on a Sun SPARC workstation. Phase 1 of this effort is the development of a basic system to be used to evaluate navigation algorithms implemented in the GEODE system. This paper presents the GEAS mathematical methodology, systems and operations concepts, and software design and implementation.
Results from the use of the basic system to evaluate navigation algorithms implemented on GEODE are also discussed. In addition, recommendations for generalization of GEAS functions and for new techniques to optimize the accuracy and control of the GPS autonomous onboard navigation are presented.

  14. Crustal structure beneath two seismic stations in the Sunda-Banda arc transition zone derived from receiver function analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syuhada, E-mail: hadda9@gmail.com; Research Centre for Physics - Indonesian Institute of Sciences; Hananto, Nugroho D.

    2015-04-24

    We analyzed receiver functions to estimate the crustal thickness and velocity structure beneath two stations of the Geofon (GE) network in the Sunda-Banda arc transition zone. The stations are located in two different tectonic regimes: Sumbawa Island (station PLAI) and Timor Island (station SOEI), representing oceanic and continental characters, respectively. We analyzed teleseismic records of 80 earthquakes to calculate receiver functions using the time-domain iterative deconvolution technique. We employed a 2D grid search (H-κ) algorithm based on the Moho interaction phases to estimate crustal thickness and Vp/Vs ratio. We also derived the S-wave velocity variation with depth beneath both stations by inverting the receiver functions. We found that beneath station PLAI the crustal thickness is about 27.8 km, with a Vp/Vs ratio of 2.01. As station SOEI is covered by very thick low-velocity sediment, causing an unstable inversion solution, we modified the initial velocity model by adding the sediment thickness estimated from the high-frequency content of the receiver functions in the H-κ stacking process. We obtained a crustal thickness of about 37 km with a Vp/Vs ratio of 2.2 beneath station SOEI. We suggest that the high Vp/Vs at station PLAI may indicate the presence of fluid ascending from the subducted plate to the volcanic arc, whereas the high Vp/Vs at station SOEI could be due to the presence of sediment and a mafic-rich composition in the upper crust, possibly related to a serpentinization process in the lower crust. We also suggest that the differences in velocity models and crustal thicknesses between stations PLAI and SOEI are consistent with their contrasting tectonic environments.
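
    A hypothetical sketch of the H-κ grid search: candidate crustal thickness H and Vp/Vs ratio κ are scanned, and the pair whose predicted Ps, PpPs, and PpSs+PsPs delay times best match the observed ones is selected. Real H-κ stacking sums receiver-function amplitudes at the predicted times; here synthetic delay times are matched directly, and the Vp and ray-parameter values are assumed.

```python
# H-kappa grid search over Moho depth H (km) and Vp/Vs ratio kappa, matching
# the three Moho-interaction phase delay times used in H-kappa stacking.
import math

Vp, p = 6.5, 0.06           # crustal P velocity (km/s), ray parameter (s/km)

def delays(H, kappa):
    eta_p = math.sqrt(1.0 / Vp ** 2 - p ** 2)
    eta_s = math.sqrt(kappa ** 2 / Vp ** 2 - p ** 2)   # since 1/Vs = kappa/Vp
    return (H * (eta_s - eta_p),        # Ps
            H * (eta_s + eta_p),        # PpPs
            2.0 * H * eta_s)            # PpSs + PsPs

observed = delays(37.0, 2.2)            # synthetic "data" (true H, kappa)

best = min(
    ((H, k) for H in [20 + 0.5 * i for i in range(61)]
            for k in [1.6 + 0.02 * j for j in range(41)]),
    key=lambda hk: sum((o - c) ** 2 for o, c in zip(observed, delays(*hk))),
)
print(round(best[0], 1), round(best[1], 2))  # 37.0 2.2
```

    Using all three phases is what breaks the trade-off between H and κ; a single Ps delay alone cannot resolve them separately.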

  15. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART, and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
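
    The MART family can be illustrated with a toy multiplicative reconstruction: voxel intensities are rescaled along each ray until the volume reproduces the recorded projections. The rays, unit weights, and relaxation factor below are simplifications invented for illustration, not the tomo-PIV implementation.

```python
# Toy MART iteration: each ray's measured intensity is redistributed
# multiplicatively over the voxels it crosses.
rays = [(0, 1), (1, 2)]      # voxel indices each ray passes through
I = [1.0, 2.0]               # recorded projection intensities
E = [1.0, 1.0, 1.0]          # initial guess for voxel intensities
mu = 1.0                     # relaxation factor (assumed)

for _ in range(50):
    for ray, meas in zip(rays, I):
        s = sum(E[j] for j in ray)        # projection of the current guess
        factor = (meas / s) ** mu
        for j in ray:
            E[j] *= factor                # multiplicative correction

residuals = [sum(E[j] for j in ray) - meas for ray, meas in zip(rays, I)]
print(max(abs(r) for r in residuals) < 1e-9)  # True: projections reproduced
```

    The multiplicative update keeps intensities non-negative by construction, which is one reason MART-type schemes suit particle volumes; BIMART and SMART are block and simultaneous variants of this same correction.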

  16. Safe Maritime Navigation with COLREGS Using Velocity Obstacles

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki; Wolf, Michael T.; Zarzhitsky, Dimitri; Huntsberger, Terrance L.

    2011-01-01

    This paper presents a motion planning algorithm for Unmanned Surface Vehicles (USVs) to navigate safely in dynamic, cluttered environments. The proposed algorithm not only addresses Hazard Avoidance (HA) for stationary and moving hazards but also applies the International Regulations for Preventing Collisions at Sea (known as COLREGs). The COLREG rules specify, for example, which vessel is responsible for giving way to the other and to which side of the "stand-on" vessel to maneuver. Three primary COLREG rules were considered in this paper: crossing, overtaking, and head-on situations. For USVs to be safely deployed in environments with other traffic boats, it is imperative that the USV's navigation algorithm obey COLREGs. Note also that if other boats disregard their responsibility under COLREGs, the USV will still apply its HA algorithms to avoid a collision. The proposed approach is based on Velocity Obstacles, which represent each hazard as a cone-shaped region in velocity space. Because Velocity Obstacles also specify which side of the obstacle the vehicle will pass during the avoidance maneuver, COLREGs are encoded in the velocity space in a natural way. The algorithm is demonstrated via both simulation and on-water tests.
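
    The Velocity Obstacle construction can be illustrated with a simple planar collision-cone test: the combined radius of vehicle and hazard subtends a cone around the line of sight, and a relative velocity inside that cone implies eventual collision. A minimal geometric sketch (the paper's COLREGs logic then chooses which side of the cone to pass):

```python
import math

def in_velocity_obstacle(p_own, v_own, p_obs, v_obs, r_combined):
    """True if the current velocities lead to a collision when both
    vehicles maintain them (relative velocity inside the VO cone)."""
    rx, ry = p_obs[0] - p_own[0], p_obs[1] - p_own[1]   # relative position
    vx, vy = v_own[0] - v_obs[0], v_own[1] - v_obs[1]   # relative velocity
    dist = math.hypot(rx, ry)
    if dist <= r_combined:
        return True                                      # already in collision
    # Cone half-angle around the line of sight to the obstacle
    half_angle = math.asin(r_combined / dist)
    angle_to_obs = math.atan2(ry, rx)
    angle_v = math.atan2(vy, vx)
    dangle = abs((angle_v - angle_to_obs + math.pi) % (2 * math.pi) - math.pi)
    # Inside the cone and closing in => future collision
    return dangle < half_angle and (vx * rx + vy * ry) > 0

head_on = in_velocity_obstacle((0, 0), (2, 0), (10, 0), (-2, 0), 1.0)  # collision course
diverge = in_velocity_obstacle((0, 0), (0, 2), (10, 0), (0, 2), 1.0)   # parallel, safe
```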

  17. Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.

    2004-01-01

    In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.

  18. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters

    PubMed Central

    Park, Chan Gook

    2018-01-01

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539
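
    The ZUPT mechanism above relies on detecting stance phases of the foot. A common detector flags samples where the accelerometer magnitude stays near gravity with low variance over a sliding window; the thresholds and window length below are hypothetical illustrations, not the paper's detector:

```python
import numpy as np

def zupt_detect(acc, g=9.81, win=5, acc_var_thresh=0.05, mag_thresh=0.3):
    """Flag stance (zero-velocity) samples from a 3-axis accelerometer:
    low variance and magnitude near gravity inside a sliding window."""
    mag = np.linalg.norm(acc, axis=1)
    flags = np.zeros(len(mag), dtype=bool)
    for i in range(len(mag)):
        w = mag[max(0, i - win):i + win + 1]
        flags[i] = (np.var(w) < acc_var_thresh) and (abs(w.mean() - g) < mag_thresh)
    return flags

# Synthetic gait: stance (sensor at rest) followed by swing (large accelerations)
rest = np.tile([0.0, 0.0, 9.81], (50, 1))
swing = np.tile([3.0, 0.0, 12.0], (50, 1))
flags = zupt_detect(np.vstack([rest, swing]))
```

    During the flagged intervals, the INS-EKF-ZUPT stage feeds a zero-velocity pseudo-measurement to the filter, which is what bounds the drift of the foot-mounted IMU.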

  19. Statistics based sampling for controller and estimator design

    NASA Astrophysics Data System (ADS)

    Tenne, Dirk

    The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation addresses three topics: nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, existing techniques that propagate the statistics of the state have been extended to achieve higher-order accuracy. The so-called unscented transformation has been extended to capture higher-order moments. Furthermore, higher-order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. The unscented transformation has also been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm is developed in a local coordinate framework, which is amenable to extending the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by means of Covariance Intersection. The combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points calculated by the unscented transformation. This set of points is used to design robust controllers that minimize a statistical performance measure of the plant, consisting of a combination of the mean and variance, over the domain of uncertainty. The proposed technique is illustrated on three benchmark problems.
The first relates to the design of prefilters for linear and nonlinear spring-mass-dashpot systems, the second applies a feedback controller to a hovering helicopter, and the last applies the statistical robust controller design to a concurrent feed-forward/feedback controller structure for a high-speed, low-tension tape drive.
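
    The unscented transformation at the core of this work propagates a mean and covariance through a nonlinearity via deterministic sigma points. A basic second-order sketch, applied to the standard polar-to-Cartesian benchmark (the dissertation's extension to higher-order moments is not shown):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through nonlinear f using 2n+1 sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)        # scaled matrix square root
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean) for wi, yi in zip(w, y))
    return y_mean, y_cov

# Polar-to-Cartesian conversion, a standard UT benchmark
mean = np.array([1.0, np.pi / 4])                    # range, bearing
cov = np.diag([0.01, 0.01])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
y_mean, y_cov = unscented_transform(mean, cov, f)
```

    Note the transformed mean falls slightly inside the unit circle (about 0.704 rather than 0.707 per axis): the sigma points capture the curvature bias that a first-order linearization would miss.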

  20. On the Use of a Range Trigger for the Mars Science Laboratory Entry Descent and Landing

    NASA Technical Reports Server (NTRS)

    Way, David W.

    2011-01-01

    In 2012, during the Entry, Descent, and Landing (EDL) of the Mars Science Laboratory (MSL) entry vehicle, a 21.5 m Viking-heritage, Disk-Gap-Band, supersonic parachute will be deployed at approximately Mach 2. The baseline algorithm for commanding this parachute deployment is a navigated planet-relative velocity trigger. This paper compares the performance of an alternative range-to-go trigger (sometimes referred to as a "Smart Chute"), which can significantly reduce the landing footprint size. Numerical Monte Carlo results, predicted by the POST2 MSL End-to-End EDL simulation, are corroborated and explained by applying propagation-of-uncertainty methods to develop an analytic estimate for the standard deviation of Mach number. A negative correlation is shown to exist between the standard deviations of wind velocity and planet-relative velocity at parachute deploy, which mitigates the Mach number rise in the case of the range trigger.

  1. Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2013-03-01

    Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical-flow based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts by computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine [7] features were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10], and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm.
The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.

  2. Rapid Detection of Small Movements with GNSS Doppler Observables

    NASA Astrophysics Data System (ADS)

    Hohensinn, Roland; Geiger, Alain

    2017-04-01

    High-alpine terrain reacts very sensitively to varying environmental conditions. As an example, increasing temperatures cause thawing of permafrost areas. This, in turn, increases the threat posed by natural hazards such as debris flows (e.g. from rock glaciers) or rockfalls. The Institute of Geodesy and Photogrammetry is contributing to alpine mass-movement monitoring systems in different project areas in the Swiss Alps. A main focus lies on providing geodetic mass-movement information derived from GNSS static solutions on a daily and a sub-daily basis, obtained with low-cost and autonomous GNSS stations. Another focus is on rapidly providing reliable geodetic information in real time, e.g. for integration in early warning systems. One way to achieve this is the estimation of accurate station velocities from observations of range rates, which can be obtained as Doppler observables from time derivatives of carrier phase measurements. The key to this method lies in precise modeling of the prominent effects contributing to the observed range rates: satellite velocity, atmospheric delay rates, and relativistic effects. A suitable observation model is then devised, which accounts for these predictions. The observation model, combined with a simple kinematic movement model, forms the basis for the parameter estimation. Based on the estimated station velocities, movements are then detected using a statistical test. To improve the reliability of the estimated parameters, an on-line quality control procedure is also employed. We will present the basic algorithms as well as results from first tests, which were carried out with a low-cost GPS L1 phase receiver. With a u-blox module and a sampling rate of 5 Hz, accuracies at the mm/s level can be obtained and velocities down to 1 cm/s can be detected. Reliable and accurate station velocities and movement information can be provided within seconds.
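
    Once the range rates are corrected for satellite velocity, atmospheric delay rates and relativistic effects, the station velocity follows from a small least-squares problem: each corrected range rate is the projection of the station velocity onto the line-of-sight unit vector, plus a common receiver clock drift. A minimal sketch with synthetic geometry (the observation model in the paper is richer):

```python
import numpy as np

def velocity_from_range_rates(unit_vectors, range_rates):
    """Least-squares station velocity and receiver clock drift from
    corrected range rates: rho_dot_i = u_i . v + drift."""
    A = np.hstack([unit_vectors, np.ones((len(range_rates), 1))])
    sol, *_ = np.linalg.lstsq(A, range_rates, rcond=None)
    return sol[:3], sol[3]

# Synthetic geometry: 5 line-of-sight unit vectors to satellites
raw = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0], [1.0, -1.0, 0]])
u = raw / np.linalg.norm(raw, axis=1, keepdims=True)
v_true, drift = np.array([0.01, 0.0, -0.005]), 0.002    # m/s
rates = u @ v_true + drift
v_est, d_est = velocity_from_range_rates(u, rates)
```

    With four unknowns (three velocity components plus drift), at least four satellites are needed; redundancy beyond that feeds the statistical test and quality control described above.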

  3. Velocity of climate change algorithms for guiding conservation and management.

    PubMed

    Hamann, Andreas; Roberts, David R; Barber, Quinn E; Carroll, Carlos; Nielsen, Scott E

    2015-02-01

    The velocity of climate change is an elegant analytical concept that can be used to evaluate the exposure of organisms to climate change. In essence, one divides the rate of climate change by the rate of spatial climate variability to obtain a speed at which species must migrate over the surface of the earth to maintain constant climate conditions. However, to apply the algorithm for conservation and management purposes, additional information is needed to improve realism at local scales. For example, destination information is needed to ensure that vectors describing speed and direction of required migration do not point toward a climatic cul-de-sac by pointing beyond mountain tops. Here, we present an analytical approach that conforms to standard velocity algorithms if climate equivalents are nearby. Otherwise, the algorithm extends the search for climate refugia, which can be expanded to search for multivariate climate matches. With source and destination information available, forward and backward velocities can be calculated allowing useful inferences about conservation of species (present-to-future velocities) and management of species populations (future-to-present velocities). © 2014 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
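
    The core ratio can be computed per grid cell by dividing the temporal climate trend by the magnitude of the local spatial climate gradient. A minimal sketch on a synthetic temperature grid (this omits the paper's destination-searching extension for climatic cul-de-sacs):

```python
import numpy as np

def climate_velocity(temp_now, temp_future, years, cell_km):
    """Velocity of climate change (km/yr) per grid cell: temporal trend
    divided by the spatial climate gradient magnitude."""
    trend = (temp_future - temp_now) / years              # degC / yr
    gy, gx = np.gradient(temp_now, cell_km)               # degC / km
    spatial = np.hypot(gx, gy)
    return np.abs(trend) / np.maximum(spatial, 1e-9)      # guard flat cells

# Uniform 0.03 degC/yr warming over a 0.005 degC/km north-south gradient:
# species must move 0.03 / 0.005 = 6 km/yr to keep a constant climate.
y = np.linspace(0, 100, 21)                               # km, 5 km cells
temp = np.tile(15.0 + 0.005 * y[:, None], (1, 21))
vel = climate_velocity(temp, temp + 3.0, years=100.0, cell_km=5.0)
```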

  4. Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano

    NASA Astrophysics Data System (ADS)

    Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.

    2012-04-01

    An automatic procedure for locating earthquakes in quasi-real time must provide a good estimate of earthquake locations within a few seconds after the event is first detected, and is strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV and the quasi-real time earthquake locations are performed using an automatic-picking algorithm based on short-term-average to long-term-average ratios (STA/LTA), calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, and the location algorithm Hypoellipse with a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real time earthquake locations. Because the automatic data processing may be affected by outliers (wrong picks), traditional earthquake location techniques based on a least-squares misfit function (L2 norm) often yield unstable and unreliable solutions. Moreover, on Mt. Etna, the 1D model is often unable to represent the complex structure of the volcano (in particular the strong lateral heterogeneities), whereas the increasing accuracy of 3D velocity models at Mt. Etna during recent years allows their use today in routine earthquake locations. Therefore, we selected as reference locations all events that occurred on Mt. Etna in the last year (2011) and were automatically detected and located by means of the Hypoellipse code. Using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm using the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data.
Subsequently, using a probabilistic nonlinear method (NonLinLoc, Lomax, 2001) and the 3D velocity model derived from the one developed by Patanè et al. (2006), integrated with that obtained by Chiarabba et al. (2004), we obtained the best possible constraint on the location of the foci, expressed as a probability density function (PDF) for the hypocenter location in 3D space. As expected, the results, compared with the reference ones, show that the NonLinLoc software (applied to a 3D velocity model) is more reliable than the Hypoellipse code (applied to layered 1D velocity models), leading to more reliable automatic locations even when outliers are present.
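
    The robustness of the EDT approach comes from using differences of pick pairs: the unknown origin time cancels, and a single bad pick contaminates only the pairs it belongs to rather than the whole solution. A toy grid-search sketch with a homogeneous velocity model (NonLinLoc itself uses a full 3D model and a probabilistic likelihood rather than this L1 misfit):

```python
import numpy as np
from itertools import combinations

def edt_locate(stations, picks, v, grid):
    """Grid search minimizing an Equal Differential Time (EDT) misfit:
    each term compares an observed and predicted pick-pair difference."""
    best, best_xy = np.inf, None
    for x, y in grid:
        tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
        misfit = sum(abs((picks[i] - picks[j]) - (tt[i] - tt[j]))
                     for i, j in combinations(range(len(picks)), 2))
        if misfit < best:
            best, best_xy = misfit, (x, y)
    return best_xy

sta = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])  # km
src = np.array([8.0, 12.0])
v = 5.0                                     # km/s, homogeneous model
t0 = 3.0                                    # unknown origin time cancels in EDT
picks = t0 + np.hypot(sta[:, 0] - src[0], sta[:, 1] - src[1]) / v
grid = [(x, y) for x in np.arange(0, 21, 1.0) for y in np.arange(0, 21, 1.0)]
loc = edt_locate(sta, picks, v, grid)
```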

  5. Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing

    NASA Astrophysics Data System (ADS)

    Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.

    2007-05-01

    The rich oil reserves of the Gulf of Mexico are buried in deep and ultra-deep waters up to 30,000 feet from the surface. The Minerals Management Service (MMS), the federal agency in the U.S. Department of the Interior that manages the nation's oil, natural gas and other mineral resources on the outer continental shelf in federal offshore waters, estimates that the Gulf of Mexico holds 37 billion barrels of "undiscovered, conventionally recoverable" oil, which, at $50/barrel, would be worth approximately $1.85 trillion. These reserves are very difficult to find and reach due to the extreme depths. Technological advances in seismic imaging represent an opportunity to overcome this obstacle by providing more accurate models of the subsurface. Among these technological advances, Reverse Time Migration (RTM) yields the best possible images. RTM is based on the solution of the two-way acoustic wave equation. The technique relies on the velocity model to image turning waves, which are particularly important to unravel subsalt reservoirs and delineate salt flanks, a natural trap for oil and gas. Because it relies on an accurate velocity model, RTM opens a new frontier in designing better velocity estimation algorithms. RTM has been widely recognized as the next chapter in seismic exploration, as it can overcome the limitations of current migration methods in imaging the complex geologic structures that exist in the Gulf of Mexico. The chief impediment to large-scale, routine deployment of RTM has been a lack of sufficient computer power. RTM needs thirty times the computing power used in exploration today to be commercially viable and widely usable. Therefore, advancing seismic imaging to the next level of precision poses a multi-disciplinary challenge.
To overcome these challenges, the Kaleidoscope project, a partnership between Repsol YPF, Barcelona Supercomputing Center, 3DGeo Inc., and IBM brings together the necessary components of modeling, algorithms and the uniquely powerful computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and to help solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This unique integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared to conventional solutions running on standard Linux Clusters.

  6. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  7. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  8. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. 
We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.

  9. Fast two-position initial alignment for SINS using velocity plus angular rate measurements

    NASA Astrophysics Data System (ADS)

    Chang, Guobin

    2015-10-01

    An improved two-position initial alignment model for a strapdown inertial navigation system is proposed. In addition to velocity, angular rates are incorporated as measurements. The measurement equations for all three channels are derived in both the navigation and body frames, and the latter is found to be preferable. The cross-correlation between the process and measurement noises is analyzed and addressed in the Kalman filter. Incorporating the angular rates, without introducing an additional device or external signal, speeds up the convergence of the attitude estimates, especially the heading. In the simulation study, different algorithms are tested with different initial errors, and the advantages of the proposed method over the conventional one are validated by the simulation results.

  10. Development of an optimal automatic control law and filter algorithm for steep glideslope capture and glideslope tracking

    NASA Technical Reports Server (NTRS)

    Halyo, N.

    1976-01-01

    A digital automatic control law to capture a steep glideslope and track the glideslope to a specified altitude is developed for the longitudinal/vertical dynamics of a CTOL aircraft using modern estimation and control techniques. The control law uses a constant-gain Kalman filter to process guidance information from the microwave landing system and acceleration data from body-mounted accelerometers. The filter outputs navigation data and wind velocity estimates, which are used in controlling the aircraft. Results from a digital simulation of the aircraft dynamics and the control law are presented for various wind conditions.
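
    A constant-gain Kalman filter avoids propagating the covariance online: the gain is fixed at its steady-state value, reducing the filter to a cheap fixed recursion. A scalar position/velocity sketch in the alpha-beta form, with illustrative gains rather than those of the reported control law:

```python
import numpy as np

def constant_gain_filter(z, K, dt):
    """Steady-state (constant-gain) filter: predict with a constant-velocity
    model, correct both states with fixed gains K from the position residual."""
    x = np.array([z[0], 0.0])                    # [position, velocity]
    F = np.array([[1.0, dt], [0.0, 1.0]])
    out = []
    for zi in z:
        x = F @ x                                # predict
        x = x + K * (zi - x[0])                  # correct with fixed gains
        out.append(x.copy())
    return np.array(out)

# Noise-free ramp: the filter locks on to both the position and the slope
dt = 0.1
t = np.arange(0, 20, dt)
z = 2.0 + 0.5 * t
est = constant_gain_filter(z, K=np.array([0.5, 0.8]), dt=dt)
```

    With the gains fixed, each update costs a handful of multiply-adds, which is why constant-gain filters suited the flight computers of the era.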

  11. Cerebral palsy characterization by estimating ocular motion

    NASA Astrophysics Data System (ADS)

    González, Jully; Atehortúa, Angélica; Moncayo, Ricardo; Romero, Eduardo

    2017-11-01

    Cerebral palsy (CP) comprises a large group of motion and posture disorders caused during fetal or infant brain development. Sensory impairment is commonly found in children with CP; between 40 and 75 percent present some form of vision problem or disability. An automatic characterization of cerebral palsy is herein presented by estimating ocular motion during a gaze-pursuit task. Specifically, after automatically detecting the eye location, an optical flow algorithm tracks the eye motion following a pre-established visual assignment. Subsequently, the optical flow trajectories are characterized in the velocity-acceleration phase plane. Differences are quantified in a small set of patients aged four to ten years.

  12. Applications of the JARS method to study levee sites in southern Texas and southern New Mexico

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Dunbar, J.B.

    2007-01-01

    We apply the joint analysis of refractions with surface waves (JARS) method to several sites and compare its results to traditional refraction-tomography methods, in an effort to find a more realistic solution to the inverse refraction-traveltime problem. The JARS method uses a reference model, derived from surface-wave shear-wave velocity estimates, as a constraint. In all of the cases, the JARS estimates appear more realistic than those from the conventional refraction-tomography methods. As a result, we consider the JARS algorithm the preferred method for finding solutions to inverse refraction-traveltime problems. © 2007 Society of Exploration Geophysicists.

  13. A numerical scheme to calculate temperature and salinity dependent air-water transfer velocities for any gas

    NASA Astrophysics Data System (ADS)

    Johnson, M. T.

    2010-10-01

    The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers on either side of the interface with respect to the gas of interest). Traditionally, the transfer velocity has been estimated from empirical relationships with wind speed and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme that allows the estimation of transfer velocity for any gas as a function of wind speed, temperature and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment, which is available in the supplementary online material accompanying this paper, along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
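
    The traditional wind-speed approach mentioned above fits in a few lines: a quadratic wind-speed relation gives the transfer velocity for a reference gas (CO2 at 20 °C, Sc = 660), which is rescaled to any other gas by the square root of the Schmidt-number ratio. The 0.251 coefficient below is a commonly used Wanninkhof-type value and an assumption here, not taken from this paper (which builds a more general scheme in R):

```python
def transfer_velocity(u10, sc, k660_coeff=0.251):
    """Air-sea gas transfer velocity (cm/hr): quadratic wind-speed
    parameterization for the Sc = 660 reference gas, rescaled to the
    gas of interest via the Schmidt number."""
    k660 = k660_coeff * u10**2            # reference: CO2 at 20 degC (Sc = 660)
    return k660 * (sc / 660.0) ** -0.5    # Schmidt-number scaling

k_co2 = transfer_velocity(u10=7.0, sc=660.0)     # reference gas at 7 m/s wind
k_dms = transfer_velocity(u10=7.0, sc=1000.0)    # larger Sc => slower transfer
```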

  14. Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter.

    PubMed

    Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang

    2017-01-14

    In airborne MEMS SINS transfer alignment, the error of the MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large errors and poor convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the "Velocity and Attitude" matching method. Then the detailed algorithm procedure of the AIKF and its recurrence formulas are presented. The performance and computational cost of the AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and rapidity of the AIKF algorithm by comparing it with the KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the biases of the gyroscope and the accelerometer, which can meet the accuracy and rapidity requirements of transfer alignment.

  15. Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter

    PubMed Central

    Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang

    2017-01-01

    In airborne MEMS SINS transfer alignment, the error of the MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large errors and poor convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the “Velocity and Attitude” matching method. Then the detailed algorithm procedure of the AIKF and its recurrence formulas are presented. The performance and computational cost of the AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and rapidity of the AIKF algorithm by comparing it with the KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the biases of the gyroscope and the accelerometer, which can meet the accuracy and rapidity requirements of transfer alignment. PMID:28098829

  16. Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-extended Darcy model

    NASA Astrophysics Data System (ADS)

    Markowich, Peter A.; Titi, Edriss S.; Trabelsi, Saber

    2016-04-01

    In this paper we introduce and analyze an algorithm for continuous data assimilation for a three-dimensional Brinkman-Forchheimer-extended Darcy (3D BFeD) model of porous media. This model is believed to be accurate when the flow velocity is too large for Darcy’s law to be valid, and additionally the porosity is not too small. The algorithm is inspired by ideas developed for designing finite-parameters feedback control for dissipative systems. It aims to obtain improved estimates of the state of the physical system by incorporating deterministic or noisy measurements and observations. Specifically, the algorithm involves a feedback control that nudges the large scales of the approximate solution toward those of the reference solution associated with the spatial measurements. In the first part of the paper, we present a few results of existence and uniqueness of weak and strong solutions of the 3D BFeD system. The second part is devoted to the convergence analysis of the data assimilation algorithm.
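
    The nudging mechanism described above — a feedback term relaxing the approximate solution toward observations of the reference — can be illustrated on a scalar ODE (a toy sketch with an illustrative gain mu, not the 3D BFeD analysis):

```python
import math

def nudged_assimilation(mu=5.0, dt=0.01, steps=2000):
    """Nudge a badly initialized approximate solution toward observations
    of a reference solution of dx/dt = -x + sin(t)."""
    def f(x, t):
        return -x + math.sin(t)
    x_ref, x_da = 2.0, -3.0     # reference and assimilated states
    for n in range(steps):
        t = n * dt
        obs = x_ref                         # observation of the reference
        x_ref += dt * f(x_ref, t)
        # feedback term -mu*(x_da - obs) nudges toward the observation
        x_da += dt * (f(x_da, t) - mu * (x_da - obs))
    return abs(x_ref - x_da)
```

    Because both trajectories use the same Euler step and f is linear in x, the synchronization error contracts by the factor (1 − Δt(1 + μ)) each step, so the two solutions agree to machine precision after a few time units.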

  17. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. The initial solution and objective-function curvature information that gradient methods need are not required in our approach. Moreover, globally optimal solutions can be obtained efficiently. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculation required by the method designed in this study is much less than that of previous grid search methods.
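
    The search strategy can be sketched with a minimal real-coded genetic algorithm — tournament selection, blend crossover, Gaussian mutation — run on a toy one-parameter misfit (the actual method searches over double-couple mechanism parameters such as strike, dip and rake; everything below is illustrative):

```python
import random

def genetic_search(fitness, bounds, pop_size=40, gens=60, seed=1):
    """Minimal real-coded genetic algorithm: tournament selection,
    blend crossover, Gaussian mutation. Maximizes fitness over bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            w = rng.random()
            child = w * p1 + (1 - w) * p2            # blend crossover
            child += rng.gauss(0, 0.02 * (hi - lo))  # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = children
    return max(pop, key=fitness)

# Toy "polarity misfit" with a single global optimum at strike = 40 degrees
best = genetic_search(lambda s: -(s - 40.0) ** 2, (0.0, 360.0))
```

    The population converges on the global optimum without any starting model or gradient information, which mirrors the advantages claimed over gradient methods above.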

  18. State and parameter estimation of spatiotemporally chaotic systems illustrated by an application to Rayleigh-Bénard convection.

    PubMed

    Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F

    2009-03-01

    Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.

  19. Snowfall Rate Retrieval using NPP ATMS Passive Microwave Measurements

    NASA Technical Reports Server (NTRS)

    Meng, Huan; Ferraro, Ralph; Kongoli, Cezar; Wang, Nai-Yu; Dong, Jun; Zavodsky, Bradley; Yan, Banghua; Zhao, Limin

    2014-01-01

    Passive microwave measurements at certain high frequencies are sensitive to the scattering effect of snow particles and can be utilized to retrieve snowfall properties. Some of the microwave sensors with snowfall-sensitive channels are the Advanced Microwave Sounding Unit (AMSU), the Microwave Humidity Sounder (MHS) and the Advanced Technology Microwave Sounder (ATMS). ATMS is the follow-on sensor to AMSU and MHS. Currently, an AMSU- and MHS-based land snowfall rate (SFR) product runs operationally at NOAA/NESDIS. Based on the AMSU/MHS SFR, an ATMS SFR algorithm has been developed recently. The algorithm performs retrieval in three steps: snowfall detection, retrieval of cloud properties, and estimation of snow particle terminal velocity and snowfall rate. The snowfall detection component utilizes principal component analysis and a logistic regression model. The model employs a combination of temperature and water vapor sounding channels to detect the scattering signal from falling snow and derive the probability of snowfall (Kongoli et al., 2014). In addition, a set of NWP model based filters is also employed to improve the accuracy of snowfall detection. Cloud properties are retrieved using an inversion method with an iteration algorithm and a two-stream radiative transfer model (Yan et al., 2008). A method developed by Heymsfield and Westbrook (2010) is adopted to calculate snow particle terminal velocity. Finally, snowfall rate is computed by numerically solving a complex integral. The ATMS SFR product is validated against radar and gauge snowfall data; the validation shows that the ATMS algorithm outperforms the AMSU/MHS SFR.
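
    The logistic-regression detection step maps sounding-channel predictors (here, principal-component scores) to a probability of snowfall. A schematic version (the weights, bias and features below are illustrative, not the operational coefficients of Kongoli et al.):

```python
import math

def snowfall_probability(features, weights, bias):
    """Logistic-regression detection: a linear combination of predictors
    passed through the logistic function gives a snowfall probability."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical principal-component scores and illustrative coefficients
p = snowfall_probability([1.2, -0.4], weights=[0.8, -1.5], bias=-0.3)
```

    A detection threshold on p (together with the NWP-based filters mentioned above) would then flag snowfall before the cloud-property retrieval runs.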

  20. Remote Evaluation of Rotational Velocity Using a Quadrant Photo-Detector and a DSC Algorithm

    PubMed Central

    Zeng, Xiangkai; Zhu, Zhixiong; Chen, Yang

    2016-01-01

    This paper presents an approach to remotely evaluate the rotational velocity of a measured object by using a quadrant photo-detector and a differential subtraction correlation (DSC) algorithm. The rotational velocity of a rotating object is determined by two temporal-delay numbers at the minima of two DSCs that are derived from the four output signals of the quadrant photo-detector, and the sign of the calculated rotational velocity directly represents the rotational direction. The DSC algorithm does not require any multiplication operations. Experimental calculations were performed to confirm the proposed evaluation method. The calculated rotational velocity, including its amplitude and direction, showed good agreement with the given one, with an amplitude error of ~0.3%, and the computation was over 1100 times more efficient than the traditional cross-correlation method for data numbers N > 4800. These confirmations show that rotational velocity can be evaluated remotely without any circular division disk and with far fewer error sources, making the method simple, accurate and effective for remotely evaluating rotational velocity. PMID:27120607
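
    The essence of a subtraction-based correlation is that the temporal delay is found at the minimum of a cost built from differences rather than products. A schematic stand-in using a sum of absolute differences (not the paper's exact DSC definition):

```python
import math

def sad_delay(a, b, max_lag):
    """Delay of b relative to a, located at the minimum of a
    subtraction-based cost (sum of absolute differences) -- a
    multiplication-free alternative to cross-correlation."""
    best = (float("inf"), 0)
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(a[i], b[i + lag]) for i in range(len(a))
                 if 0 <= i + lag < len(b)]
        cost = sum(abs(x - y) for x, y in pairs) / len(pairs)
        best = min(best, (cost, lag))    # keep the smallest cost
    return best[1]

# b is a copy of a delayed by 7 samples; the cost minimum recovers the lag
a = [math.sin(0.1 * n) for n in range(500)]
b = [math.sin(0.1 * (n - 7)) for n in range(500)]
delay = sad_delay(a, b, 20)   # → 7
```

    In the paper, two such delay estimates from the quadrant photo-detector outputs, with their signs, yield the rotational speed and direction.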

  1. P and S velocity structure of the crust and the upper mantle beneath central Java from local tomography inversion

    NASA Astrophysics Data System (ADS)

    Koulakov, I.; Bohm, M.; Asch, G.; Lühr, B.-G.; Manzanares, A.; Brotopuspito, K. S.; Fauzi, Pak; Purbawinata, M. A.; Puspito, N. T.; Ratdomopurbo, A.; Kopp, H.; Rabbel, W.; Shevkunova, E.

    2007-08-01

    Here we present the results of local source tomographic inversion beneath central Java. The data set was collected by a temporary seismic network. More than 100 stations were operated for almost half a year. About 13,000 P and S arrival times from 292 events were used to obtain three-dimensional (3-D) Vp, Vs, and Vp/Vs models of the crust and the mantle wedge beneath central Java. Source location and determination of the 3-D velocity models were performed simultaneously based on a new iterative tomographic algorithm, LOTOS-06. Final event locations clearly image the shape of the subduction zone beneath central Java. The dipping angle of the slab increases gradually from almost horizontal to about 70°. A double seismic zone is observed in the slab between 80 and 150 km depth. The most striking feature of the resulting P and S models is a pronounced low-velocity anomaly in the crust, just north of the volcanic arc (Merapi-Lawu anomaly (MLA)). An algorithm for estimation of the amplitude value, which is presented in the paper, shows that the difference between the fore arc and MLA velocities at a depth of 10 km reaches 30% and 36% in P and S models, respectively. The value of the Vp/Vs ratio inside the MLA is more than 1.9. This shows a probable high content of fluids and partial melts within the crust. In the upper mantle we observe an inclined low-velocity anomaly which links the cluster of seismicity at 100 km depth with MLA. This anomaly might reflect ascending paths of fluids released from the slab. The reliability of all these patterns was tested thoroughly.

  2. An Embedded Device for Real-Time Noninvasive Intracranial Pressure Estimation.

    PubMed

    Matthews, Jonathan M; Fanelli, Andrea; Heldt, Thomas

    2018-01-01

    The monitoring of intracranial pressure (ICP) is indicated for diagnosing and guiding therapy in many neurological conditions. Current monitoring methods, however, are highly invasive, limiting their use to the most critically ill patients only. Our goal is to develop and test an embedded device that performs all necessary mathematical operations in real-time for noninvasive ICP (nICP) estimation based on a previously developed model-based approach that uses cerebral blood flow velocity (CBFV) and arterial blood pressure (ABP) waveforms. The nICP estimation algorithm along with the required preprocessing steps were implemented on an NXP LPC4337 microcontroller unit (MCU). A prototype device using the MCU was also developed, complete with display, recording functionality, and peripheral interfaces for ABP and CBFV monitoring hardware. The device produces an estimate of mean ICP once per minute and performs the necessary computations in 410 ms, on average. Real-time nICP estimates differed from the original batch-mode MATLAB implementation of theestimation algorithm by 0.63 mmHg (root-mean-square error). We have demonstrated that real-time nICP estimation is possible on a microprocessor platform, which offers the advantages of low cost, small size, and product modularity over a general-purpose computer. These attributes take a step toward the goal of real-time nICP estimation at the patient's bedside in a variety of clinical settings.

  3. Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m

    NASA Astrophysics Data System (ADS)

    Czarnota, Karol; Gorbatov, Alexei

    2016-04-01

    In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to shear-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single seismometer, without a priori knowledge of the shear-wave velocity of the basin material, thereby negating the need to deploy cumbersome arrays.
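
    The HVSR observable itself is straightforward to compute: the ratio of horizontal to vertical amplitude spectra, whose peak frequency f0 relates to interface depth through the quarter-wavelength rule h ≈ Vs/(4·f0). A synthetic sketch (not the geopsy implementation; the 2 Hz resonance and noise are fabricated for illustration):

```python
import numpy as np

def hvsr_peak(h_component, v_component, fs):
    """Horizontal-over-vertical spectral ratio and its peak frequency."""
    freqs = np.fft.rfftfreq(len(h_component), d=1.0 / fs)
    H = np.abs(np.fft.rfft(h_component))
    V = np.abs(np.fft.rfft(v_component))
    ratio = H[1:] / np.maximum(V[1:], 1e-12)   # skip the DC bin
    k = int(np.argmax(ratio))
    return freqs[1:][k], ratio

# Synthetic records: the horizontal channel resonates at 2 Hz
fs, n = 100.0, 4000
t = np.arange(n) / fs
rng = np.random.default_rng(0)
v = rng.normal(size=n)                       # broadband vertical noise
h = v + 5.0 * np.sin(2 * np.pi * 2.0 * t)    # horizontal with 2 Hz peak
f0, _ = hvsr_peak(h, v, fs)                  # f0 ≈ 2.0 Hz
```

    With, say, Vs ≈ 400 m/s in the sediments, a 2 Hz peak would put the interface near 50 m by the quarter-wavelength rule; the record above instead inverts the full curve for the Vs profile.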

  4. Sculling Compensation Algorithm for SINS Based on Two-Time Scale Perturbation Model of Inertial Measurements

    PubMed Central

    Wang, Lingling; Fu, Li

    2018-01-01

    In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation systems (SINS) using angular rate and specific force measurements as inputs is proposed in this paper. First, the sculling error formula in the incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived, and a gravitational search optimization method is used to determine the parameters in the two-time scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, in contrast to conventional algorithms, which are typically simulated in a pure sculling environment. A series of test results demonstrate that the new sculling compensation algorithm achieves balanced real/pseudo sculling correction performance during the velocity update, with the advantage of a smaller computation load compared with conventional algorithms. PMID:29346323

  5. Development of a Nonlinear Probability of Collision Tool for the Earth Observing System

    NASA Technical Reports Server (NTRS)

    McKinley, David P.

    2006-01-01

    The Earth Observing System (EOS) spacecraft Terra, Aqua, and Aura fly in constellation with several other spacecraft in 705-kilometer mean altitude sun-synchronous orbits. All three spacecraft are operated by the Earth Science Mission Operations (ESMO) Project at Goddard Space Flight Center (GSFC). In 2004, the ESMO project began assessing the probability of collision of the EOS spacecraft with other space objects. In addition to conjunctions with high relative velocities, the collision assessment method for the EOS spacecraft must address conjunctions with low relative velocities during potential collisions between constellation members. Probability of Collision algorithms that are based on assumptions of high relative velocities and linear relative trajectories are not suitable for these situations; therefore an algorithm for handling the nonlinear relative trajectories was developed. This paper describes this algorithm and presents results from its validation for operational use. The probability of collision is typically calculated by integrating a Gaussian probability distribution over the volume swept out by a sphere representing the size of the space objects involved in the conjunction. This sphere is defined as the Hard Body Radius. With the assumption of linear relative trajectories, this volume is a cylinder, which translates into simple limits of integration for the probability calculation. For the case of nonlinear relative trajectories, the volume becomes a complex geometry. However, with an appropriate choice of coordinate systems, the new algorithm breaks down the complex geometry into a series of simple cylinders that have simple limits of integration. This nonlinear algorithm will be discussed in detail in the paper. The nonlinear Probability of Collision algorithm was first verified by showing that, when used in high relative velocity cases, it yields similar answers to existing high relative velocity linear relative trajectory algorithms. The comparison with the existing high velocity/linear theory will also be used to determine at what relative velocity the analysis should use the new nonlinear theory in place of the existing linear theory. The nonlinear algorithm was also compared to a known exact solution for the probability of collision between two objects when the relative motion is strictly circular and the error covariance is spherically symmetric. Figure 1 shows preliminary results from this comparison by plotting the probabilities calculated from the new algorithm and those from the exact solution versus the Hard Body Radius to Covariance ratio. These results show about 5% error when the Hard Body Radius is equal to one half the spherical covariance magnitude. The algorithm was then combined with a high fidelity orbit state and error covariance propagator into a useful tool for analyzing low relative velocity nonlinear relative trajectories. The high fidelity propagator is capable of using atmospheric drag, central body gravitational, solar radiation, and third body forces to provide accurate prediction of the relative trajectories and covariance evolution. The covariance propagator also includes a process noise model to ensure realistic evolutions of the error covariance. This paper will describe the integration of the nonlinear probability algorithm and the propagators into a useful collision assessment tool. Finally, a hypothetical case study involving a low relative velocity conjunction between members of the Earth Observation System constellation will be presented.
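
    For the spherically symmetric covariance case used in the validation, the probability integral can be cross-checked by Monte Carlo sampling of the relative position in the encounter plane, where the zero-miss analytic value is 1 − exp(−HBR²/(2σ²)). A linear-encounter sketch, not the paper's nonlinear tool:

```python
import numpy as np

def collision_probability_mc(miss, sigma, hbr, n=200_000, seed=0):
    """Monte Carlo probability that the relative position, distributed as
    N(miss, sigma^2 I) in the 2D encounter plane, falls inside the
    Hard Body Radius hbr."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(loc=miss, scale=sigma, size=(n, 2))
    return float(np.mean(np.linalg.norm(pts, axis=1) < hbr))

# Zero miss distance, unit covariance, HBR = sigma:
# analytic value is 1 - exp(-0.5) ≈ 0.3935
p_mc = collision_probability_mc(miss=[0.0, 0.0], sigma=1.0, hbr=1.0)
```

    The nonlinear method described above replaces this single 2D integral with a sum over a series of short cylinders along the curved relative trajectory.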

  6. Segmentation and tracking in echocardiographic sequences: active contours guided by optical flow estimates

    NASA Technical Reports Server (NTRS)

    Mikic, I.; Krucinski, S.; Thomas, J. D.

    1998-01-01

    This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. Best contour shape and position are then calculated, assuming that at this configuration a global energy function, associated with a contour, attains its minimum. Active contours can be used for tracking by selecting a solution from a previous frame as an initial position in a present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, aortic root and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.

  7. Non-linear identification of a squeeze-film damper

    NASA Technical Reports Server (NTRS)

    Stanway, Roger; Mottershead, John; Firoozian, Riaz

    1987-01-01

    Described is an experimental study to identify the damping laws associated with a squeeze-film vibration damper. This is achieved by using a non-linear filtering algorithm to process displacement responses of the damper ring to synchronous excitation and thus to estimate the parameters in an nth-power velocity model. The experimental facility is described in detail and a representative selection of results is included. The identified models are validated through the prediction of damper-ring orbits and comparison with observed responses.

  8. Direction-of-arrival estimation for a uniform circular acoustic vector-sensor array mounted around a cylindrical baffle

    NASA Astrophysics Data System (ADS)

    Yang, DeSen; Zhu, ZhongRui

    2012-12-01

    This work investigates direction-of-arrival (DOA) estimation for a uniform circular acoustic Vector-Sensor Array (UCAVSA) mounted around a cylindrical baffle. The total pressure field and the total particle velocity field near the surface of the cylindrical baffle are analyzed theoretically by applying the method of spatial Fourier transform. A modal vector-sensor array signal processing algorithm, based on decomposed wavefield representations, is then proposed for the UCAVSA mounted around the cylindrical baffle. Simulation and experimental results show that the UCAVSA mounted around the cylindrical baffle has distinct advantages over a traditional uniform circular pressure-sensor array (UCPSA) of the same manifold. The acoustic Vector-Sensor (AVS) can thus be used in the presence of a cylindrical baffle, and the UCAVSA mounted around the baffle combines the anti-noise performance of the AVS with the spatial resolution performance of an array system by means of modal vector-sensor array signal processing algorithms.

  9. Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinar, Ali; Kolda, Tamara G.; Carlberg, Kevin Thomas

    Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is developing unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.

  10. Optical fiber-based system for continuous measurement of in-bore projectile velocity.

    PubMed

    Wang, Guohua; Sun, Jinglin; Li, Qiang

    2014-08-01

    This paper reports the design of an optical fiber-based velocity measurement system and its application to measuring in-bore projectile velocity. The measurement principle of the implemented system is based on the Doppler effect and the heterodyne detection technique. Analysis of the velocity measurement principle yields the relationship between the projectile velocity and the instantaneous frequency (IF) of the optical fiber-based system's output signal. To extract the IF of the fast-changing signal carrying the velocity information, an IF extraction algorithm based on the continuous wavelet transform is detailed. The performance of the algorithm is also analyzed through simulation. Finally, an in-bore projectile velocity measurement experiment with a sniper rifle having a 720 m/s muzzle velocity is performed to verify the feasibility of the optical fiber-based velocity measurement system. Experimental results show that the measured muzzle velocity is 718.61 m/s, with a relative uncertainty of approximately 0.021%.
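
    The underlying relation is v = λ·f_IF/2, with the instantaneous frequency of the beat signal carrying the velocity information. A crude sketch that substitutes zero-crossing counting for the paper's continuous-wavelet IF extraction (the sampling rate, beat frequency, and wavelength below are illustrative, far slower than an in-bore measurement):

```python
import numpy as np

def velocity_from_doppler(signal, fs, wavelength):
    """Velocity from the mean instantaneous frequency of a Doppler beat
    signal, v = lambda * f_IF / 2. The IF is estimated here by counting
    upward zero crossings over the record."""
    crossings = np.sum((signal[:-1] < 0) & (signal[1:] >= 0))
    duration = len(signal) / fs
    f_if = crossings / duration          # mean IF over the record, Hz
    return wavelength * f_if / 2.0

# Synthetic beat at 923 kHz sampled at 10 MHz; lambda = 1.55 um
fs, lam = 10e6, 1.55e-6
t = np.arange(20000) / fs
sig = np.sin(2 * np.pi * 923e3 * t)
v = velocity_from_doppler(sig, fs, lam)  # ≈ 0.715 m/s for these numbers
```

    A wavelet-based IF estimate, as in the paper, additionally resolves how the frequency (and hence the velocity) changes during the shot.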

  11. Optical fiber-based system for continuous measurement of in-bore projectile velocity

    NASA Astrophysics Data System (ADS)

    Wang, Guohua; Sun, Jinglin; Li, Qiang

    2014-08-01

    This paper reports the design of an optical fiber-based velocity measurement system and its application to measuring in-bore projectile velocity. The measurement principle of the implemented system is based on the Doppler effect and the heterodyne detection technique. Analysis of the velocity measurement principle yields the relationship between the projectile velocity and the instantaneous frequency (IF) of the optical fiber-based system's output signal. To extract the IF of the fast-changing signal carrying the velocity information, an IF extraction algorithm based on the continuous wavelet transform is detailed. The performance of the algorithm is also analyzed through simulation. Finally, an in-bore projectile velocity measurement experiment with a sniper rifle having a 720 m/s muzzle velocity is performed to verify the feasibility of the optical fiber-based velocity measurement system. Experimental results show that the measured muzzle velocity is 718.61 m/s, with a relative uncertainty of approximately 0.021%.

  12. Retrieval of Snow and Rain From Combined X- and W-Band Airborne Radar Measurements

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert; Tian, Lin; Heymsfield, Gerald M.

    2008-01-01

    Two independent airborne dual-wavelength techniques, based on nadir measurements of radar reflectivity factors and Doppler velocities, respectively, are investigated with respect to their capability of estimating microphysical properties of hydrometeors. The data used to investigate the methods are taken from the ER-2 Doppler radar (X-band) and Cloud Radar System (W-band) airborne Doppler radars during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers-Florida Area Cirrus Experiment campaign in 2002. Validity is assessed by the degree to which the methods produce consistent retrievals of the microphysics. For deriving snow parameters, the reflectivity-based technique has a clear advantage over the Doppler-velocity-based approach because of the large dynamic range in the dual-frequency ratio (DFR) with respect to the median diameter Do and the fact that the difference in mean Doppler velocity at the two frequencies, i.e., the differential Doppler velocity (DDV), in snow is small relative to the measurement errors and is often not uniquely related to Do. The DFR and DDV can also be used to independently derive Do in rain. At W-band, the DFR-based algorithms are highly sensitive to attenuation from rain, cloud water, and water vapor. Thus, the retrieval algorithms depend on various assumptions regarding these components, whereas the DDV-based approach is unaffected by attenuation. In view of the difficulties and ambiguities associated with the attenuation correction at W-band, the DDV approach in rain is more straightforward and potentially more accurate than the DFR method.

  13. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.

  14. Estimation of elastic moduli in a compressible Gibson half-space by inverting Rayleigh-wave phase velocity

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Chen, C.

    2006-01-01

    A Gibson half-space model (a non-layered Earth model) has the shear modulus varying linearly with depth in an inhomogeneous elastic half-space. In a half-space of sedimentary granular soil under a geostatic state of initial stress, the density and the Poisson's ratio do not vary considerably with depth. In such an Earth body, the dynamic shear modulus is the parameter that mainly affects the dispersion of propagating waves. We have estimated shear-wave velocities in the compressible Gibson half-space by inverting Rayleigh-wave phase velocities. An analytical dispersion law of Rayleigh-type waves in a compressible Gibson half-space is given in an algebraic form, which makes our inversion process extremely simple and fast. The convergence of the weighted damping solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Calculation efficiency is achieved by reconstructing a weighted damping solution using singular value decomposition techniques. The main advantage of this algorithm is that only three parameters define the compressible Gibson half-space model. Theoretically, to determine the model by the inversion, only three Rayleigh-wave phase velocities at different frequencies are required. This is useful in practice where Rayleigh-wave energy is only developed in a limited frequency range or at certain frequencies, as with data acquired at manmade structures such as dams and levees. Two real examples are presented and verified by borehole S-wave velocity measurements. The results of these real examples are also compared with the results of the layered-Earth model. © Springer 2006.
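
    The damped least-squares solution reconstructed via singular value decomposition has the compact closed form m = V·diag(sᵢ/(sᵢ² + λ))·Uᵀ·d. A generic sketch on a toy linear problem (the Levenberg-Marquardt selection of the damping factor λ, and the Rayleigh-wave forward model itself, are not shown):

```python
import numpy as np

def damped_least_squares(G, d, damping):
    """Damped (Marquardt) least-squares solution of G m = d via SVD:
    m = V diag(s_i / (s_i^2 + damping)) U^T d."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    filt = s / (s ** 2 + damping)        # damped inverse singular values
    return Vt.T @ (filt * (U.T @ d))

# Toy linearized problem: recover m_true from noise-free data
G = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
m_true = np.array([0.5, -0.25])
d = G @ m_true
m = damped_least_squares(G, d, damping=1e-8)
```

    Larger damping suppresses the contribution of small singular values, trading resolution for stability; Levenberg-Marquardt adjusts λ iteratively to keep the nonlinear iteration convergent.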

  15. A Kirchhoff approach to seismic modeling and prestack depth migration

    NASA Astrophysics Data System (ADS)

    Liu, Zhen-Yue

    1993-05-01

    The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, which can handle lateral velocity variation and turning waves. With a little extra computation cost, the Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.

  16. Novel joint TOA/RSSI-based WCE location tracking method without prior knowledge of biological human body tissues.

    PubMed

    Ito, Takahiro; Anzai, Daisuke; Jianqing Wang

    2014-01-01

    This paper proposes a novel joint time of arrival (TOA)/received signal strength indicator (RSSI)-based wireless capsule endoscope (WCE) location tracking method without prior knowledge of biological human tissues. Generally, TOA-based localization can achieve much higher localization accuracy than other radio-frequency-based localization techniques; however, wireless signals transmitted from a WCE pass through various kinds of human body tissues, and as a result the propagation velocity inside a human body differs from that in free space. Because the variation of propagation velocity is mainly affected by the relative permittivity of human body tissues, instead of pre-measuring the relative permittivity in advance, we simultaneously estimate not only the WCE location but also the relative permittivity. For this purpose, this paper first derives the relative permittivity estimation model with measured RSSI information. Then, we apply a particle filter algorithm to the TOA-based localization and the RSSI-based relative permittivity estimation. Our computer simulation results demonstrate that the proposed tracking method with the particle filter can accomplish an excellent localization accuracy of around 2 mm without prior information on the relative permittivity of the human body tissues.
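
    The tracking backbone is a bootstrap particle filter: propagate particles, weight them by the measurement likelihood, resample. A one-dimensional sketch (a generic filter, not the joint TOA/RSSI estimator; the measurement model and all noise levels are illustrative):

```python
import numpy as np

def particle_filter(measurements, n_particles=2000, meas_std=0.5,
                    proc_std=0.1, seed=0):
    """Bootstrap particle filter tracking a 1D position from noisy
    range-like measurements. Returns the posterior-mean estimates."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.0, 10.0, n_particles)
    estimates = []
    for z in measurements:
        particles += rng.normal(0.0, proc_std, n_particles)   # propagate
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)  # likelihood
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))        # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        particles = particles[idx]
    return estimates

# Target drifts from 2.0 at +0.05 per step; measurements add noise
rng = np.random.default_rng(1)
truth = 2.0 + 0.05 * np.arange(60)
z = truth + rng.normal(0.0, 0.5, 60)
est = particle_filter(z)
```

    In the paper's joint formulation, each particle would additionally carry a relative-permittivity hypothesis, letting TOA and RSSI measurements constrain location and tissue properties simultaneously.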

  17. Non-iterative double-frame 2D/3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Fuchs, Thomas; Hain, Rainer; Kähler, Christian J.

    2017-09-01

    In recent years, the detection of individual particle images and their tracking over time to determine the local flow velocity has become quite popular for planar and volumetric measurements. Particle tracking velocimetry has strong advantages compared to the statistical analysis of an ensemble of particle images by means of cross-correlation approaches, such as particle image velocimetry. Tracking individual particles does not suffer from spatial averaging, so bias errors can be avoided. Furthermore, the spatial resolution can be increased up to the sub-pixel level for mean fields. Maximizing the spatial resolution of instantaneous measurements requires high seeding concentrations. However, it is still challenging to track particles at high seeding concentrations if no time series is available. Tracking methods used under these conditions are typically very complex iterative algorithms, which require expert knowledge due to the large number of adjustable parameters. To overcome these drawbacks, a new non-iterative tracking approach is introduced in this letter, which automatically analyzes the motion of neighboring particles without requiring the user to specify any parameters except the displacement limits. This makes the algorithm very user friendly and allows even inexperienced users to apply and implement particle tracking. In addition, the algorithm enables measurements of high-speed flows using standard double-pulse equipment and estimates the flow velocity reliably even at large particle image densities.
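
    The neighbour-consensus idea behind such double-frame tracking can be sketched in two passes. This is a simplification of the concept, not the authors' algorithm: a naive nearest-neighbour pass, then a re-match against the median displacement of nearby particles:

```python
import numpy as np

def track(a, b, max_disp=1.0, k=4):
    """Two-pass double-frame tracker sketch.

    Pass 1: naive nearest-neighbour matches within the displacement limit.
    Pass 2: re-match each particle to the frame-b candidate closest to the
    median displacement of its k nearest neighbours' provisional matches.
    a, b : (n, 2) arrays of particle positions in frames 1 and 2.
    """
    disp0 = np.full((len(a), 2), np.nan)
    for i, p in enumerate(a):
        d = np.linalg.norm(b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            disp0[i] = b[j] - p
    disp = disp0.copy()
    for i, p in enumerate(a):
        nbr = np.argsort(np.linalg.norm(a - p, axis=1))[1:k + 1]
        ref = np.nanmedian(disp0[nbr], axis=0)    # neighbour consensus
        if np.isnan(ref).any():
            continue
        cand = b - p
        cost = np.linalg.norm(cand - ref, axis=1)
        cost[np.linalg.norm(cand, axis=1) > max_disp] = np.inf
        if np.isfinite(cost).any():
            disp[i] = cand[int(np.argmin(cost))]
    return disp
```

The only user input is the displacement limit, mirroring the parameter-light design described in the abstract.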

  18. Three Dimensional Sheaf of Ultrasound Planes Reconstruction (SOUPR) of Ablated Volumes

    PubMed Central

    Ingle, Atul; Varghese, Tomy

    2014-01-01

    This paper presents an algorithm for three-dimensional reconstruction of tumor ablations using ultrasound shear wave imaging with electrode vibration elastography. Radiofrequency ultrasound data frames are acquired over imaging planes that form a subset of a sheaf of planes sharing a common axis of intersection. Shear wave velocity is estimated separately on each imaging plane using a piecewise linear function fitting technique with a fast optimization routine. An interpolation algorithm then computes velocity maps on a fine grid over a set of C-planes that are perpendicular to the axis of the sheaf. A full three-dimensional rendering of the ablation can then be created from this stack of C-planes; hence the name “Sheaf Of Ultrasound Planes Reconstruction” or SOUPR. The algorithm is evaluated through numerical simulations and also using data acquired from a tissue-mimicking phantom. Reconstruction quality is gauged using contrast and contrast-to-noise ratio measurements, and the changes in quality from using an increasing number of planes in the sheaf are quantified. The highest contrast of 5 dB is seen between the stiffest and softest regions of the phantom. Under certain idealizing assumptions on the true shape of the ablation, good reconstruction quality at a fast processing rate can be obtained with as few as six imaging planes, suggesting that the method is suited for parsimonious data acquisition with very few, sparsely chosen imaging planes. PMID:24808405

  19. Three-dimensional sheaf of ultrasound planes reconstruction (SOUPR) of ablated volumes.

    PubMed

    Ingle, Atul; Varghese, Tomy

    2014-08-01

    This paper presents an algorithm for 3-D reconstruction of tumor ablations using ultrasound shear wave imaging with electrode vibration elastography. Radio-frequency ultrasound data frames are acquired over imaging planes that form a subset of a sheaf of planes sharing a common axis of intersection. Shear wave velocity is estimated separately on each imaging plane using a piecewise linear function fitting technique with a fast optimization routine. An interpolation algorithm then computes velocity maps on a fine grid over a set of C-planes that are perpendicular to the axis of the sheaf. A full 3-D rendering of the ablation can then be created from this stack of C-planes; hence the name "Sheaf Of Ultrasound Planes Reconstruction" or SOUPR. The algorithm is evaluated through numerical simulations and also using data acquired from a tissue-mimicking phantom. Reconstruction quality is gauged using contrast and contrast-to-noise ratio measurements, and the changes in quality from using an increasing number of planes in the sheaf are quantified. The highest contrast of 5 dB is seen between the stiffest and softest regions of the phantom. Under certain idealizing assumptions on the true shape of the ablation, good reconstruction quality at a fast processing rate can be obtained with as few as six imaging planes, suggesting that the method is suited for parsimonious data acquisition with very few, sparsely chosen imaging planes.
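
    The per-plane velocity estimation step can be illustrated with a small piecewise-linear fit: arrival time versus lateral position is fitted with two line segments, and each segment's inverse slope is a shear wave speed. This is a sketch with an assumed two-region medium, not the paper's optimization routine:

```python
import numpy as np

def two_segment_velocities(x, t):
    """Fit arrival time t(x) with two line segments, choosing the breakpoint
    that minimizes total squared error. Each segment slope is a slowness, so
    the shear wave speed in that region is 1/slope."""
    best = None
    for k in range(2, len(x) - 2):            # candidate breakpoints
        s1, b1 = np.polyfit(x[:k], t[:k], 1)
        s2, b2 = np.polyfit(x[k:], t[k:], 1)
        err = (np.sum((np.polyval([s1, b1], x[:k]) - t[:k]) ** 2)
               + np.sum((np.polyval([s2, b2], x[k:]) - t[k:]) ** 2))
        if best is None or err < best[0]:
            best = (err, k, 1.0 / s1, 1.0 / s2)
    return best[1], best[2], best[3]          # breakpoint index, v1, v2
```

For a medium whose shear wave speed jumps from 2 m/s to 4 m/s at x = 5, the fit recovers both speeds and the boundary location.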

  20. Autonomous Vision Navigation for Spacecraft in Lunar Orbit

    NASA Astrophysics Data System (ADS)

    Bader, Nolan A.

    NASA aims to achieve unprecedented navigational reliability for the first manned lunar mission of the Orion spacecraft in 2023. A technique for accomplishing this is to integrate autonomous feature tracking as an added means of improving position and velocity estimation. In this thesis, a template matching algorithm and optical sensor are tested on three simulated lunar trajectories using linear covariance techniques under various conditions. A preliminary characterization of the camera gives insight into its ability to determine azimuth and elevation angles to points on the surface of the Moon. A navigation performance analysis shows that an optical camera sensor can aid in decreasing position and velocity errors, particularly in a loss-of-communication scenario. Furthermore, it is found that camera quality and computational capability are the driving factors affecting the performance of such a system.

  1. Reconstruction of spatial distributions of sound velocity and absorption in soft biological tissues using model ultrasonic tomographic data

    NASA Astrophysics Data System (ADS)

    Burov, V. A.; Zotov, D. I.; Rumyantseva, O. D.

    2014-07-01

    A two-step algorithm is used to reconstruct the spatial distributions of the acoustic characteristics of soft biological tissues: the sound velocity and the absorption coefficient. Knowledge of these distributions is critical for early detection of benign and malignant neoplasms in biological tissues, primarily in the breast. In the first step, large-scale distributions are estimated; in the second step, they are refined at high resolution. Reconstruction results based on model initial data are presented. They illustrate the fundamental necessity of reconstructing the large-scale distributions first and then taking them into account in the second step. The use of CUDA technology for processing makes it possible to obtain final images of 1024 × 1024 samples in only a few minutes.

  2. Sensory prediction on a whiskered robot: a tactile analogy to “optical flow”

    PubMed Central

    Schroeder, Christopher L.; Hartmann, Mitra J. Z.

    2012-01-01

    When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the “optical flow” equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that “flows” over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip. PMID:23097641

  3. Sensory prediction on a whiskered robot: a tactile analogy to "optical flow".

    PubMed

    Schroeder, Christopher L; Hartmann, Mitra J Z

    2012-01-01

    When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the "optical flow" equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that "flows" over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip.

  4. Digital signal processing for velocity measurements in dynamical material's behaviour studies.

    PubMed

    Devlaminck, Julien; Luc, Jérôme; Chanal, Pierre-Yves

    2014-03-01

    In this work, we describe different configurations of optical fiber interferometers (Michelson and Mach-Zehnder types) used to measure velocities in studies of the dynamic behaviour of materials. We detail the processing algorithms developed and optimized to improve the performance of these interferometers, especially in terms of time and frequency resolution. Three methods for analyzing interferometric signals were studied. For Michelson interferometers, time-frequency analysis by Short-Time Fourier Transform (STFT) is compared with time-frequency analysis by Continuous Wavelet Transform (CWT). The results showed that the CWT was more suitable than the STFT for signals with a low signal-to-noise ratio and for regions of low velocity and high acceleration. For Mach-Zehnder interferometers, the measurement is carried out by analyzing the phase shift between three interferometric signals (triature processing). These three digital signal processing methods were evaluated, their measurement uncertainties estimated, and their restrictions or operational limitations specified from experimental results obtained on a pulsed-power machine.
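
    The STFT branch of such processing can be sketched as follows: track the dominant beat frequency in short windows and convert it to velocity via v = λf/2 for a Michelson-type geometry. The wavelength, sample rate, and window sizes below are illustrative assumptions, not the experiment's parameters:

```python
import numpy as np

LAMBDA = 1550e-9          # assumed laser wavelength (m)
FS = 1e9                  # assumed digitizer sample rate (Hz)

def velocity_from_stft(sig, win=256, hop=64):
    """Peak-tracking STFT velocimetry: the dominant beat frequency in each
    Hann-windowed segment is converted to velocity with v = lambda * f / 2."""
    v = []
    w = np.hanning(win)
    for start in range(0, len(sig) - win, hop):
        seg = sig[start:start + win] * w
        spec = np.abs(np.fft.rfft(seg))
        spec[0] = 0.0                           # ignore the DC bin
        f_peak = np.argmax(spec) * FS / win     # bin index -> frequency
        v.append(0.5 * LAMBDA * f_peak)
    return np.array(v)
```

The bin spacing FS/win sets the velocity resolution, which is the time-frequency trade-off the abstract refers to: longer windows sharpen frequency (velocity) at the expense of time resolution.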

  5. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    PubMed Central

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor, so as to perform a more careful and thorough search near the nest locations. To select a reasonable repeat-cycle disturbance number, a further study on the choice of the number of disturbances is made. Finally, six typical test functions are used in simulation experiments comparing the proposed algorithm with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164
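
    For reference, a bare-bones cuckoo search (without the paper's RC-SSCS disturbance operation) looks like the sketch below; the step-size constant, Lévy exponent, and abandon fraction are the commonly used illustrative values, not the paper's settings:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    num = gamma(1 + beta) * sin(pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def cuckoo_search(f, dim=2, n=25, pa=0.25, iters=400, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with basic cuckoo search."""
    nests = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in nests])
    best = nests[np.argmin(fit)].copy()
    for _ in range(iters):
        for i in range(n):
            # Levy flight biased toward the current best nest
            cand = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best),
                           lo, hi)
            j = rng.integers(n)
            fc = f(cand)
            if fc < fit[j]:                  # replace a random nest if better
                nests[j], fit[j] = cand, fc
        k = max(1, int(pa * n))              # abandon a fraction pa of worst nests
        worst = np.argsort(fit)[-k:]
        nests[worst] = rng.uniform(lo, hi, (k, dim))
        fit[worst] = np.array([f(x) for x in nests[worst]])
        b = int(np.argmin(fit))
        if fit[b] < f(best):
            best = nests[b].copy()
    return best, f(best)
```

The paper's disturbance operator would add an extra local search around the nests after each generation; the skeleton above is the baseline it improves on.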

  6. A Mobile Kalman-Filter Based Solution for the Real-Time Estimation of Spatio-Temporal Gait Parameters.

    PubMed

    Ferrari, Alberto; Ginis, Pieter; Hardegger, Michael; Casamassima, Filippo; Rocchi, Laura; Chiari, Lorenzo

    2016-07-01

    Gait impairments are among the most disabling symptoms in several musculoskeletal and neurological conditions, severely limiting personal autonomy. Wearable gait sensors have been attracting attention as diagnostic tools for gait and are emerging as promising tools for tutoring and guiding gait execution. While their popularity is continuously growing, there is still room for improvement, especially towards more accurate estimation of spatio-temporal gait parameters. We present an implementation of a zero-velocity-update gait analysis system based on a Kalman filter and off-the-shelf shoe-worn inertial sensors. The algorithms for gait event and step length estimation were specifically designed to handle pathological gait patterns. Moreover, an Android app was deployed to support fully wearable, stand-alone, real-time gait analysis. Twelve healthy subjects were enrolled to preliminarily tune the algorithms; afterwards, sixteen persons with Parkinson's disease were enrolled for a validation study. Over the 1314 strides collected from patients at three different speeds, the total root mean square difference in step length estimation between this system and a gold standard was 2.9%. This shows that the proposed method allows for accurate gait analysis and paves the way to a new generation of mobile devices usable anywhere for monitoring and intervention.

  7. Mantle Serpentinization near the Central Mariana Trench Constrained by Ocean Bottom Surface Wave Observations

    NASA Astrophysics Data System (ADS)

    Cai, C.; Wiens, D. A.; Lizarralde, D.; Eimer, M. O.; Shen, W.

    2017-12-01

    We investigate the crustal and uppermost mantle seismic structure across the Mariana trench by jointly inverting Rayleigh wave phase and group velocities from ambient noise and longer period phase velocities from Helmholtz tomography of teleseismic waveforms. We use data from a temporary deployment in 2012-2013, consisting of 7 island-based stations and 20 broadband ocean bottom seismographs, as well as data from the USGS Northern Mariana Islands Seismograph Network. To avoid any potential bias from the starting model, we use a Bayesian Monte-Carlo algorithm to invert for the azimuthally-averaged SV-wave velocity at each node. This method also allows us to apply prior constraints on crustal thickness and other parameters in a systematic way, and to derive formal estimates of velocity uncertainty. The results show the development of a low velocity zone within the incoming plate beginning about 80 km seaward of the trench axis, consistent with the onset of bending faults from bathymetry and earthquake locations. The maximum depth of the velocity anomaly increases towards the trench, and extends to about 30 km below the seafloor. The low velocities persist after the plate is subducted, as a 20-30 km thick low velocity layer with a somewhat smaller velocity reduction is imaged along the top of the slab beneath the forearc. An extremely low velocity zone is observed beneath the serpentine seamounts in the outer forearc, consistent with 40% serpentinization in the forearc mantle wedge. Azimuthal anisotropy results show trench parallel fast axis within the incoming plate at uppermost mantle depth (2%-4% anisotropy). All these observations suggest the velocity reduction in the incoming plate prior to subduction results from both serpentinized normal faults and water-filled cracks. Water is expelled from the cracks early in subduction, causing a modest increase in the velocity of the subducting mantle, and moves upward and causes serpentinization of the outer forearc. 
Assuming the velocity anomaly remaining in the subducting plate mantle is caused by serpentinization, calculations suggest the top 20 km of the slab mantle retains 10-15% serpentinization beyond the outer forearc. The amount of water carried into the deep mantle by this layer (~54 Tg/Myr/m) is two to three times greater than previous estimates for the entire slab.

  8. A Centerless Circular Array Method: Extracting Maximal Information on Phase Velocities of Rayleigh Waves From Microtremor Records From a Simple Seismic Array

    NASA Astrophysics Data System (ADS)

    Cho, I.; Tada, T.; Shinozaki, Y.

    2005-12-01

    We have developed a Centerless Circular Array (CCA) method of microtremor exploration, an algorithm that estimates phase velocities of Rayleigh waves by analyzing vertical-component records of microtremors obtained with an array of three or five seismic sensors placed around a circumference. Our CCA method shows remarkably high performance in long-wavelength ranges because, unlike the frequency-wavenumber spectral method, it does not resolve individual plane-wave components in the process of identifying phase velocities. Theoretical considerations predict that the resolving power of the CCA method in long-wavelength ranges depends upon the SN ratio, i.e., the ratio of the power of the propagating components to that of the non-propagating components (incoherent noise) contained in the records from the seismic array. The applicability of the CCA method to small arrays on the order of several meters in radius was confirmed in our earlier work (Cho et al., 2004). We have deployed circular seismic arrays of different sizes at test sites in Japan where the underground structure is well documented through geophysical exploration, and have applied the CCA method to microtremor records to estimate phase velocities of Rayleigh waves. The estimates were then checked against "model" phase velocities derived from theoretical calculations. For arrays of 5, 25, 300 and 600 meters in radius, the estimated and model phase velocities agreed well within a broad wavelength range extending from a little larger than 3r (r: the array radius) up to at least 40r, 14r, 42r and 9r, respectively. This demonstrates the applicability of the CCA method to arrays on the order of several to several hundred meters in radius, and also illustrates, in a typical way, its markedly high performance in long-wavelength ranges. 
We have also developed a mathematical model that enables evaluation of the SN ratio in a given microtremor field, and have applied it to real data. Theory predicts that the CCA method underestimates phase velocities when noise is present. Using the evaluated SN ratio and the phase velocity dispersion curve model, we calculated the apparent phase velocities that theory expects the CCA method to yield in long-wavelength ranges, and confirmed that they agreed very well with the phase velocities estimated from real data. This demonstrates that the mathematical assumptions on which the CCA method relies remain valid over the wide range of wavelengths examined, and also implies that, even in the absence of a priori knowledge of the phase velocity dispersion curve, the SN ratio evaluated with our mathematical model can be used to identify the resolution limit of the CCA method in long-wavelength ranges. We have thus demonstrated, on the basis of theoretical considerations and real data analysis, both the capabilities and the limitations of the CCA method.

  9. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanke, Monika, E-mail: monika@fizyka.umk.pl; Palikot, Ewa, E-mail: epalikot@doktorant.umk.pl; Adamowicz, Ludwik, E-mail: ludwik@email.arizona.edu

    2016-05-07

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H₂ and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  10. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors

    PubMed Central

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the function relationships between the thresholds and the gait frequency; then, the adaptive adjustment of thresholds with gait frequency is realized and improves the ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the result shows that compared with the traditional fixed threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rate of ZVI; this indicates that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm for pedestrian trajectory calculation can achieve better performance. PMID:27669266
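
    A fixed-threshold stance detector of the kind the paper improves upon can be sketched in a few lines. The window length and thresholds below are illustrative assumptions; the paper's contribution is precisely to adapt such thresholds to the detected gait frequency:

```python
import numpy as np

def detect_zvi(acc, gyro, win=15, acc_g=9.81, acc_th=0.5, gyro_th=0.3):
    """Flag zero-velocity samples: accelerometer magnitude stays near 1 g
    with low variance over a sliding window, and the angular rate is small.

    acc, gyro : (n, 3) arrays of accelerometer (m/s^2) and gyro (rad/s) data.
    Returns a boolean array marking the zero velocity interval samples.
    """
    a = np.linalg.norm(acc, axis=1)
    g = np.linalg.norm(gyro, axis=1)
    half = win // 2
    zvi = np.zeros(len(a), dtype=bool)
    for i in range(half, len(a) - half):
        seg = a[i - half:i + half + 1]
        zvi[i] = (abs(seg.mean() - acc_g) < acc_th
                  and seg.std() < acc_th
                  and g[i] < gyro_th)
    return zvi
```

With fixed thresholds, fast gaits shorten the true stance phase and cause missed detections; making `acc_th` and `gyro_th` functions of gait frequency, as the paper does, addresses exactly that failure mode.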

  11. Motion planning in velocity affine mechanical systems

    NASA Astrophysics Data System (ADS)

    Jakubiak, Janusz; Tchoń, Krzysztof; Magiera, Władysław

    2010-09-01

    We address the motion planning problem in specific mechanical systems whose linear and angular velocities depend affinely on control. The configuration space of these systems encompasses the rotation group, and the motion planning involves the system orientation. Derivation of the motion planning algorithm for velocity affine systems has been inspired by the continuation method. Performance of this algorithm is illustrated with examples of the kinematics of a serial nonholonomic manipulator, the plate-ball kinematics and the attitude control of a rigid body.

  12. Remote measurement of surface-water velocity using infrared videography and PIV: a proof-of-concept for Alaskan rivers

    USGS Publications Warehouse

    Kinzel, Paul J.; Legleiter, Carl; Nelson, Jonathan M.; Conaway, Jeffrey S.

    2017-01-01

    Thermal cameras with high sensitivity to medium and long wavelengths can resolve features at the surface of flowing water arising from turbulent mixing. Images acquired by these cameras can be processed with particle image velocimetry (PIV) to compute surface velocities based on the displacement of thermal features as they advect with the flow. We conducted a series of field measurements to test this methodology for remote sensing of surface velocities in rivers. We positioned an infrared video camera at multiple stations across bridges that spanned five rivers in Alaska. Simultaneous non-contact measurements of surface velocity were collected with a radar gun. In situ velocity profiles were collected with Acoustic Doppler Current Profilers (ADCP). Infrared image time series were collected at a frequency of 10 Hz for a one-minute duration at a number of stations spaced across each bridge. Commercial PIV software used a cross-correlation algorithm to calculate pixel displacements between successive frames, which were then scaled to produce surface velocities. A blanking distance below the ADCP prevents a direct measurement of the surface velocity. However, we estimated surface velocity from the ADCP measurements using a program that normalizes each ADCP transect and combines those normalized transects to compute a mean measurement profile. The program can fit a power law to the profile and in so doing provides a velocity index, the ratio between the depth-averaged and surface velocity. For the rivers in this study, the velocity index ranged from 0.82–0.92. Average radar and extrapolated ADCP surface velocities were in good agreement with average infrared PIV calculations.
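
    The velocity-index idea can be illustrated with a power-law profile: if u(z) = u_s·(z/h)^(1/m) with z measured up from the bed, the depth-averaged to surface velocity ratio is m/(m+1), which for m ≈ 6 gives ≈ 0.86, inside the reported 0.82–0.92 range. A sketch of fitting m from ADCP bins (the bin layout below is an assumption, not the study's program):

```python
import numpy as np

def velocity_index(z_over_h, u):
    """Fit u(z) = u_s * (z/h)**(1/m) to ADCP bin velocities via log-log
    regression. Returns (index, u_surface), where index = m/(m+1) is the
    depth-averaged/surface velocity ratio implied by the power law."""
    slope, intercept = np.polyfit(np.log(z_over_h), np.log(u), 1)
    m = 1.0 / slope
    u_s = np.exp(intercept)        # extrapolated velocity at z/h = 1 (surface)
    return m / (m + 1.0), u_s
```

This is how a surface velocity can be extrapolated past the ADCP blanking distance, and conversely how a remotely sensed surface velocity can be converted to a depth-averaged value for discharge estimation.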

  13. Optical Flow Experiments for Small-Body Navigation

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Kueppers, M.

    2012-09-01

    Optical flow algorithms [1, 2] have been used successfully and implemented robustly in many application domains, from motion estimation to video compression. We argue that they also show potential for autonomous spacecraft payload operation around small solar system bodies, such as comets or asteroids. Operating spacecraft around small bodies at close distance poses numerous challenges, many of which are related to uncertainties in spacecraft position and velocity relative to the body. To make the best use of usually scarce resources, it would be beneficial to grant a certain amount of autonomy to a spacecraft, for example, to make time-critical decisions on when to operate the payload. The optical flow describes the apparent velocities of common, usually brightness-related, features in at least two images. From it, one can estimate the spacecraft velocity and direction relative to the last manoeuvre or known state. The authors have conducted experiments with readily available optical imagery using the relatively robust and well-known Lucas-Kanade method [3]; it was found to be applicable in a large number of cases. Since one of the assumptions is that the brightness of corresponding points in subsequent images does not change greatly, it is important that imagery is acquired at sensible intervals, during which illumination conditions can be assumed constant and the spacecraft does not move so far that there is no longer significant overlap. Full-frame optical flow can be computationally more expensive than image compression and usually focuses on movements of regions with significant brightness gradients. However, given that missions exploring small bodies move at low relative velocities, computation time is not expected to be a limiting resource. 
Since several missions have now flown to small bodies or are planned to visit small bodies and stay there for some time, it is worth exploring how instrument operations can benefit from the additional knowledge gained from analysing readily available data on-board. Optical flow algorithms show the maturity necessary to be considered in safety-critical systems; their use can be complemented with shape models, pattern matching, housekeeping data and navigation techniques to obtain even more accurate information.
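
    The core of the cited Lucas-Kanade method is a least-squares solution of the optical-flow constraint Ix·u + Iy·v = −It over a window. A single-window sketch (illustrative only, not flight software; a real implementation solves this per patch and adds pyramids for large motions):

```python
import numpy as np

def lk_flow(im1, im2):
    """Single-window Lucas-Kanade: least-squares solution of the optical-flow
    constraint Ix*u + Iy*v = -It over the whole image, returning one (u, v)
    displacement in pixels per frame."""
    # central-difference spatial gradients (periodic boundaries via roll)
    Ix = 0.5 * (np.roll(im1, -1, axis=1) - np.roll(im1, 1, axis=1))
    Iy = 0.5 * (np.roll(im1, -1, axis=0) - np.roll(im1, 1, axis=0))
    It = im2 - im1                              # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return uv
```

The brightness-constancy assumption discussed in the abstract enters directly: the constraint only holds if corresponding points keep their brightness between the two frames.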

  14. Multitarget detection algorithm for automotive FMCW radar

    NASA Astrophysics Data System (ADS)

    Hyun, Eugin; Oh, Woo-Jin; Lee, Jong-Hun

    2012-06-01

    Today, 77 GHz FMCW (frequency-modulated continuous-wave) radar offers strong advantages for range and velocity detection in automotive applications. However, FMCW radar produces ghost targets and missed targets in multi-target situations. In this paper, to overcome these limitations, we propose an effective pairing algorithm consisting of two steps. The proposed method uses a waveform with different slopes in two periods. In the first pairing step, all combinations of range and velocity are obtained in each of the two wave periods. In the second pairing step, the results of the first step are used to detect fine range and velocity. Here we propose a range-velocity windowing technique to compensate for the non-ideal beat-frequency characteristic that arises from the non-linearity of the RF module. Experimental results show that the performance of the proposed algorithm is improved compared with that of the typical method.
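
    The two-slope pairing idea reduces to a pair of linear beat-frequency equations solved for range and velocity, with a range-velocity window rejecting implausible ghost combinations. The chirp slopes, window limits, and targets below are illustrative assumptions, not the paper's radar parameters:

```python
import numpy as np

C = 3.0e8
FC = 77e9                   # carrier frequency (Hz)
LAM = C / FC
S1, S2 = 2.0e13, 1.0e13     # assumed chirp slopes of the two periods (Hz/s)

def beat(r, v, s):
    """Beat frequency for a target at range r (m), radial velocity v (m/s):
    range term (2*s/C)*r plus Doppler term (2/LAM)*v."""
    return (2 * s / C) * r + (2 / LAM) * v

def solve_rv(f1, f2, s1=S1, s2=S2):
    """Invert the pair of beat-frequency equations for (range, velocity)."""
    A = np.array([[2 * s1 / C, 2 / LAM],
                  [2 * s2 / C, 2 / LAM]])
    return np.linalg.solve(A, np.array([f1, f2]))

def pair_targets(F1, F2, r_lim=(1.0, 200.0), v_lim=(-60.0, 60.0)):
    """Brute-force pairing with a range-velocity window: keep only (f1, f2)
    combinations whose solution is physically plausible."""
    out = []
    for f1 in F1:
        for f2 in F2:
            r, v = solve_rv(f1, f2)
            if r_lim[0] <= r <= r_lim[1] and v_lim[0] <= v <= v_lim[1]:
                out.append((r, v))
    return out

# two true targets; the two ghost combinations solve to an implausibly
# short range or an implausibly large velocity and are rejected
targets = [(50.0, 10.0), (100.0, -5.0)]
F1 = [beat(r, v, S1) for r, v in targets]
F2 = [beat(r, v, S2) for r, v in targets]
detections = sorted(pair_targets(F1, F2))
```

With a single slope, any (f1, f2) combination is ambiguous; the second slope plus the plausibility window is what removes the ghosts.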

  15. Algorithms for Autonomous GPS Orbit Determination and Formation Flying: Investigation of Initialization Approaches and Orbit Determination for HEO

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; Speed, Eden; Leitner, Jesse A. (Technical Monitor)

    2002-01-01

    This report summarizes the efforts to date in processing GPS measurements in High Earth Orbit (HEO) applications by the Colorado Center for Astrodynamics Research (CCAR). Two specific projects were conducted: initialization of the orbit propagation software, GEODE, using nominal orbital elements for the IMEX orbit, and processing of actual and simulated GPS data from the AMSAT satellite using a Doppler-only batch filter. CCAR has investigated a number of approaches for initialization of the GEODE orbit estimator with little a priori information. This document describes a batch solution approach that uses pseudorange or Doppler measurements collected over an orbital arc to compute an epoch state estimate. The algorithm is based on limited orbital element knowledge from which a coarse estimate of satellite position and velocity can be determined and used to initialize GEODE. This algorithm assumes knowledge of the nominal orbital elements (a, e, i, Ω, ω) and uses a search on the time of perigee passage (τp) to estimate the host satellite position within the orbit and the approximate receiver clock bias. Results of the method are shown for a simulation including large orbital uncertainties and measurement errors. In addition, CCAR has attempted to process GPS data from the AMSAT satellite to obtain an initial estimate of the orbit. Limited GPS data have been received to date, with few satellites tracked and no computed point solutions. Unknown variables in the received data have made computation of a precise orbit from the recovered pseudoranges difficult. This document describes the Doppler-only batch approach used to compute the AMSAT orbit. Both actual flight data from AMSAT and simulated data generated using the Satellite Tool Kit and Goddard Space Flight Center's Flight Simulator were processed. Results and conclusions for each case are presented.
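
    The coarse initialization described above amounts to converting the assumed classical elements plus a trial time since perigee passage into a position/velocity state. A sketch (two-body propagation only, with no clock-bias search; constants and element values are illustrative):

```python
import numpy as np

MU = 3.986004418e14     # Earth GM (m^3/s^2)

def coe_to_rv(a, e, i, raan, argp, t_since_perigee):
    """Coarse inertial position/velocity from classical elements plus a trial
    time since perigee passage -- the quantity the described search varies."""
    n = np.sqrt(MU / a ** 3)                  # mean motion
    M = n * t_since_perigee                   # mean anomaly
    E = M
    for _ in range(50):                       # Newton's method on Kepler's equation
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    r = a * (1 - e * np.cos(E))
    r_pqw = r * np.array([np.cos(nu), np.sin(nu), 0.0])   # perifocal position
    p = a * (1 - e ** 2)
    v_pqw = np.sqrt(MU / p) * np.array([-np.sin(nu), e + np.cos(nu), 0.0])
    # rotate perifocal -> inertial (Rz(raan) Rx(i) Rz(argp))
    cO, sO = np.cos(raan), np.sin(raan)
    ci, si = np.cos(i), np.sin(i)
    cw, sw = np.cos(argp), np.sin(argp)
    R = np.array([[cO * cw - sO * sw * ci, -cO * sw - sO * cw * ci, sO * si],
                  [sO * cw + cO * sw * ci, -sO * sw + cO * cw * ci, -cO * si],
                  [sw * si, cw * si, ci]])
    return R @ r_pqw, R @ v_pqw
```

Sweeping the trial time over one orbital period and comparing predicted Doppler or pseudorange against the measurements is then a 1-D search, which is what makes this initialization cheap.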

  16. Estimation of the atmosphere-ocean fluxes of greenhouse gases and aerosols at the finer resolution of the coastal ocean.

    NASA Astrophysics Data System (ADS)

    Vieira, Vasco; Sahlée, Erik; Jurus, Pavel; Clementi, Emanuela; Pettersson, Heidi; Mateus, Marcos

    2016-04-01

    The balances and fluxes of greenhouse gases and aerosols between atmosphere and ocean are fundamental for Earth's heat budget. Hence, the scientific community needs to know and simulate them with accuracy in order to monitor climate change from Earth-observation satellites and to produce reliable estimates of climate change using Earth-System Models (ESM). So far, ESM have represented Earth's surface with coarse resolutions so that each cell of the marine domain is dominated by the open ocean. In such cases it is enough to use simple algorithms considering the wind speed 10 m above the sea surface (u10) as the sole driver of the gas transfer velocity. The formulation by Wanninkhof (1992) is broadly accepted as the best. However, the ESM community is becoming increasingly aware of the need to model with finer resolutions. It is then no longer enough to consider only u10 when modelling gas transfer velocities across coastal ocean surfaces. More comprehensive formulations are required that adjust better to local conditions by also accounting for the effects of sea-surface agitation, wave breaking, atmospheric stability of the Surface Boundary Layer, current drag with the bottom, surfactants and rain. Accurate algorithms are also fundamental to monitoring atmospheric and oceanic greenhouse gas concentrations using satellite data and inverse modelling. Past satellite missions ERS, Envisat, Jason-2, Aqua, Terra and Metop have already been remotely sensing the ocean's surface at much finer resolutions than ESM, using instruments like MERIS, MODIS, AMR, AATSR, MIPAS, Poseidon-3, SCIAMACHY, SeaWiFS, and IASI. The planned new satellite missions Sentinel-3, OCO-2 and GOSAT will further increase these resolutions.
We developed a framework to congregate competing formulations for the estimation of the solubility and transfer velocity of virtually any gas in the biosphere, taking into consideration the fundamental atmosphere and ocean variables and the derived geophysical processes mentioned above. First, we tested it with measured data from the Baltic. Then, we adapted it to a coupler for atmosphere (WRF) and ocean (WW3-NEMO) model components and tested it with simulated data for the Mediterranean and coastal North Atlantic. Computational speed was greatly improved by vectorization and parallelization of the calculations. The classical solubility formulation was compared to a recent alternative relying on a different chemistry background. Differences between solubility formulations resulted in a bias of 3.86×10^6 ton of CO2, 880.7 ton of CH4 and 401 ton of N2O dissolved in the first meter below the sea surface of the modelled region, corresponding to 5.9% of the N2O yearly discharged by European estuaries. These differences concentrated in sensitive areas for Earth-System dynamics: the cooler polar waters and the warmer, less-saline coastal waters. The classical transfer velocity formulation using solely u10 was compared to alternatives using the friction velocity, atmospheric stability, sea-surface agitation and wave breaking. Differences between estimated transfer velocities concentrated at the coastal ocean and amounted to 55.82% of the gas volume transferred over the sea surface of the modelled region during the 66-h simulated period.
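    For context, the Wanninkhof (1992) open-ocean formulation that the finer-resolution alternatives are compared against can be written as a one-line function. The 0.39 coefficient is the commonly quoted value for long-term averaged winds, with k in cm/h and u10 in m/s; treat the exact coefficient and units as assumptions of this sketch:

```python
def transfer_velocity_w92(u10, schmidt):
    """Wanninkhof (1992) gas transfer velocity (cm/h) for long-term averaged
    wind speed u10 (m/s) and a gas Schmidt number; 660 is the Schmidt number
    of CO2 in seawater at 20 degrees C."""
    return 0.39 * u10**2 * (schmidt / 660.0) ** -0.5
```

    More comprehensive coastal formulations of the kind discussed above would add terms for friction velocity, atmospheric stability and wave breaking on top of this baseline.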

  17. Joint application of local earthquake tomography and Curie depth point analysis give evidence of magma presence below the geothermal field of Central Greece.

    NASA Astrophysics Data System (ADS)

    Karastathis, Vassilios; Papoulia, Joanna; di Fiore, Boris; Makris, Jannis; Tsambas, Anestis; Stampolidis, Alexandros; Papadopoulos, Gerassimos

    2010-05-01

    Along the coast of the North Evian Gulf, Central Greece, there are significant geothermal sites, thermal springs such as Aedipsos, Yaltra, Lichades, Ilia, Kamena Vourla and Thermopylae, but also volcanoes of Quaternary-Pleistocene age such as Lichades and Vromolimni. Since the deep origin of these local volcanoes and geothermal fields, and their relation to those of the wider region, has not yet been clarified in detail, we attempted a deep structure investigation by conducting a 3D local earthquake tomography study in combination with Curie depth analysis from aeromagnetic data. A seismographic network of 23 portable land stations and 7 OBS was deployed in the area of the North Evian Gulf to record the microseismic activity for a 4-month period. Two thousand events were located with ML 0.7 to 4.5. To build the 3D seismic velocity structure for the investigation area, we implemented traveltime inversion with the SIMULPS14 algorithm on the 540 best-located events. The code performed simultaneous inversion of the model parameters Vp, Vp/Vs and hypocenter locations. In order to select a reliable 1D starting model for the tomographic inversion, the seismic arrivals were first inverted with the VELEST algorithm (minimum 1D velocity model). The values of the damping factor parameter were chosen with the aid of the trade-off curve between model variance and data variance. Six horizontal slices of the 3D P-wave velocity model and the respective slices of the Poisson ratio were constructed. We also set a reliability limit on the sections based on a comparison between the graphical representations of the diagonal elements of the resolution matrix (RDE) and the recovery ability of "checkerboard" models. To estimate the Curie point depth we followed the centroid procedure: the filtered residual dataset of the area was subdivided into 5 square subregions, named C1 to C5, each 90x90 km2 in size and overlapping one another by 70%.
In each subregion the radially averaged power spectrum was computed. The slope of the longest-wavelength part for each subregion yields the centroid depth, zo, of the deepest layer of magnetic sources, while the slope of the second-longest-wavelength spectral segment yields the depth to the top, zt, of the same layer. Using the formula zb = 2zo - zt, the Curie depth estimate was derived for each subregion and assigned at its centre. The estimated depths are between 7 and 8.1 km below sea level. The results showed the existence of a low seismic velocity volume with high Poisson ratio at depths greater than 8 km. Since the Curie depth analysis indicated demagnetization of the material due to high temperatures at the top of this volume, we were led to consider that this volume is related to the presence of a magma chamber. Below the sites of the Quaternary volcanoes of Lichades, Vromolimni and Ag. Ioannis there is a local increase of the seismic velocity above the low velocity anomaly. This was attributed to a crystallized magma volume below the volcanoes. The coincidence of the spatial distribution of surface geothermal sites and volcanoes with the deep low velocity anomaly reinforced our interpretation of magma presence at this anomaly. The seismic slices at 4 km depth showed that the supply of the thermal springs at the surface is related to the main fault zones of the area.
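    The centroid estimate quoted above, zb = 2zo - zt, is a direct arithmetic step. The helper below and its sample depths are illustrative, not values from this study:

```python
def curie_depth(z_o, z_t):
    """Basal depth of the magnetic layer (Curie depth) from the centroid
    depth z_o of the deepest magnetic layer and the depth z_t to its top,
    via the centroid-method relation z_b = 2*z_o - z_t."""
    return 2.0 * z_o - z_t
```

    For example, a centroid depth of 5.0 km with a top at 2.0 km (assumed numbers) would place the Curie depth at 8.0 km.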

  18. An interaction algorithm for prediction of mean and fluctuating velocities in two-dimensional aerodynamic wake flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Orzechowski, J. A.

    1980-01-01

    A theoretical analysis is presented yielding sets of partial differential equations for determination of turbulent aerodynamic flowfields in the vicinity of an airfoil trailing edge. A four-phase interaction algorithm is derived to complete the analysis. Following input, the first computational phase is an elementary viscous-corrected two-dimensional potential flow solution yielding an estimate of the inviscid-flow-induced pressure distribution. Phase C involves solution of the turbulent two-dimensional boundary layer equations over the trailing edge, with transition to a two-dimensional parabolic Navier-Stokes equation system describing the near-wake merging of the upper and lower surface boundary layers. An iteration provides refinement of the potential-flow-induced pressure coupling to the viscous flow solutions. The final phase is a complete two-dimensional Navier-Stokes analysis of the wake flow in the vicinity of a blunt-based airfoil. A finite element numerical algorithm is presented which is applicable to the solution of all partial differential equation sets of the inviscid-viscous aerodynamic interaction algorithm. Numerical results are discussed.

  19. Comparison of Ground- and Space-based Radar Observations with Disdrometer Measurements During the PECAN Field Campaign

    NASA Astrophysics Data System (ADS)

    Torres, A. D.; Rasmussen, K. L.; Bodine, D. J.; Dougherty, E.

    2015-12-01

    Plains Elevated Convection At Night (PECAN) was a large field campaign that studied nocturnal mesoscale convective systems (MCSs), convective initiation, bores, and low-level jets across the central plains of the United States. MCSs are responsible for over half of the warm-season precipitation across the central U.S. plains. The rainfall from deep convection of these systems over land has been observed to be underestimated by satellite radar rainfall-retrieval algorithms by as much as 40 percent. These algorithms depend strongly on the generally unmeasured rain drop-size distribution (DSD). During the campaign, our group measured rainfall DSDs, precipitation fall velocities, and total precipitation in the convective and stratiform regions of MCSs using Ott Parsivel optical laser disdrometers. The disdrometers were co-located with mobile pod units that measured temperature, wind, and relative humidity for quality control purposes. Data from the operational NEXRAD radar in LaCrosse, Wisconsin, and space-based radar measurements from a Global Precipitation Measurement satellite overpass on July 13, 2015 were used for the analysis. The focus of this study is to compare DSD measurements from the disdrometers to radars in an effort to reduce errors in existing rainfall-retrieval algorithms. The error analysis consists of substituting measured DSDs into existing quantitative precipitation estimation techniques (e.g. Z-R relationships and dual-polarization rain estimates) and comparing these estimates to ground measurements of total precipitation. The results from this study will improve climatological estimates of total precipitation in continental convection that are used in hydrological studies, climate models, and other applications.
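    The quantitative precipitation estimation techniques mentioned above reduce, in their simplest forms, to a reflectivity-based Z-R retrieval and a direct integral over a measured DSD. The sketch below uses the classical Marshall-Palmer coefficients as an illustrative assumption, not the study's fitted values:

```python
import math

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Rain rate R (mm/h) from radar reflectivity (dBZ) via a Z = a*R**b
    relationship; a=200, b=1.6 are the classical Marshall-Palmer coefficients."""
    z = 10.0 ** (dbz / 10.0)            # linear reflectivity, mm^6 m^-3
    return (z / a) ** (1.0 / b)

def rain_rate_from_dsd(diameters_mm, counts_m3, fall_speeds_ms):
    """Rain rate (mm/h) directly from a binned drop-size distribution:
    the water volume flux (pi/6) * sum(N(D) * D^3 * v(D)), converted to mm/h.
    counts_m3 are drop concentrations per bin (m^-3)."""
    depth_rate = 0.0                     # m of water per second
    for d_mm, n, v in zip(diameters_mm, counts_m3, fall_speeds_ms):
        depth_rate += n * (d_mm * 1e-3) ** 3 * v
    return (math.pi / 6.0) * depth_rate * 3.6e6   # m/s -> mm/h
```

    Substituting measured DSDs into the second function and comparing against the Z-R output is the shape of the error analysis the abstract describes.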

  20. Rayleigh wave nonlinear inversion based on the Firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou

    2014-06-01

    Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract useful shear wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, has the advantages of being robust and highly effective, and it allows global searching. We show that the algorithm is feasible and advantageous for Rayleigh wave inversion on both synthetic models and field data. The results show that the Firefly algorithm, a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
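    The core update of the Firefly algorithm is simple: each firefly moves toward every brighter one with an attractiveness that decays with distance, plus a slowly cooled random walk. The following is a generic textbook sketch, not the authors' tuned inversion code:

```python
import math
import random

def firefly_minimize(objective, bounds, n_fireflies=15, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal firefly algorithm. Each firefly moves toward every brighter
    (lower-objective) firefly with attractiveness beta0*exp(-gamma*r^2),
    plus a random walk scaled by alpha that is cooled each iteration.
    Returns the best position and its objective value."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    vals = [objective(x) for x in pop]
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if vals[j] < vals[i]:          # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        lo, hi = bounds[d]
                        step = beta * (pop[j][d] - pop[i][d]) \
                               + alpha * (rng.random() - 0.5) * (hi - lo)
                        pop[i][d] = min(hi, max(lo, pop[i][d] + step))
                    vals[i] = objective(pop[i])
        alpha *= 0.97                          # cool the random walk
    best = min(range(n_fireflies), key=lambda k: vals[k])
    return pop[best], vals[best]
```

    In a surface-wave setting the objective would be the misfit between observed and forward-modelled dispersion curves over candidate shear-velocity profiles; here any black-box function works.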

  1. Do Doppler color flow algorithms for mapping disturbed flow make sense?

    PubMed

    Gardin, J M; Lobodzinski, S M

    1990-01-01

    It has been suggested that a major advantage of Doppler color flow mapping is its ability to visualize areas of disturbed ("turbulent") flow, for example, in valvular stenosis or regurgitation and in shunts. To investigate how various color flow mapping instruments display disturbed flow information, color image processing was used to evaluate the most common velocity-variance color encoding algorithms of seven commercially available ultrasound machines. In six of seven machines, green was reportedly added by the variance display algorithms to map areas of disturbed flow. The amount of green intensity added to each pixel along the red and blue portions of the velocity reference color bar was calculated for each machine. In this study, velocities displayed on the reference color bar ranged from +/- 46 to +/- 64 cm/sec, depending on the Nyquist limit. Of note, changing the Nyquist limits depicted on the color reference bars did not change the distribution of the intensities of red, blue, or green within the contour of the reference map, but merely assigned different velocities to the pixels. Most color flow mapping algorithms in our study added increasing intensities of green to increasing positive (red) or negative (blue) velocities along their color reference bars. Most of these machines also added increasing green to red and blue color intensities horizontally across their reference bars as a marker of increased variance (spectral broadening). However, at any given velocity, marked variations were noted between different color flow mapping instruments in the amount of green added to their color velocity reference bars.(ABSTRACT TRUNCATED AT 250 WORDS)

  2. Real time estimation of generation, extinction and flow of muscle fibre action potentials in high density surface EMG.

    PubMed

    Mesin, Luca

    2015-02-01

    Developing a real-time method to estimate generation, extinction and propagation of muscle fibre action potentials from two-dimensional, high-density surface electromyograms (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fit to epochs of surface EMG. The algorithm is tested on simulations of high-density surface EMG (inter-electrode distance equal to 5 mm) from finite-length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an EMG epoch of duration 150 ms. A new real-time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction, and biofeedback. Copyright © 2014 Elsevier Ltd. All rights reserved.
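    The global conduction velocity mentioned above is classically recoverable from the propagation delay between two channels a known distance apart. The brute-force cross-correlation stand-in below is not the paper's optical-flow method, just the simplest illustration of the delay-to-velocity step:

```python
def delay_by_xcorr(sig_a, sig_b, fs):
    """Integer-sample delay (s) of sig_b relative to sig_a, found by
    maximizing the cross-correlation over all candidate lags."""
    n = len(sig_a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += sig_a[i] * sig_b[j]
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag / fs

def conduction_velocity(sig_a, sig_b, inter_electrode_m, fs):
    """Conduction velocity (m/s) from the propagation delay between two
    channels separated by a known inter-electrode distance."""
    d = delay_by_xcorr(sig_a, sig_b, fs)
    return inter_electrode_m / d if d != 0 else float("inf")
```

    In practice sub-sample delay interpolation and multi-channel averaging would be needed for physiological accuracy; this sketch resolves delays only to the sampling grid.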

  3. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as part of an effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified, and possible solutions to build a practical working system are investigated.
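    The position and velocity estimation step is built on the Kalman predict/update cycle. The paper uses an extended Kalman filter in world coordinates; the linear 1-D constant-velocity sketch below only illustrates that cycle:

```python
def kalman_cv_track(measurements, dt, q=1.0, r=1.0):
    """Constant-velocity Kalman filter on noisy 1-D position measurements.
    State is [position, velocity]; q is the white-noise acceleration
    intensity, r the measurement variance. Returns the filtered states."""
    x = [measurements[0], 0.0]
    P = [[r, 0.0], [0.0, 10.0]]
    out = []
    for z in measurements:
        # Predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt**3 / 3,
              P[0][1] + dt * P[1][1] + q * dt**2 / 2],
             [P[1][0] + dt * P[1][1] + q * dt**2 / 2,
              P[1][1] + q * dt]]
        # Update with position measurement z (H = [1, 0])
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]
        y = z - x[0]                       # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(list(x))
    return out
```

    An EKF replaces F and H by Jacobians of nonlinear motion and camera-projection models, but the predict/update structure is identical.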

  4. Data fusion for target tracking and classification with wireless sensor network

    NASA Astrophysics Data System (ADS)

    Pannetier, Benjamin; Doumerc, Robin; Moras, Julien; Dezert, Jean; Canevet, Loic

    2016-10-01

    In this paper, we address the problem of multiple ground target tracking and classification with information obtained from an unattended wireless sensor network. A multiple target tracking (MTT) algorithm, taking into account road and vegetation information, is proposed based on a centralized architecture. One of the key issues is how to adapt the classical MTT approach to satisfy embedded processing constraints. Based on track statistics, the classification algorithm uses estimated location, velocity and acceleration to help classify targets. The algorithm enables tracking of humans and vehicles moving both on and off road. We integrate road or trail width and vegetation cover as constraints in the target motion models to improve the performance of constrained tracking with classification fusion. Our algorithm also employs different dynamic models to handle target maneuvers. The tracking and classification algorithms are integrated into an operational platform (the fusion node). In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deployed in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real exercise for an intelligence operation (a "hunter hunt" scenario).
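    Classification from track statistics can be illustrated by thresholding estimated speed and acceleration; the thresholds below are illustrative assumptions, not the paper's fusion-based classifier:

```python
def classify_track(speeds_ms, accels_ms2):
    """Toy track-statistics classifier. Pedestrians rarely sustain speeds
    above ~3 m/s, while vehicles do; the 3 m/s and 2 m/s^2 thresholds are
    illustrative assumptions only."""
    mean_speed = sum(speeds_ms) / len(speeds_ms)
    max_accel = max(abs(a) for a in accels_ms2)
    if mean_speed < 3.0 and max_accel < 2.0:
        return "pedestrian"
    if mean_speed >= 3.0:
        return "vehicle"
    return "unknown"
```

    A real system would fuse such kinematic evidence with sensor signatures over time rather than hard-thresholding a single track.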

  5. A support vector regression-firefly algorithm-based model for limiting velocity prediction in sewer pipes.

    PubMed

    Ebtehaj, Isa; Bonakdari, Hossein

    2016-01-01

    Sediment transport without deposition is an essential consideration in the optimum design of sewer pipes. In this study, a novel method based on a combination of support vector regression (SVR) and the firefly algorithm (FFA) is proposed to predict the minimum velocity required to avoid sediment settling in pipe channels, which is expressed as the densimetric Froude number (Fr). The efficiency of support vector machine (SVM) models depends on the suitable selection of SVM parameters, so in this study FFA is used to determine them. The parameters affecting the Fr calculation are identified by dimensional analysis, and the resulting dimensionless variables, along with the corresponding models, are introduced. The best performance is attributed to the model that employs the sediment volumetric concentration (C(V)), the ratio of relative median particle diameter to hydraulic radius (d/R), the dimensionless particle number (D(gr)) and the overall sediment friction factor (λ(s)) to estimate Fr. The performance of the SVR-FFA model is compared with genetic programming, artificial neural networks and existing regression-based equations. The results indicate the superior performance of SVR-FFA (mean absolute percentage error = 2.123%; root mean square error = 0.116) compared with the other methods.
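    The predicted quantity, the densimetric Froude number, is a direct function of flow velocity, grain size and relative sediment density. A minimal helper (the default specific gravity of 2.65, typical of quartz sand, is an assumption of this sketch):

```python
import math

def densimetric_froude(velocity, grain_diameter, specific_gravity=2.65, g=9.81):
    """Densimetric Froude number Fr = V / sqrt(g*(s-1)*d) for flow velocity
    V (m/s), median grain diameter d (m) and sediment specific gravity s."""
    return velocity / math.sqrt(g * (specific_gravity - 1.0) * grain_diameter)
```

    Regression models like the SVR-FFA combination described above learn Fr from dimensionless predictors; the limiting velocity then follows by inverting this definition for V.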

  6. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    NASA Astrophysics Data System (ADS)

    Chand, Shyam; Minshull, Tim A.; Priest, Jeff A.; Best, Angus I.; Clayton, Christopher R. I.; Waite, William F.

    2006-08-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L-38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  7. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    USGS Publications Warehouse

    Chand, S.; Minshull, T.A.; Priest, J.A.; Best, A.I.; Clayton, C.R.I.; Waite, W.F.

    2006-01-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L–38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  8. A Comparison of 3D3C Velocity Measurement Techniques

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2013-11-01

    The velocity measurement fidelity of several 3D3C PIV measurement techniques, including tomographic PIV, synthetic aperture PIV, plenoptic PIV, defocusing PIV, and 3D PTV, is compared in simulations. A physically realistic ray-tracing algorithm is used to generate synthetic images of a standard calibration grid and of illuminated particle fields advected by homogeneous isotropic turbulence. The simulated images for the tomographic, synthetic aperture, and plenoptic PIV cases are then used to create three-dimensional reconstructions upon which cross-correlations are performed to yield the measured velocity field. Particle tracking algorithms are applied to the images for defocusing PIV and 3D PTV to directly yield the three-dimensional velocity field. In all cases the measured velocity fields are compared to one another and to the true velocity field using several metrics.

  9. Unsteady Aerodynamic Force Sensing from Strain Data

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2017-01-01

    A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using a two-step approach. Velocities and accelerations of the structure are then computed using an autoregressive moving average model, an on-line parameter estimator, a low-pass filter, and a least-squares curve-fitting method, together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.
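    The differentiation step, obtaining velocities and accelerations from a sampled deflection history, can be approximated with plain central differences. The paper's ARMA-model, low-pass-filter and least-squares pipeline is more elaborate; this is only the simplest stand-in:

```python
def central_differences(samples, dt):
    """Velocity and acceleration histories from a sampled deflection signal
    via second-order central differences; endpoints fall back to one-sided
    first differences (velocity) and the nearest interior stencil (acceleration).
    Requires at least 3 samples."""
    n = len(samples)
    vel = [0.0] * n
    acc = [0.0] * n
    for i in range(n):
        if 0 < i < n - 1:
            vel[i] = (samples[i + 1] - samples[i - 1]) / (2 * dt)
            acc[i] = (samples[i + 1] - 2 * samples[i] + samples[i - 1]) / dt**2
        elif i == 0:
            vel[i] = (samples[1] - samples[0]) / dt
            acc[i] = (samples[2] - 2 * samples[1] + samples[0]) / dt**2
        else:
            vel[i] = (samples[-1] - samples[-2]) / dt
            acc[i] = (samples[-1] - 2 * samples[-2] + samples[-3]) / dt**2
    return vel, acc
```

    On noisy strain-derived deflections, raw finite differences amplify noise, which is why the study filters and curve-fits before differentiating.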

  10. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

    The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet, and Shardlow first- and second-order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different between the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics in most cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
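    The Green-Kubo step for the diffusion coefficient is a time-origin-averaged velocity autocorrelation followed by integration, D = (1/3) ∫⟨v(0)·v(t)⟩ dt. A small pure-Python sketch for one particle's velocity samples (illustrative, not the paper's production analysis):

```python
def vacf(velocities):
    """Velocity autocorrelation <v(0).v(t)> averaged over time origins,
    for a single particle's velocity samples (sequences of 3-vectors)."""
    n = len(velocities)
    out = []
    for lag in range(n):
        m = n - lag
        s = 0.0
        for i in range(m):
            s += sum(a * b for a, b in zip(velocities[i], velocities[i + lag]))
        out.append(s / m)
    return out

def green_kubo_diffusion(velocities, dt):
    """Diffusion coefficient D = (1/3) * integral of the VACF,
    using the trapezoidal rule over the sampled lags."""
    c = vacf(velocities)
    integral = sum(0.5 * (c[i] + c[i + 1]) * dt for i in range(len(c) - 1))
    return integral / 3.0
```

    In a real simulation the VACF would also be averaged over particles and the integral truncated where the correlation has decayed into noise.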

  11. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1991-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it results in higher rms estimation errors but keeps the probability of the frequency estimation error exceeding one-half of the sampling frequency relatively small, providing relatively coarse estimates of the frequency and its derivatives. The second stage, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall phase estimate along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.

  12. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1990-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it results in higher rms estimation errors but keeps the probability of the frequency estimation error exceeding one-half of the sampling frequency relatively small, providing relatively coarse estimates of the frequency and its derivatives. The second stage, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall phase estimate along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.

  13. Modified compensation algorithm of lever-arm effect and flexural deformation for polar shipborne transfer alignment based on improved adaptive Kalman filter

    NASA Astrophysics Data System (ADS)

    Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei

    2017-09-01

    Due to the lever-arm effect and flexural deformation in practical applications of transfer alignment (TA), TA performance is degraded. The existing polar TA algorithm compensates only a fixed lever-arm without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm for the lever-arm effect and flexural deformation is proposed to improve the accuracy and speed of polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to enhance the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results demonstrate that the modified compensation algorithm based on the improved AKF for polar TA can effectively compensate for the lever-arm effect and flexural deformation, and thereby improve the accuracy and speed of TA in the polar region.

  14. Surface wave tomography of the European crust and upper mantle from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    LU, Y.; Stehly, L.; Paul, A.

    2017-12-01

    We present a high-resolution 3-D shear-wave velocity model of the European crust and upper mantle derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous vertical-component seismic recordings from 1293 broadband stations across Europe (10W-35E, 30N-75N). We analyze group velocity dispersion from 5 s to 150 s for cross-correlations of more than 0.8 million virtual source-receiver pairs. 2-D group velocity maps are estimated using adaptive parameterization to accommodate the strong heterogeneity of path coverage. The 3-D velocity model is obtained by merging 1-D models inverted at each pixel through a two-step data-driven inversion algorithm: a non-linear Bayesian Monte Carlo inversion, followed by a linearized inversion. The resulting S-wave velocity model and Moho depth are compared with previous geophysical studies: 1) the crustal model and Moho depth show striking agreement with active seismic imaging results; moreover, the model provides new information, such as a strong difference in the European Moho along two seismic profiles in the Western Alps (Cifalps and ECORS-CROP); 2) the upper mantle model displays strong similarities with published models even at 150 km depth, a range usually imaged using earthquake records.
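
    The step that turns simultaneous noise records at two stations into a virtual source-receiver measurement is a cross-correlation. A minimal sketch of that operation (illustrative; the authors' processing additionally involves preprocessing such as spectral whitening and long-term stacking):

```python
import numpy as np

def noise_crosscorr(a, b):
    """Cross-correlate two noise records in the frequency domain; the
    lag of the peak approximates the inter-station travel time of the
    virtual source-receiver pair."""
    n = len(a)
    A = np.fft.rfft(a, 2 * n)                      # zero-pad to avoid circular wrap
    B = np.fft.rfft(b, 2 * n)
    cc = np.fft.irfft(np.conj(A) * B, 2 * n)
    cc = np.concatenate([cc[-(n - 1):], cc[:n]])   # reorder to lags -(n-1)..(n-1)
    lags = np.arange(-(n - 1), n)
    return lags, cc
```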

  15. Muscle Velocity and Inertial Force from Phase Contrast Magnetic Resonance Imaging

    PubMed Central

    Wentland, Andrew L.; McWalter, Emily J.; Pal, Saikat; Delp, Scott L.; Gold, Garry E.

    2014-01-01

    Purpose To evaluate velocity waveforms in muscle and to create a tool and algorithm for computing and analyzing muscle inertial forces derived from 2D phase contrast (PC) MRI. Materials and Methods PC MRI was performed in the forearm of four healthy volunteers during 1 Hz cycles of wrist flexion-extension as well as in the lower leg of six healthy volunteers during 1 Hz cycles of plantarflexion-dorsiflexion. Inertial forces (F) were derived via the equation F = ma. The mass, m, was derived by multiplying voxel volume by voxel-by-voxel estimates of density via fat-water separation techniques. Acceleration, a, was obtained via the derivative of the PC MRI velocity waveform. Results Mean velocities in the flexors of the forearm and lower leg were 1.94 ± 0.97 cm/s and 5.57 ± 2.72 cm/s, respectively, as averaged across all subjects; the inertial forces in the flexors of the forearm and lower leg were 1.9 × 10⁻³ ± 1.3 × 10⁻³ N and 1.1 × 10⁻² ± 6.1 × 10⁻³ N, respectively, as averaged across all subjects. Conclusion PC MRI provided a promising means of computing muscle velocities and inertial forces—providing the first method for quantifying inertial forces. PMID:25425185
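
    The F = ma computation amounts to a mass estimate multiplied by a numerical time derivative of the velocity waveform. A minimal sketch, assuming a single density value per voxel rather than the paper's voxel-by-voxel fat-water density maps:

```python
import numpy as np

def inertial_force(velocity, dt, voxel_volume, density):
    """Inertial force per voxel: mass (volume x density) times the
    acceleration obtained by differentiating the velocity waveform."""
    a = np.gradient(velocity, dt)          # central differences in time
    m = voxel_volume * density
    return m * a
```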

  16. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest neighbor searching over a large database of observations can produce reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accuracy gained from a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of nearest neighbor searches. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Organizing the database with a KD tree reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
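
    The speedup comes from replacing exhaustive search with a KD-tree query. A minimal pure-Python sketch of the data structure (illustrative; the Gutenberg Algorithm's feature space and implementation are not reproduced):

```python
import math

def build_kdtree(points, depth=0):
    """Build a KD tree over k-dimensional feature vectors (e.g.
    filter-bank amplitudes), splitting on a cycling axis at the median."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, depth=0, best=None):
    """Nearest-neighbor query: descend toward the query, then backtrack
    into the far subtree only if the splitting plane is closer than the
    best distance found so far."""
    if node is None:
        return best
    point, left, right = node
    d = math.dist(point, query)
    if best is None or d < best[1]:
        best = (point, d)
    axis = depth % len(query)
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = nearest(near, query, depth + 1, best)
    if abs(query[axis] - point[axis]) < best[1]:
        best = nearest(far, query, depth + 1, best)
    return best
```

    The pruning test in the last branch is what reduces the average query from linear to roughly logarithmic time in a well-balanced tree.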

  17. Design of a fuzzy differential evolution algorithm to predict non-deposition sediment transport

    NASA Astrophysics Data System (ADS)

    Ebtehaj, Isa; Bonakdari, Hossein

    2017-12-01

    Since the flow entering a sewer contains solid matter, deposition at the bottom of the channel is inevitable. It is difficult to understand the complex, three-dimensional mechanism of sediment transport in sewer pipelines. Therefore, a method to estimate the limiting velocity is necessary for optimal designs. Due to the inability of gradient-based algorithms to train Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for non-deposition sediment transport prediction, a new hybrid ANFIS method based on a differential evolution algorithm (ANFIS-DE) is developed. The training and testing performance of ANFIS-DE is evaluated using a wide range of dimensionless parameters gathered from the literature. The input combination used to estimate the densimetric Froude number (Fr) includes the volumetric sediment concentration (C_V), the ratio of median particle diameter to hydraulic radius (d/R), the ratio of median particle diameter to pipe diameter (d/D) and the overall friction factor of sediment (λ_s). The testing results are compared with the ANFIS model and regression-based equation results. The ANFIS-DE technique predicted sediment transport at the limit of deposition with lower root mean square error (RMSE = 0.323) and mean absolute percentage error (MAPE = 0.065), and higher accuracy (R² = 0.965), than the ANFIS model and regression-based equations.
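
    The DE component is a standard population-based optimizer. A bare-bones DE/rand/1/bin sketch (illustrative; in ANFIS-DE it would tune the fuzzy-system parameters rather than the toy objective shown in the test below):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Minimize f over box bounds with DE/rand/1/bin: mutate with a
    scaled difference of two random members, crossover with the target,
    and keep the trial only if it improves."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [ai + F * (bi - ci) if random.random() < CR else xi
                     for xi, ai, bi, ci in zip(pop[i], a, b, c)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ct = f(trial)
            if ct < cost[i]:                 # greedy selection
                pop[i], cost[i] = trial, ct
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

    Because selection needs only objective values, no gradients, DE sidesteps the training difficulty the abstract attributes to gradient-based ANFIS training.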

  18. Wave Gradiometry for the Central U.S

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Holt, W. E.

    2013-12-01

    Wave gradiometry is a new technique utilizing the shape of seismic wave fields captured by USArray transportable stations to determine fundamental wave propagation characteristics. The horizontal and vertical wave displacements, spatial gradients and time derivatives of displacement are linearly linked by two coefficients which can be used to infer wave slowness, back azimuth, radiation pattern and geometrical spreading. The reducing velocity method from Langston [2007] is applied to pre-process our data. Spatial gradients of the shifted displacement fields are estimated using bi-cubic splines [Beavan and Haines, 2001]. Using singular value decomposition, the spatial gradients are then inverted to iteratively solve for the wave parameters mentioned above. Numerical experiments with synthetic data sets provided by Princeton University's Near Real Time Global Seismicity Portal are conducted to test the algorithm's stability and evaluate errors. Our results based on real records in the central U.S. show that the average Rayleigh wave phase velocity ranges from 3.8 to 4.2 km/s for periods from 60-125 s and 3.6 to 4.0 km/s for periods from 25-60 s, consistent with reference Earth models. Geometrical spreading and radiation pattern show similar features between different frequency bands. Azimuth variations are partially correlated with phase velocity changes. Finally, we calculated waveform amplitude and spatial gradient uncertainties to determine formal errors in the estimated wave parameters. Further effort will be put into calculating shear wave velocity structure with respect to depth in the studied area. The wave gradiometry method is now being employed across the USArray using real observations; results obtained to date are for stations in the eastern portion of the U.S., including Rayleigh wave phase velocities derived from the August 20th, 2011 Vanuatu earthquake for periods from 100-125 s.

  19. Fast numerics for the spin orbit equation with realistic tidal dissipation and constant eccentricity

    NASA Astrophysics Data System (ADS)

    Bartuccelli, Michele; Deane, Jonathan; Gentile, Guido

    2017-08-01

    We present an algorithm for the rapid numerical integration of a time-periodic ODE with a small dissipation term that is C^1 in the velocity. Such an ODE arises as a model of spin-orbit coupling in a star/planet system, and the motivation for devising a fast algorithm for its solution comes from the desire to estimate the probability of capture into various solutions via Monte Carlo simulation: the integration times are very long, since we are interested in phenomena occurring on timescales of the order of 10^6-10^7 years. The proposed algorithm is based on the high-order Euler method described in Bartuccelli et al. (Celest Mech Dyn Astron 121(3):233-260, 2015), and it requires computer algebra to set up the code for its implementation. The payoff is an overall increase in speed by a factor of about 7.5 compared to standard numerical methods. Means for accelerating the purely numerical computation are also discussed.

  20. On using the Multiple Signal Classification algorithm to study microbaroms

    NASA Astrophysics Data System (ADS)

    Marcillo, O. E.; Blom, P. S.; Euler, G. G.

    2016-12-01

    Multiple Signal Classification (MUSIC) (Schmidt, 1986) is a well-known high-resolution algorithm used in array processing for parameter estimation. We report on the application of MUSIC to infrasonic array data in a study of the structure of microbaroms. Microbaroms, which can be observed globally and display energy centered around 0.2 Hz, are infrasonic signals generated by the non-linear interaction of ocean surface waves; this interaction radiates into the ocean and atmosphere, and into the solid earth in the form of microseisms. Microbarom sources are dynamic and, in many cases, distributed in space and moving in time. We assume that the microbarom energy detected by an infrasonic array is the result of multiple sources (with different back-azimuths) in the same bandwidth and apply the MUSIC algorithm accordingly to recover the back-azimuth and trace velocity of the individual components. Preliminary results show that the multiple-component assumption in MUSIC allows one to resolve fine structure in the microbarom band that can be related to multiple ocean surface phenomena.
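
    As a simplified temporal analogue, MUSIC can be sketched for estimating the frequency of a single sinusoid; the infrasound application would instead use a steering vector parameterized by back-azimuth and trace velocity over the array geometry (this toy version is an assumption for illustration):

```python
import numpy as np

def music_spectrum(x, m, freqs):
    """MUSIC pseudospectrum sketch: build an m x m sample covariance
    from sliding snapshots, split signal/noise subspaces by
    eigendecomposition, and evaluate 1 / ||E_n^H a(f)||^2 on a grid."""
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m + 1)]).T   # snapshots as columns
    R = X @ X.conj().T / X.shape[1]
    w, V = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = V[:, :-1]                     # noise subspace (assume one source)
    n = np.arange(m)
    P = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * n)                 # steering vector
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(P)
```

    Peaks of the pseudospectrum mark parameters whose steering vector is nearly orthogonal to the noise subspace, which is what gives MUSIC its resolving power for closely spaced components.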

  1. Investigation of new techniques for aircraft navigation using the omega navigation

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.

    1978-01-01

    An OMEGA navigation receiver with a microprocessor as the computational component was investigated. A version of the Intel 4004 microprocessor macroassembler suitable for use on the CDC-6600 system was developed, along with a FORTRAN IV simulator program for the microprocessor. Supporting studies included the development and evaluation of navigation algorithms to generate relative position information from OMEGA VLF phase measurements. Simulation studies were used to evaluate assumptions made in developing a navigation equation in OMEGA Line of Position (LOP) coordinates. Included in the navigation algorithms was a procedure for calculating a position in latitude/longitude given an OMEGA LOP fix. Implementation of a digital phase locked loop (DPLL) was evaluated on the basis of phase response characteristics over a range of input phase variations. Also included is an analytical evaluation, on the basis of error probability, of an algorithm for automatic time synchronization of the receiver to the OMEGA broadcast format. The use of actual OMEGA phase data and published propagation prediction corrections to determine phase velocity estimates is discussed.

  2. Direct and precise measurement of displacement and velocity of flexible web in roll-to-roll manufacturing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Dongwoo; Lee, Eonseok; Choi, Young-Man

    Interest in the production of printed electronics using roll-to-roll systems has gradually increased due to their low mass-production costs and compatibility with flexible substrates. To improve the accuracy of roll-to-roll manufacturing systems, the movement of the web needs to be measured precisely in advance. In this paper, a novel measurement method is developed to measure the displacement and velocity of the web precisely and directly. The proposed algorithm is based on the traditional single-field encoder principle, with the scale grating replaced by a grating printed on the web. Because a printed grating cannot be as accurate as the scale grating in a traditional encoder, there will inevitably be variations in pitch and line-width, and the motion of the web must be measured despite these variations in the printed grating patterns. For this reason, the developed algorithm includes a precise method of estimating the variations in pitch. In addition, a method of correcting the Lissajous curve is presented for precision phase interpolation, improving measurement accuracy by mapping the Lissajous curve onto a unit circle. The performance of the developed method is evaluated by simulation and experiment. In the experiment, the displacement error was less than 2.5 μm and the 1σ velocity error was about 0.25% while the grating scale moved 30 mm.

  3. Direct and precise measurement of displacement and velocity of flexible web in roll-to-roll manufacturing systems

    NASA Astrophysics Data System (ADS)

    Kang, Dongwoo; duk Kim, Young; Lee, Eonseok; Choi, Young-Man; Lee, Taik-Min; Kim, Dongmin

    2013-12-01

    Interest in the production of printed electronics using roll-to-roll systems has gradually increased due to their low mass-production costs and compatibility with flexible substrates. To improve the accuracy of roll-to-roll manufacturing systems, the movement of the web needs to be measured precisely in advance. In this paper, a novel measurement method is developed to measure the displacement and velocity of the web precisely and directly. The proposed algorithm is based on the traditional single-field encoder principle, with the scale grating replaced by a grating printed on the web. Because a printed grating cannot be as accurate as the scale grating in a traditional encoder, there will inevitably be variations in pitch and line-width, and the motion of the web must be measured despite these variations in the printed grating patterns. For this reason, the developed algorithm includes a precise method of estimating the variations in pitch. In addition, a method of correcting the Lissajous curve is presented for precision phase interpolation, improving measurement accuracy by mapping the Lissajous curve onto a unit circle. The performance of the developed method is evaluated by simulation and experiment. In the experiment, the displacement error was less than 2.5 μm and the 1σ velocity error was about 0.25% while the grating scale moved 30 mm.
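
    The Lissajous correction step can be sketched in simplified form: remove DC offsets and normalize amplitudes so the quadrature locus maps onto a unit circle, then interpolate phase with atan2 (a full Heydemann-style correction, closer to what such systems typically implement, would also fit the quadrature phase error):

```python
import numpy as np

def correct_lissajous(u, v):
    """Simplified Lissajous correction: center and rescale the two
    quadrature encoder signals onto a unit circle, then recover the
    continuous interpolated phase."""
    uc = (u - (u.max() + u.min()) / 2) / ((u.max() - u.min()) / 2)
    vc = (v - (v.max() + v.min()) / 2) / ((v.max() - v.min()) / 2)
    phase = np.unwrap(np.arctan2(vc, uc))   # displacement = phase * pitch / (2*pi)
    return uc, vc, phase
```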

  4. Shock wave propagation in layered planetary embryos

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Jafar; Ivanov, Boris A.

    2014-05-01

    The propagation of an impact-induced shock wave inside a planetary embryo is investigated using the Hugoniot equations and a new scaling law governing the particle velocity variations along a shock ray inside a spherical body. The scaling law is adopted to determine the impact heating of a growing embryo in its early stage, when it is an undifferentiated and uniform body. The new scaling law, like other existing scaling laws, is not suitable for a large differentiated embryo consisting of a silicate mantle overlying an iron core. An algorithm is developed in this study on the basis of ray theory in a spherically symmetric body which relates the shock parameters at the top of the core to those at the base of the mantle, thus enabling the adoption of scaling laws to estimate the impact heating of both the mantle and the core. The algorithm is applied to two embryo models: a simple two-layered model with a uniform mantle overlying a uniform core, and a model where the pre-shock density and acoustic velocity of the embryo are radially dependent. The former illustrates details of the particle velocity, shock pressure, and temperature increase behind the shock front in a 2D axisymmetric geometry. The latter provides a means to compare the results with those obtained by a hydrocode simulation. The agreement between the results of the two techniques in revealing the effects of the core-mantle boundary on the shock wave transmission across the boundary is encouraging.

  5. A probabilistic framework for single-station location of seismicity on Earth and Mars

    NASA Astrophysics Data System (ADS)

    Böse, M.; Clinton, J. F.; Ceylan, S.; Euchner, F.; van Driel, M.; Khan, A.; Giardini, D.; Lognonné, P.; Banerdt, W. B.

    2017-01-01

    Locating the source of seismic energy from a single three-component seismic station is associated with large uncertainties, originating from challenges in identifying seismic phases, as well as inevitable pick and model uncertainties. The challenge is even higher for planets such as Mars, where interior structure is a priori largely unknown. In this study, we address the single-station location problem by developing a probabilistic framework that combines location estimates from multiple algorithms to estimate the probability density function (PDF) for epicentral distance, back azimuth, and origin time. Each algorithm uses independent and complementary information in the seismic signals. Together, the algorithms allow locating seismicity ranging from local to teleseismic quakes. Distances and origin times of large regional and teleseismic events (M > 5.5) are estimated from observed and theoretical body- and multi-orbit surface-wave travel times. The latter are picked from the maxima in the waveform envelopes in various frequency bands. For smaller events at local and regional distances, only first arrival picks of body waves are used, possibly in combination with fundamental Rayleigh R1 waveform maxima where detectable; depth phases, such as pP or PmP, help constrain source depth and improve distance estimates. Back azimuth is determined from the polarization of the Rayleigh- and/or P-wave phases. When seismic signals are good enough for multiple approaches to be used, estimates from the various methods are combined through the product of their PDFs, resulting in an improved event location and reduced uncertainty range estimate compared to the results obtained from each algorithm independently. To verify our approach, we use both earthquake recordings from existing Earth stations and synthetic Martian seismograms. 
The Mars synthetics are generated with a full-waveform scheme (AxiSEM) using spherically-symmetric seismic velocity, density and attenuation models of Mars that incorporate existing knowledge of Mars internal structure, and include expected ambient and instrumental noise. While our probabilistic framework is developed mainly for application to Mars in the context of the upcoming InSight mission, it is also relevant for locating seismic events on Earth in regions with sparse instrumentation.
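
    The fusion step of the framework, multiplying independent per-algorithm PDFs over a common parameter grid and renormalizing, can be sketched as follows (a minimal illustration assuming a uniform grid; the function name is an assumption):

```python
import numpy as np

def combine_pdfs(grid, pdfs):
    """Fuse independent location estimates: multiply the per-algorithm
    PDFs over a common parameter grid (e.g. epicentral distance) and
    renormalize. Sharp, mutually consistent estimates reinforce each
    other; broad ones barely change the result."""
    p = np.ones_like(grid, dtype=float)
    for pdf in pdfs:
        p = p * pdf
    dx = grid[1] - grid[0]          # assumes a uniform grid
    return p / (p.sum() * dx)
```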

  6. Estimation of tunnel blockage from wall pressure signatures: A review and data correlation

    NASA Technical Reports Server (NTRS)

    Hackett, J. E.; Wilsden, D. J.; Lilley, D. E.

    1979-01-01

    A method is described for estimating low speed wind tunnel blockage, including model volume, bubble separation and viscous wake effects. A tunnel-centerline source/sink distribution is derived from measured wall pressure signatures using fast algorithms to solve the inverse problem in three dimensions. Blockage may then be computed throughout the test volume. Correlations using scaled models or tests in two tunnels were made in all cases. In many cases model reference area exceeded 10% of the tunnel cross-sectional area. Good correlations were obtained for model surface pressures, lift, drag and pitching moment. It is shown that blockage-induced velocity variations across the test section are relatively unimportant, but axial gradients should be considered when model size is determined.

  7. Acceleration estimation using a single GPS receiver for airborne scalar gravimetry

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaohong; Zheng, Kai; Lu, Cuixian; Wan, Jiakuan; Liu, Zhanke; Ren, Xiaodong

    2017-11-01

    Kinematic acceleration estimated using the global positioning system (GPS) is significant for airborne scalar gravimetry. As the conventional approach based on the differential global positioning system (DGPS) presents several drawbacks, including additional cost and the impracticality of setting up nearby base stations in challenging environments, we introduce an alternative approach, Modified Kin-VADASE (MKin-VADASE), based on a modified Kin-VADASE approach that does not require ground base stations. In this approach, the aircraft velocities are first estimated with the modified Kin-VADASE. The accelerations are then obtained from the velocity estimates using a Taylor-approximation differentiator. The impact of carrier-phase measurement noise and satellite ephemeris errors on the acceleration estimates is investigated carefully in the frequency domain using the fast Fourier transform (FFT). The results show that the satellite clock products have a significant impact on the acceleration estimates. The performance of MKin-VADASE, PPP, and DGPS is then validated using flight tests carried out in Shanxi Province, China. The accelerations are estimated using the three approaches, then used to calculate the gravity disturbances. Finally, the analysis of crossover differences and terrestrial gravity data are used to evaluate the accuracy of the gravity disturbance estimates. The results show that the performances of MKin-VADASE, PPP and DGPS are comparable, but the computational complexity of MKin-VADASE is greatly reduced relative to PPP and DGPS. For the results of the three approaches, the RMS of crossover differences of gravity disturbance estimates is approximately 1-1.5 mGal at a spatial resolution of 3.5 km (half wavelength) after crossover adjustment, and the accuracy is approximately 3-4 mGal with respect to terrestrial gravity data.
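
    The differentiation step can be sketched with a fourth-order central stencil, one common form of a Taylor-approximation differentiator (the paper's exact operator and edge handling are not reproduced; this version is an assumption):

```python
import numpy as np

def velocity_to_acceleration(v, dt):
    """Differentiate a velocity series with a fourth-order central
    (Taylor-series) stencil in the interior and first-order one-sided
    differences at the edges."""
    a = np.empty_like(v, dtype=float)
    a[2:-2] = (-v[4:] + 8 * v[3:-1] - 8 * v[1:-3] + v[:-4]) / (12 * dt)
    a[:2] = (v[1:3] - v[0:2]) / dt        # forward differences
    a[-2:] = (v[-2:] - v[-3:-1]) / dt     # backward differences
    return a
```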

  8. Velocity interferometer signal de-noising using modified Wiener filter

    NASA Astrophysics Data System (ADS)

    Rav, Amit; Joshi, K. D.; Roy, Kallol; Kaushik, T. C.

    2017-05-01

    The accuracy and precision of the non-contact velocity interferometer system for any reflector (VISAR) depend not only on good optical design and a linear optical-to-electrical conversion system, but also on accurate and robust post-processing techniques. The performance of these techniques, such as the phase unwrapping algorithm, depends on the signal-to-noise ratio (SNR) of the recorded signal. In the present work, a novel method of improving the SNR of the recorded VISAR signal, based on knowledge of the noise characteristics of the signal conversion and recording system, is presented. The proposed method uses a modified Wiener filter, for which the signal power spectrum estimate is obtained using a spectral subtraction method (SSM), and the noise power spectrum estimate is obtained by averaging the recorded signal during the period when no target movement is expected. Since the noise power spectrum estimate is dynamic in nature, and obtained for each experimental record individually, the resulting signal quality is high. The proposed method is applied to simulated standard signals and found to be not only better than the SSM, but also less sensitive to the selection of the noise floor during signal power spectrum estimation. Finally, the proposed method is applied to a recorded experimental signal and an improvement in the SNR is reported.
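
    The filter itself is compact. A sketch assuming a noise-only reference record of the same length as the signal (the paper instead averages the pre-motion portion of each record; function names are illustrative):

```python
import numpy as np

def wiener_denoise(signal, noise_ref):
    """Modified Wiener filter sketch: noise power spectrum from a
    noise-only reference record, signal power by spectral subtraction,
    and frequency-domain gain H = Ps / (Ps + Pn)."""
    X = np.fft.rfft(signal)
    Pn = np.abs(np.fft.rfft(noise_ref)) ** 2        # noise power estimate
    Ps = np.maximum(np.abs(X) ** 2 - Pn, 0.0)       # spectral subtraction
    H = Ps / (Ps + Pn + 1e-12)                      # Wiener gain
    return np.fft.irfft(H * X, len(signal))
```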

  9. Flight parameter estimation using instantaneous frequency and time delay measurements from a three-element planar acoustic array.

    PubMed

    Lo, Kam W

    2016-05-01

    The acoustic signal emitted by a turbo-prop aircraft consists of a strong narrowband tone superimposed on a broadband random component. A ground-based three-element planar acoustic array can be used to estimate the full set of flight parameters of a turbo-prop aircraft in transit by measuring the time delay (TD) between the signal received at the reference sensor and the signal received at each of the other two sensors of the array over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the reference sensor to improve the precision of the flight parameter estimates. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the aircraft velocity and altitude can be greatly reduced when IF measurements are used together with TD measurements. Two flight parameter estimation algorithms that utilize both IF and TD measurements are formulated and their performances are evaluated using both simulated and real data.

  10. The exploration technology and application of sea surface wave

    NASA Astrophysics Data System (ADS)

    Wang, Y.

    2016-12-01

    In order to investigate the seismic velocity structure of the shallow sediments in the Bohai Sea of China, we conduct a shear-wave velocity inversion of the surface wave dispersion data from a survey of 12 ocean bottom seismometers (OBS) and 377 shots of a 9000 in³ air gun. With OBS station spacing of 5 km and air gun shot spacing of 190 m, high-quality Rayleigh wave data were recorded by the OBSs within 0.4-5 km offset. Rayleigh wave phase velocity dispersion for the fundamental mode and first overtone in the frequency band of 0.9-3.0 Hz were retrieved with the phase-shift method and inverted for the shear-wave velocity structure of the shallow sediments with a damped iterative least-squares algorithm. Pseudo 2-D shear-wave velocity profiles with depth to 400 m show coherent features of relatively weak lateral velocity variation. The uncertainty in shear-wave velocity structure was also estimated based on the pseudo 2-D profiles from 6 trial inversions with different initial models, which suggest a velocity uncertainty < 30 m/s for most parts of the 2-D profiles. The layered structure with little lateral variation may be attributable to the continuous sedimentary environment in the Cenozoic sedimentary basin of the Bohai Bay basin. The shear-wave velocity of 200-300 m/s in the top 100 m of the Bohai Sea floor may provide important information for offshore site response studies in earthquake engineering. Furthermore, the very low shear-wave velocity structure (200-700 m/s) down to 400 m depth could produce a significant travel time delay of 1 s in the S wave arrivals, which needs to be considered to avoid serious bias in S wave traveltime tomographic models.

  11. Magnetic particle imaging for in vivo blood flow velocity measurements in mice

    NASA Astrophysics Data System (ADS)

    Kaul, Michael G.; Salamon, Johannes; Knopp, Tobias; Ittrich, Harald; Adam, Gerhard; Weller, Horst; Jung, Caroline

    2018-03-01

    Magnetic particle imaging (MPI) is a new imaging technology. It is a potential candidate to be used for angiographic purposes, to study perfusion and cell migration. The aim of this work was to measure velocities of the flowing blood in the inferior vena cava of mice using MPI, and to evaluate it in comparison with magnetic resonance imaging (MRI). A phantom mimicking the flow within the inferior vena cava with velocities of up to 21 cm s⁻¹ was used for the evaluation of the applied analysis techniques. Time-density and distance-density analyses for bolus tracking were performed to calculate flow velocities. These findings were compared with the calibrated velocities set by a flow pump, and it can be concluded that velocities of up to 21 cm s⁻¹ can be measured by MPI. A time-density analysis using an arrival time estimation algorithm showed the best agreement with the preset velocities. In vivo measurements were performed in healthy FVB mice (n = 10). MRI experiments were performed using phase contrast (PC) for velocity mapping. For MPI measurements, a standardized injection of a superparamagnetic iron oxide tracer was applied. In vivo MPI data were evaluated by a time-density analysis and compared to PC MRI. A Bland-Altman analysis revealed good agreement between the in vivo velocities acquired by MRI of 4.0 ± 1.5 cm s⁻¹ and those measured by MPI of 4.8 ± 1.1 cm s⁻¹. Magnetic particle imaging is a new tool with which to measure and quantify flow velocities. It is fast, radiation-free, and produces 3D images. It therefore offers the potential for vascular imaging.
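
    The time-density analysis reduces to estimating bolus arrival times at two positions and dividing the known separation by the transit time. A minimal sketch using a fractional-peak threshold with linear interpolation (the paper's arrival-time estimator is not reproduced; the threshold choice is an assumption):

```python
import numpy as np

def arrival_time(t, density, frac=0.5):
    """Time at which a time-density (bolus) curve first reaches a given
    fraction of its peak, with linear interpolation between samples."""
    thr = frac * density.max()
    i = np.argmax(density >= thr)          # first sample at/above threshold
    if i == 0:
        return t[0]
    f = (thr - density[i - 1]) / (density[i] - density[i - 1])
    return t[i - 1] + f * (t[i] - t[i - 1])

def flow_velocity(t, curve_a, curve_b, distance):
    """Velocity from bolus transit between two positions a known distance apart."""
    return distance / (arrival_time(t, curve_b) - arrival_time(t, curve_a))
```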

  12. Field Programmable Gate Array Based Parallel Strapdown Algorithm Design for Strapdown Inertial Navigation Systems

    PubMed Central

    Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua

    2011-01-01

    A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental angle and accelerometer incremental velocity samples, improving the accuracy of the system. Then, in order to implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter-aircraft flight data. It is shown that this parallel strapdown algorithm on the FPGA platform greatly decreases the execution time of the algorithm, meeting the real-time and high-precision requirements of the system in highly dynamic environments, relative to an existing implementation on a DSP platform. PMID:22164058

  13. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography and, more recently, single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing impacts, high rotational velocity impacts, and direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry, computed with three different accelerometer configurations under varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to be extracted from single-axis accelerometer data.

  14. Three Dimensional Gait Analysis Using Wearable Acceleration and Gyro Sensors Based on Quaternion Calculations

    PubMed Central

    Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki

    2013-01-01

    This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial acceleration sensor and a tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three dimensional wire frame animation of the volunteers during the gait. Gait analysis was conducted on five volunteers, and results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectory in the horizontal and sagittal plane. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
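
    The quaternion-based orientation update at the heart of such a gait method can be sketched as follows. This is a generic constant-rate integration of one gyro sample per step, not the authors' exact implementation; the sample rate and angular velocity are invented for illustration.

```python
import math

def quat_mult(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def integrate_gyro(q, omega, dt):
    """Advance orientation quaternion q by one gyro sample.

    omega: angular velocity (rad/s) in the sensor frame; the incremental
    rotation over dt is converted to a unit quaternion and composed with q.
    """
    wx, wy, wz = omega
    theta = math.sqrt(wx*wx + wy*wy + wz*wz) * dt   # rotation angle this step
    if theta < 1e-12:
        return q
    ax, ay, az = wx*dt/theta, wy*dt/theta, wz*dt/theta  # rotation axis
    s = math.sin(theta / 2.0)
    dq = (math.cos(theta / 2.0), ax*s, ay*s, az*s)
    return quat_mult(q, dq)

# rotate 90 degrees about z in 100 steps of constant angular velocity
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = integrate_gyro(q, (0.0, 0.0, math.pi / 2), 0.01)
```

For constant angular velocity this accumulation is exact up to rounding, so q approaches the quaternion of a 90-degree z rotation, (cos 45°, 0, 0, sin 45°).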

  15. DAVIS: A direct algorithm for velocity-map imaging system

    NASA Astrophysics Data System (ADS)

    Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.

    2018-05-01

    In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
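
    The Legendre-expansion step can be illustrated with a minimal sketch. The test distribution I(x) = 1 + 0.5 P2(x) and the midpoint-rule projection below are assumptions for illustration, not the paper's fitting procedure; projection uses the orthogonality relation c_n = (2n+1)/2 ∫ I(x) P_n(x) dx on x = cos θ.

```python
def legendre(n, x):
    # Bonnet recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def legendre_coeffs(intensity, nmax, samples=2000):
    """Project an angular distribution I(cos theta) onto Legendre polynomials
    by midpoint-rule integration over x in [-1, 1]."""
    coeffs = []
    dx = 2.0 / samples
    for n in range(nmax + 1):
        total = 0.0
        for i in range(samples):
            x = -1.0 + (i + 0.5) * dx
            total += intensity(x) * legendre(n, x) * dx
        coeffs.append((2*n + 1) / 2.0 * total)
    return coeffs

# distribution with known anisotropy: I(x) = 1 + 0.5 * P2(x)
I = lambda x: 1.0 + 0.5 * legendre(2, x)
c = legendre_coeffs(I, 4)
```

The projection recovers c ≈ [1.0, 0, 0.5, 0, 0], i.e. the isotropic term and the P2 anisotropy.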

  16. Earthquake fracture energy inferred from kinematic rupture models on extended faults

    USGS Publications Warehouse

    Tinti, E.; Spudich, P.; Cocco, M.

    2005-01-01

    We estimate fracture energy on extended faults for several recent earthquakes by retrieving dynamic traction evolution at each point on the fault plane from slip history imaged by inverting ground motion waveforms. We define the breakdown work (Wb) as the excess of work over some minimum traction level achieved during slip. Wb is equivalent to "seismological" fracture energy (G) in previous investigations. Our numerical approach uses slip velocity as a boundary condition on the fault. We employ a three-dimensional finite difference algorithm to compute the dynamic traction evolution in the time domain during the earthquake rupture. We estimate Wb by calculating the scalar product between dynamic traction and slip velocity vectors. This approach does not require specifying a constitutive law and assuming dynamic traction to be collinear with slip velocity. If these vectors are not collinear, the inferred breakdown work depends on the initial traction level. We show that breakdown work depends on the square of slip. The spatial distribution of breakdown work in a single earthquake is strongly correlated with the slip distribution. Breakdown work density and its integral over the fault, breakdown energy, scale with seismic moment according to a power law (with exponent 0.59 and 1.18, respectively). Our estimates of breakdown work range between 4 × 10⁵ and 2 × 10⁷ J/m² for earthquakes having moment magnitudes between 5.6 and 7.2. We also compare our inferred values with geologic surface energies. This comparison might suggest that breakdown work for large earthquakes goes primarily into heat production. Copyright 2005 by the American Geophysical Union.
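
    The breakdown-work estimate can be sketched in scalar form: Wb = ∫ (τ(t) − τ_min) v(t) dt at one fault point. The toy traction and slip-velocity histories below are invented, and the scalar product of the paper's vector quantities is reduced to a product of scalars.

```python
def breakdown_work(traction, slip_velocity, dt):
    """Breakdown work density (J/m^2) at one fault point.

    Wb = integral of (tau(t) - tau_min) * v(t) dt, where tau_min is the
    minimum traction achieved during slip (scalar stand-in for the
    traction/slip-velocity scalar product).
    """
    tau_min = min(traction)
    return sum((tau - tau_min) * v
               for tau, v in zip(traction, slip_velocity)) * dt

# toy history: traction drops from 10 MPa to 2 MPa while slipping at 1 m/s
traction = [10e6, 8e6, 6e6, 4e6, 2e6]   # Pa
slip_vel = [1.0, 1.0, 1.0, 1.0, 1.0]    # m/s
dt = 0.01                               # s
Wb = breakdown_work(traction, slip_vel, dt)
```

Here the excess tractions sum to 20 MPa, giving Wb = 2 × 10⁵ J/m², at the low end of the range quoted above.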

  17. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) is used for a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.

  18. MALT90 Kinematic Distances to Dense Molecular Clumps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, J. Scott; Jackson, James M.; Sanhueza, Patricio

    Using molecular-line data from the Millimetre Astronomy Legacy Team 90 GHz Survey (MALT90), we have estimated kinematic distances to 1905 molecular clumps identified in the ATLASGAL 870 μm continuum survey over the longitude range 295° < l < 350°. The clump velocities were determined using a flux-weighted average of the velocities obtained from Gaussian fits to the HCO+, HNC, and N2H+ (1–0) transitions. The near/far kinematic distance ambiguity was addressed by searching for the presence or absence of absorption or self-absorption features in 21 cm atomic hydrogen spectra from the Southern Galactic Plane Survey. Our algorithm provides an estimation of the reliability of the ambiguity resolution. The Galactic distribution of the clumps indicates positions where the clumps are bunched together, and these locations probably trace the locations of spiral arms. Several clumps fall at the predicted location of the far side of the Scutum–Centaurus arm. Moreover, a number of clumps with positive radial velocities are unambiguously located on the far side of the Milky Way at galactocentric radii beyond the solar circle. The measurement of these kinematic distances, in combination with continuum or molecular-line data, now enables the determination of fundamental parameters such as mass, size, and luminosity for each clump.
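
    The flux-weighted average of the Gaussian-fit line velocities can be sketched directly; the flux and velocity values below are hypothetical, and the function assumes the per-transition Gaussian fits have already been done.

```python
def flux_weighted_velocity(fits):
    """Combine Gaussian-fit line velocities with a flux-weighted average.

    fits: list of (integrated_flux, centroid_velocity) pairs, one per
    transition (e.g. HCO+, HNC, N2H+ (1-0)); the fluxes act as weights.
    """
    total_flux = sum(f for f, _ in fits)
    return sum(f * v for f, v in fits) / total_flux

# hypothetical fits: (flux in K km/s, centroid velocity in km/s)
v_lsr = flux_weighted_velocity([(2.0, -45.1), (1.0, -44.8), (1.0, -45.3)])
```

The brighter HCO+ line dominates, so the combined velocity lands closest to its centroid (−45.075 km/s here).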

  19. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE PAGES

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...

    2016-03-03

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  20. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  1. Estimations of ABL fluxes and other turbulence parameters from Doppler lidar data

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1989-01-01

    Techniques for extracting boundary-layer parameters from measurements of a short-pulse CO2 Doppler lidar are described. The measurements are those collected during the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE). By continuously operating the lidar for about an hour, stable statistics of the radial velocities can be extracted. Assuming that the turbulence is horizontally homogeneous, the mean wind, its standard deviations, and the momentum fluxes were estimated. Spectral analysis of the radial velocities is also performed, from which, by examining the amplitude of the power spectrum in the inertial range, the kinetic energy dissipation was deduced. Finally, using the statistical form of the Navier-Stokes equations, the surface heat flux is derived as the residual balance between the vertical gradient of the third moment of the vertical velocity and the kinetic energy dissipation. Combining many measurements would normally reduce the error, provided the errors are unbiased and uncorrelated. The nature of some of the algorithms, however, is such that biased and correlated errors may be generated even though the raw measurements are unbiased and uncorrelated. Data processing procedures were developed that eliminate bias and minimize error correlation. Once bias and error correlations are accounted for, the large sample size is shown to reduce the errors substantially. The principal features of the derived turbulence statistics for two case studies are presented.
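
    A standard way to retrieve the mean wind from lidar radial velocities under the horizontal-homogeneity assumption is a velocity-azimuth display (VAD) harmonic fit; the sketch below is this generic technique, not necessarily the processing used in the study. It assumes the convention v_r(φ) = u sin φ cos θ + v cos φ cos θ + w sin θ, with φ the azimuth and θ the elevation, and synthesizes a noiseless scan to check the retrieval.

```python
import math

def vad_wind(azimuths_deg, radial_vels, elevation_deg):
    """VAD retrieval of the mean wind (u east, v north, w up) from one
    full conical scan with evenly spaced azimuths.

    Fits v_r(phi) = a0 + a1*cos(phi) + b1*sin(phi) via discrete harmonic
    sums, then converts the harmonics to wind components.
    """
    n = len(azimuths_deg)
    rad = [math.radians(a) for a in azimuths_deg]
    a0 = sum(radial_vels) / n
    a1 = 2.0 / n * sum(vr * math.cos(p) for p, vr in zip(rad, radial_vels))
    b1 = 2.0 / n * sum(vr * math.sin(p) for p, vr in zip(rad, radial_vels))
    ce = math.cos(math.radians(elevation_deg))
    se = math.sin(math.radians(elevation_deg))
    return b1 / ce, a1 / ce, a0 / se

# synthetic scan: u = 5, v = -3, w = 0.1 m/s, 60 degree elevation
elev = 60.0
az = [10.0 * k for k in range(36)]
vr = [5.0 * math.sin(math.radians(a)) * math.cos(math.radians(elev))
      - 3.0 * math.cos(math.radians(a)) * math.cos(math.radians(elev))
      + 0.1 * math.sin(math.radians(elev)) for a in az]
u, v, w = vad_wind(az, vr, elev)
```

With noiseless, evenly spaced azimuths the harmonic sums recover (u, v, w) exactly up to rounding.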

  2. TH-A-BRF-08: Deformable Registration of MRI and CT Images for MRI-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Wen, N; Gordon, J

    2014-06-15

    Purpose: To evaluate the quality of a commercially available MRI-CT image registration algorithm and then develop a method to improve the performance of this algorithm for MRI-guided prostate radiotherapy. Methods: Prostate contours were delineated on ten pairs of MRI and CT images using Eclipse. Each pair of MRI and CT images was registered with an intensity-based B-spline algorithm implemented in Velocity. A rectangular prism that contains the prostate volume was partitioned into a tetrahedral mesh which was aligned to the CT image. A finite element method (FEM) was developed on the mesh with the boundary constraints assigned from the Velocity-generated displacement vector field (DVF). The resultant FEM displacements were used to adjust the Velocity DVF within the prism. Point correspondences between the CT and MR images identified within the prism could be used as additional boundary constraints to enforce the model deformation. The FEM deformation field is smooth in the interior of the prism, and equal to the Velocity displacements at the boundary of the prism. To evaluate the Velocity and FEM registration results, three criteria were used: prostate volume conservation and center consistency under contour mapping, and unbalanced energy of their deformation maps. Results: With the DVFs generated by the Velocity and FEM simulations, the prostate contours were warped from MRI to CT images. With the Velocity DVFs, the prostate volumes changed 10.2% on average, in contrast to 1.8% induced by the FEM DVFs. The average of the center deviations was 0.36 and 0.27 cm, and the unbalanced energy was 2.65 and 0.38 mJ/cc3 for the Velocity and FEM registrations, respectively. Conclusion: The adaptive FEM method developed can be used to reduce the error of the MI-based registration algorithm implemented in Velocity in the prostate region, and consequently may help improve the quality of MRI-guided radiation therapy.

  3. Estimation of wind stress using dual-frequency TOPEX data

    NASA Astrophysics Data System (ADS)

    Elfouhaily, Tanos; Vandemark, Douglas; Gourrion, Jérôme; Chapron, Bertrand

    1998-10-01

    The TOPEX/POSEIDON satellite carries the first dual-frequency radar altimeter. Monofrequency (Ku-band) algorithms are presently used to retrieve surface wind speed from the altimeter's radar cross-section measurement (σ0Ku). These algorithms work reasonably well, but it is also known that altimeter wind estimates can be contaminated by residual effects, such as sea state, embedded in the σ0Ku measurement. Investigating the potential benefit of using two frequencies for wind retrieval, it is shown that a simple evaluation of TOPEX data yields previously unavailable information, particularly for high and low wind speeds. As the wind speed increases, the dual-frequency data provide a measurement more directly linked to the short-scale surface roughness, which in turn is associated with the local surface wind stress. Using a global TOPEX σ0 data set and TOPEX's significant wave height (Hs) estimate as a surrogate for the sea state's degree of development, it is also shown that differences between the two TOPEX σ0 measurements provide strong evidence of a nonlocal sea state signature. A composite scattering theory is used to show how the dual-frequency data can provide an improved friction velocity model, especially for winds above 7 m/s. A wind speed conversion is included using a sea state dependent drag coefficient fed with TOPEX Hs data. Two colocated TOPEX-buoy data sets (from the National Data Buoy Center (NDBC) and the Structure des Echanges Mer-Atmosphère, Propriétés des Hétérogénéités Océaniques: Recherche Expérimentale (SEMAPHORE) campaign) are employed to test the new wind speed algorithm. A measurable improvement in wind speed estimation is obtained when compared to the monofrequency Witter and Chelton [1991] model.

  4. GNSS/Electronic Compass/Road Segment Information Fusion for Vehicle-to-Vehicle Collision Avoidance Application

    PubMed Central

    Cheng, Qi; Xue, Dabin; Wang, Guanyu; Ochieng, Washington Yotto

    2017-01-01

    The increasing number of vehicles in modern cities brings the problem of increasing crashes. One of the applications or services of Intelligent Transportation Systems (ITS) conceived to improve safety and reduce congestion is collision avoidance. This safety critical application requires sub-meter level vehicle state estimation accuracy with very high integrity, continuity and availability, to detect an impending collision and issue a warning or intervene in the case that the warning is not heeded. Because of the challenging city environment, to date there is no approved method capable of delivering this high level of performance in vehicle state estimation. In particular, the current Global Navigation Satellite System (GNSS) based collision avoidance systems have the major limitation that the real-time accuracy of dynamic state estimation deteriorates during abrupt acceleration and deceleration situations, compromising the integrity of collision avoidance. Therefore, to provide the Required Navigation Performance (RNP) for collision avoidance, this paper proposes a novel Particle Filter (PF) based model for the integration or fusion of real-time kinematic (RTK) GNSS position solutions with electronic compass and road segment data used in conjunction with an Autoregressive (AR) motion model. The real-time vehicle state estimates are used together with distance based collision avoidance algorithms to predict potential collisions. The algorithms are tested by simulation and in the field, representing a low-density urban environment. The results show that the proposed algorithm meets the horizontal positioning accuracy requirement for collision avoidance and is superior in positioning accuracy to GNSS-only positioning and to traditional Constant Velocity (CV) and Constant Acceleration (CA) based motion models, with a significant improvement in the prediction accuracy of potential collisions. PMID:29186851

  5. GNSS/Electronic Compass/Road Segment Information Fusion for Vehicle-to-Vehicle Collision Avoidance Application.

    PubMed

    Sun, Rui; Cheng, Qi; Xue, Dabin; Wang, Guanyu; Ochieng, Washington Yotto

    2017-11-25

    The increasing number of vehicles in modern cities brings the problem of increasing crashes. One of the applications or services of Intelligent Transportation Systems (ITS) conceived to improve safety and reduce congestion is collision avoidance. This safety critical application requires sub-meter level vehicle state estimation accuracy with very high integrity, continuity and availability, to detect an impending collision and issue a warning or intervene in the case that the warning is not heeded. Because of the challenging city environment, to date there is no approved method capable of delivering this high level of performance in vehicle state estimation. In particular, the current Global Navigation Satellite System (GNSS) based collision avoidance systems have the major limitation that the real-time accuracy of dynamic state estimation deteriorates during abrupt acceleration and deceleration situations, compromising the integrity of collision avoidance. Therefore, to provide the Required Navigation Performance (RNP) for collision avoidance, this paper proposes a novel Particle Filter (PF) based model for the integration or fusion of real-time kinematic (RTK) GNSS position solutions with electronic compass and road segment data used in conjunction with an Autoregressive (AR) motion model. The real-time vehicle state estimates are used together with distance based collision avoidance algorithms to predict potential collisions. The algorithms are tested by simulation and in the field, representing a low-density urban environment. The results show that the proposed algorithm meets the horizontal positioning accuracy requirement for collision avoidance and is superior in positioning accuracy to GNSS-only positioning and to traditional Constant Velocity (CV) and Constant Acceleration (CA) based motion models, with a significant improvement in the prediction accuracy of potential collisions.
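
    The general machinery behind such a PF-based estimator, a bootstrap particle filter with a simple constant-velocity motion model, can be sketched as follows. This omits the paper's GNSS/compass/road-segment measurement models and the AR motion model; the state is a 1-D (position, velocity) pair, and all noise parameters are invented.

```python
import math
import random

def pf_track(measurements, dt=0.1, n=500, meas_std=0.5,
             acc_std=1.0, pos_std=0.05, seed=0):
    """Bootstrap particle filter: predict with a constant-velocity model,
    weight by a Gaussian position-measurement likelihood, then resample.
    Returns the weighted-mean position estimate at each step.
    """
    rng = random.Random(seed)
    parts = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(n)]
    estimates = []
    for z in measurements:
        # predict: constant velocity plus small process noise
        parts = [(p + v * dt + rng.gauss(0.0, pos_std),
                  v + rng.gauss(0.0, acc_std) * dt) for p, v in parts]
        # update: weight particles by likelihood of the position measurement
        ws = [math.exp(-0.5 * ((p - z) / meas_std) ** 2) for p, _ in parts]
        total = sum(ws)
        ws = [wgt / total for wgt in ws]
        estimates.append(sum(wgt * p for wgt, (p, _) in zip(ws, parts)))
        # resample to avoid weight degeneracy
        parts = rng.choices(parts, weights=ws, k=n)
    return estimates

# target moving at a constant 2 m/s; measurements are the true positions
truth = [2.0 * 0.1 * k for k in range(1, 51)]
est = pf_track(truth)
```

After a short transient the particle cloud locks onto the target, and the final estimate tracks the true position closely.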

  6. The high performance parallel algorithm for Unified Gas-Kinetic Scheme

    NASA Astrophysics Data System (ADS)

    Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu

    2016-11-01

    A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for sum reduction to moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with the results from existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) numbers of processors. The tested speed-up ratio is near linear, and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.

  7. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

    This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides the basic a priori information for 3-D studies. KEYWORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
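
    A typical GA fitness function for such a hypocenter search is the RMS travel-time residual. The sketch below uses straight rays in a uniform half-space as a stand-in for the paper's two-point ray tracing through a layered model, with an invented station geometry and source.

```python
import math

def rms_residual(hypo, stations, obs_times, v):
    """RMS misfit between observed and predicted P arrivals, the quantity a
    GA would minimize over candidate hypocenters.

    hypo = (x, y, z, t0); stations = [(x, y)] at the surface (z = 0);
    v = uniform P velocity (km/s), so rays are straight lines.
    """
    x, y, z, t0 = hypo
    res = [t - (t0 + math.sqrt((sx - x)**2 + (sy - y)**2 + z**2) / v)
           for (sx, sy), t in zip(stations, obs_times)]
    return math.sqrt(sum(r * r for r in res) / len(res))

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]   # km
true_hypo = (1.0, 2.0, 5.0, 0.0)                                  # km, s
obs = [math.sqrt((sx - 1.0)**2 + (sy - 2.0)**2 + 25.0) / 6.0
       for sx, sy in stations]                                    # v = 6 km/s
fit_true = rms_residual(true_hypo, stations, obs, 6.0)
fit_wrong = rms_residual((5.0, 5.0, 10.0, 0.0), stations, obs, 6.0)
```

The true hypocenter yields an essentially zero residual, while a displaced trial hypocenter is clearly penalized, which is what lets the GA rank candidate solutions.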

  8. Computer coordination of limb motion for a three-legged walking robot

    NASA Technical Reports Server (NTRS)

    Klein, C. A.; Patterson, M. R.

    1980-01-01

    Coordination of the limb motion of a vehicle which could perform assembly and maintenance operations on large structures in space is described. Manipulator kinematics, walking robots, and the basic control scheme of the robot are described, including the control of the individual arms. Arm velocities are generally described in Cartesian coordinates, and Cartesian velocities are converted to joint velocities using the Jacobian matrix. The calculation of a trajectory for an arm, given a sequence of points through which it is to pass, is described, as is the free gait algorithm which controls the lifting and placing of legs for the robot. The generation of commanded velocities for the robot, and the implementation of those velocities by the algorithm, are discussed. Suggestions for further work in the area of robot legged locomotion are presented.
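
    The Jacobian step described above, converting Cartesian arm velocities to joint velocities, can be sketched for a planar two-link arm; the geometry is illustrative, not the robot's actual kinematics.

```python
import math

def jacobian_2link(theta1, theta2, l1, l2):
    """2x2 Jacobian of a planar two-link arm: xdot = J(theta) * qdot,
    from the forward kinematics x = l1*c1 + l2*c12, y = l1*s1 + l2*s12."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    return ((-l1 * s1 - l2 * s12, -l2 * s12),
            ( l1 * c1 + l2 * c12,  l2 * c12))

def joint_velocities(theta1, theta2, l1, l2, vx, vy):
    """Invert the Jacobian: qdot = J^-1 * xdot (2x2 closed form)."""
    (j11, j12), (j21, j22) = jacobian_2link(theta1, theta2, l1, l2)
    det = j11 * j22 - j12 * j21
    return (j22 * vx - j12 * vy) / det, (-j21 * vx + j11 * vy) / det

dth1, dth2 = joint_velocities(0.3, 0.7, 1.0, 1.0, 0.1, -0.2)
# forward check: J * qdot should reproduce the commanded (vx, vy)
(j11, j12), (j21, j22) = jacobian_2link(0.3, 0.7, 1.0, 1.0)
vx_chk, vy_chk = j11 * dth1 + j12 * dth2, j21 * dth1 + j22 * dth2
```

Note that det goes to zero when the arm is fully extended or folded (theta2 = 0 or pi); a real controller must handle these singular configurations separately.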

  9. System for Estimating Horizontal Velocity During Descent

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Cheng, Yang; Wilson, Reg; Goguen, Jay; Martin, Alejandro San; Leger, Chris; Matthies, Larry

    2007-01-01

    The descent image motion estimation system (DIMES) is a system of hardware and software, designed for original use in estimating the horizontal velocity of a spacecraft descending toward a landing on Mars. The estimated horizontal velocity is used in generating rocket-firing commands to reduce the horizontal velocity as part of an overall control scheme to minimize the landing impact. DIMES can also be used for estimating the horizontal velocity of a remotely controlled or autonomous aircraft for purposes of navigation and control.

  10. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. 
While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when multivariate correction is used, as evident from the analyses of the rms differences of these fields and independent observations. The MvOI assimilation is found to improve upon the control run in generating the water masses with properties close to the observed, while the UOI failed to maintain the temperature and salinity structure.
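
    A single-observation multivariate OI analysis step can be sketched as follows. The two-variable state and covariance values are invented, chosen only to show how cross-covariances spread a temperature innovation into an unobserved variable such as salinity.

```python
def oi_update(xf, P, obs_idx, y, r):
    """One multivariate OI analysis step for a single scalar observation.

    xf: forecast state vector; P: forecast error covariance (list of rows);
    obs_idx: index of the observed variable; y: observation; r: observation
    error variance. Implements xa = xf + K*(y - H xf) with
    K = P H^T / (H P H^T + r), where H picks out component obs_idx.
    """
    innov = y - xf[obs_idx]
    denom = P[obs_idx][obs_idx] + r
    gain = [row[obs_idx] / denom for row in P]
    return [x + k * innov for x, k in zip(xf, gain)]

# state = [temperature, salinity] with correlated forecast errors
xf = [20.0, 35.0]
P = [[1.0, 0.5],
     [0.5, 0.8]]
xa = oi_update(xf, P, 0, 21.0, 0.25)
```

The temperature is pulled 0.8 of the way toward the observation, and the cross-covariance term moves the unobserved salinity by 0.4 of the innovation; a univariate OI (zero off-diagonal terms) would leave salinity untouched, which is the behavior the abstract identifies as detrimental.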

  11. Advanced Recording and Preprocessing of Physiological Signals. [data processing equipment for flow measurement of blood flow by ultrasonics

    NASA Technical Reports Server (NTRS)

    Bentley, P. B.

    1975-01-01

    The measurement of the volume flow-rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques that are analogous to the measurement of moving objects by use of a radar. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured and no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods by application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrative of the equipment setup are included.
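
    The CW Doppler relation underlying such a flowmeter, v = f_d · c / (2 f₀ cos θ), can be sketched directly. The carrier frequency, Doppler shift, and insonation angle below are illustrative values, not the instrument's specification; c defaults to 1540 m/s, a common assumed sound speed in soft tissue.

```python
import math

def doppler_velocity(f_shift_hz, f0_hz, c=1540.0, angle_deg=0.0):
    """Flow velocity (m/s) from a CW Doppler shift:
    v = f_d * c / (2 * f0 * cos(theta)), where theta is the angle between
    the ultrasound beam and the flow direction."""
    return f_shift_hz * c / (2.0 * f0_hz * math.cos(math.radians(angle_deg)))

v = doppler_velocity(1300.0, 2.0e6)  # 1.3 kHz shift at a 2 MHz carrier
```

A 1.3 kHz shift at a 2 MHz carrier with a head-on beam corresponds to about 0.5 m/s, a plausible arterial flow velocity; at oblique angles the cos θ factor must be known, which is one reason angle estimation dominates the error budget of such measurements.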

  12. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
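
    The Dix conversion from RMS to interval velocities mentioned in the second experiment can be sketched directly; the traveltime and RMS-velocity picks below are invented for illustration.

```python
import math

def dix_interval_velocity(t1, v1, t2, v2):
    """Interval velocity between two reflectors via the Dix formula:
    v_int = sqrt((t2*v2^2 - t1*v1^2) / (t2 - t1)),
    with t1, t2 the two-way traveltimes (s) and v1, v2 the RMS
    velocities (m/s) to the upper and lower reflector."""
    return math.sqrt((t2 * v2**2 - t1 * v1**2) / (t2 - t1))

# invented picks: reflectors at 1.0 s / 2000 m/s RMS and 1.5 s / 2200 m/s RMS
v_int = dix_interval_velocity(1.0, 2000.0, 1.5, 2200.0)
```

This yields an interval velocity of about 2553 m/s for the layer between the two reflectors. The formula amplifies picking noise (the numerator is a small difference of large terms), which is precisely why a regularized inversion such as the one proposed here is attractive.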

  13. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has good optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has good generalization and sufficient prediction accuracy for the concentrate grade and tailings recovery rate to meet the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034
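
    A minimal sketch of such a PSO-adjusted GSA velocity update follows; the agent count, coefficients, and decaying gravitational constant are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso_gsa(f, dim=5, n_agents=30, iters=100, lb=-5.0, ub=5.0,
            G0=1.0, alpha=20.0, c1=0.5, c2=1.5, seed=0):
    """Hybrid PSO-GSA sketch: GSA gravitational acceleration plus a
    PSO-style pull toward the global best in the velocity update."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))
    V = np.zeros((n_agents, dim))
    gbest, gbest_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        i_best = np.argmin(fit)
        if fit[i_best] < gbest_f:
            gbest_f, gbest = fit[i_best], X[i_best].copy()
        # GSA masses: best agent heaviest, worst weightless
        span = fit.max() - fit.min() + 1e-12
        m = (fit.max() - fit) / span
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)          # decaying gravity
        acc = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            # sum of stochastic gravitational pulls from the other agents
            acc[i] = (G * (M[:, None] * diff / dist[:, None])
                      * rng.random((n_agents, 1))).sum(axis=0)
        # PSO-adjusted velocity: inertia + gravity + social attraction
        V = (rng.random((n_agents, dim)) * V + c1 * acc
             + c2 * rng.random((n_agents, dim)) * (gbest - X))
        X = np.clip(X + V, lb, ub)
    return gbest, gbest_f
```

    The key line is the velocity update, which mixes the gravitational acceleration (exploration) with the pull toward the global best (exploitation), addressing GSA's slow convergence.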

  14. RatCar system for estimating locomotion states using neural signals with parameter monitoring: Vehicle-formed brain-machine interfaces for rat.

    PubMed

    Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    2008-01-01

    An online brain-machine interface (BMI) in the form of a small vehicle, the 'RatCar,' has been developed. A rat had neural electrodes implanted in its primary motor cortex and basal ganglia to continuously record neural signals. A linear state-space model was then used to represent the correlation between the recorded neural signals and the locomotion states (i.e., moving velocity and azimuthal variance) of the rat. The model parameters were set so as to minimize estimation errors, and the locomotion states were estimated from the neural firing rates using a Kalman filter algorithm. The results showed only small oscillations, allowing smooth control of the vehicle despite fluctuating firing rates and the noise applied to the model. Most of the variation in the model variables converged within the first 30 seconds of the experiments and persisted for the entire one-hour session.
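
    The estimation step can be illustrated with a generic linear Kalman filter. The model in the check below (a scalar velocity random walk observed through two noisy "firing-rate" channels) is a hypothetical stand-in, not the RatCar model or its parameters.

```python
import numpy as np

def kalman_filter(zs, A, C, Q, R, x0, P0):
    """Standard Kalman filter for x_k = A x_{k-1} + w, z_k = C x_k + v,
    with process noise cov Q and measurement noise cov R."""
    x, P = x0, P0
    estimates = []
    for z in zs:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the innovation z - C x
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (z - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

    In a synthetic check, fusing both channels through the state model gives a lower mean squared velocity error than reading the velocity off a single noisy channel.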

  15. An analysis of approach navigation accuracy and guidance requirements for the grand tour mission to the outer planets

    NASA Technical Reports Server (NTRS)

    Jones, D. W.

    1971-01-01

    The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.

  16. High Resolution Deformation Time Series Estimation for Distributed Scatterers Using Terrasar-X Data

    NASA Astrophysics Data System (ADS)

    Goel, K.; Adam, N.

    2012-07-01

    In recent years, several SAR satellites such as TerraSAR-X, COSMO-SkyMed and Radarsat-2 have been launched. These satellites provide high resolution data suitable for sophisticated interferometric applications. With shorter repeat cycles, smaller orbital tubes and higher bandwidths, deformation time series analysis of distributed scatterers (DSs) is now supported by a practical data basis. Techniques for exploiting DSs in non-urban (rural) areas include the Small Baseline Subset Algorithm (SBAS). However, SBAS involves spatial phase unwrapping, and phase unwrapping errors are typically encountered in rural areas and are difficult to detect. In addition, the SBAS technique involves rectangular multilooking of the differential interferograms to reduce phase noise, resulting in a loss of resolution and the superposition of different objects on the ground. In this paper, we introduce a new approach for deformation monitoring with a focus on DSs in which there is no need to unwrap the differential interferograms and the deformation is mapped at object resolution. It is based on a robust, object-adaptive parameter estimation using single-look differential interferograms, in which the local tilts of deformation velocity and the local slopes of the residual DEM in the range and azimuth directions are estimated. We present here the technical details and a processing example of this newly developed algorithm.

  17. Development of a Plantar Load Estimation Algorithm for Evaluation of Forefoot Load of Diabetic Patients during Daily Walks Using a Foot Motion Sensor

    PubMed Central

    Noguchi, Hiroshi; Sanada, Hiromi

    2017-01-01

    Forefoot load (FL) contributes to callus formation, which is one of the pathways to diabetic foot ulcers (DFU). In this study, we hypothesized that excessive FL, which cannot be detected by plantar load measurements within laboratory settings, occurs in daily walks. To demonstrate this, we created a FL estimation algorithm using foot motion data. Acceleration and angular velocity data were obtained from a motion sensor attached to each shoe of the subjects. The accuracy of the estimated FL was validated by its correlation with the FL measured by force sensors on the metatarsal heads, assessed using the Pearson correlation coefficient. The mean of the correlation coefficients over all subjects was 0.63 in a level corridor, while it showed intersubject differences on a slope and stairs. We conducted daily walk measurements in two diabetic patients and additionally verified the safety of daily walk measurement using a wearable motion sensor attached to each shoe. We found that excessive FL occurred during their daily walks, approximately three hours in total, during which no adverse events were observed. This study indicates that FL evaluation using wearable motion sensors is a promising way to help prevent DFUs. PMID:28840130

  18. Development of a Plantar Load Estimation Algorithm for Evaluation of Forefoot Load of Diabetic Patients during Daily Walks Using a Foot Motion Sensor.

    PubMed

    Watanabe, Ayano; Noguchi, Hiroshi; Oe, Makoto; Sanada, Hiromi; Mori, Taketoshi

    2017-01-01

    Forefoot load (FL) contributes to callus formation, which is one of the pathways to diabetic foot ulcers (DFU). In this study, we hypothesized that excessive FL, which cannot be detected by plantar load measurements within laboratory settings, occurs in daily walks. To demonstrate this, we created a FL estimation algorithm using foot motion data. Acceleration and angular velocity data were obtained from a motion sensor attached to each shoe of the subjects. The accuracy of the estimated FL was validated by its correlation with the FL measured by force sensors on the metatarsal heads, assessed using the Pearson correlation coefficient. The mean of the correlation coefficients over all subjects was 0.63 in a level corridor, while it showed intersubject differences on a slope and stairs. We conducted daily walk measurements in two diabetic patients and additionally verified the safety of daily walk measurement using a wearable motion sensor attached to each shoe. We found that excessive FL occurred during their daily walks, approximately three hours in total, during which no adverse events were observed. This study indicates that FL evaluation using wearable motion sensors is a promising way to help prevent DFUs.

  19. SU-E-J-97: Quality Assurance of Deformable Image Registration Algorithms: How Realistic Should Phantoms Be?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, D; Stathakis, S; Kirby, N

    Purpose: Deformable image registration (DIR) has widespread uses in radiotherapy for applications such as dose accumulation studies, multi-modality image fusion, and organ segmentation. The quality assurance (QA) of such algorithms, however, remains largely unimplemented. This work aims to determine how detailed a physical phantom needs to be to accurately perform QA of a DIR algorithm. Methods: Virtual prostate and head-and-neck phantoms, made from patient images, were used for this study. Both sets consist of an undeformed and deformed image pair. The images were processed to create additional image pairs with one through five homogeneous tissue levels using Otsu’s method. Realistic noise was then added to each image. The DIR algorithms from MIM and Velocity (Deformable Multipass) were applied to the original phantom images and the processed ones. The resulting deformations were then compared to the known warping. A higher number of tissue levels creates more contrast in an image and enables DIR algorithms to produce more accurate results. For this reason, error (distance between predicted and known deformation) is utilized as a metric to evaluate how many levels are required for a phantom to be a realistic patient proxy. Results: For the prostate image pairs, the mean error decreased from 1–2 tissue levels and remained constant for 3+ levels. The mean error reduction was 39% and 26% for Velocity and MIM respectively. For head and neck, mean error fell similarly through 2 levels and flattened with total reduction of 16% and 49% for Velocity and MIM. For Velocity, 3+ levels produced comparable accuracy as the actual patient images, whereas MIM showed further accuracy improvement. Conclusion: The number of tissue levels needed to produce an accurate patient proxy depends on the algorithm. For Velocity, three levels were enough, whereas five was still insufficient for MIM.

  20. Algorithm for automatic analysis of electro-oculographic data

    PubMed Central

    2013-01-01

    Background Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. Methods The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and with manually scored blinks. Results The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was greater than 89%, and for vertical saccades greater than 82%. The durations and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. Conclusion The developed algorithm enables reliable analysis of EOG data recorded both during EEG measurements and as a separate measurement. PMID:24160372
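
    Amplitude-threshold saccade detection on an EOG velocity trace can be sketched as follows. The fixed velocity threshold here is a hypothetical stand-in for the paper's auto-calibrated, signal-derived thresholds.

```python
import numpy as np

def detect_saccades(eog, fs, vel_thresh):
    """Return (start, end) sample-index pairs of contiguous runs where
    |d(eog)/dt| exceeds vel_thresh; eog in degrees, fs in Hz,
    vel_thresh in deg/s."""
    vel = np.gradient(eog) * fs             # deg/sample -> deg/s
    above = np.abs(vel) > vel_thresh
    edges = np.diff(above.astype(int))      # +1 at run starts, -1 at run ends
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(above)]
    return list(zip(starts, ends))
```

    Per-event duration and peak velocity (the quantities reported in the abstract) follow directly from each (start, end) pair and the velocity trace.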

  1. Algorithm for automatic analysis of electro-oculographic data.

    PubMed

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

    Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and with manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was greater than 89%, and for vertical saccades greater than 82%. The durations and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG measurements and as a separate measurement.

  2. Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography

    PubMed Central

    2013-01-01

    Background The conventional strain-based algorithm has been widely utilized in clinical practice, but it can provide only relative information about tissue stiffness. Exact information about tissue stiffness, however, would be valuable for clinical diagnosis and treatment. Methods In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as process noise, and the biomechanical model constraint is transformed into a state space representation, the reconstruction of elasticity can be accomplished through one filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The estimates of the kinematic functions (i.e., the full displacement and velocity fields) and of the distribution of Young’s modulus are computed simultaneously through an extended Kalman filter (EKF). Results In the following experiments the accuracy and robustness of this filtering framework is first evaluated on synthetic data under controlled conditions, and its performance is then evaluated on real data collected from an elastography phantom and from patients using an ultrasound system. Quantitative analysis verifies that the strain fields estimated by our filtering strategy are closer to the ground truth. The distribution of Young’s modulus is also well estimated. Further, the effects of measurement noise and process noise have been investigated as well. Conclusions The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential to provide the distribution of elasticity under a proper biomechanical model constraint.
We address the model-data discrepancy and measurement noise by introducing process noise and measurement noise into our framework, and the absolute values of Young’s modulus are then estimated through the EKF in the MMSE sense. However, the initial conditions and the mesh strategy will affect the performance, i.e., the convergence rate, computational cost, etc. PMID:23937814

  3. Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography.

    PubMed

    Lu, Minhua; Zhang, Heye; Wang, Jun; Yuan, Jinwei; Hu, Zhenghui; Liu, Huafeng

    2013-08-10

    The conventional strain-based algorithm has been widely utilized in clinical practice, but it can provide only relative information about tissue stiffness. Exact information about tissue stiffness, however, would be valuable for clinical diagnosis and treatment. In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as process noise, and the biomechanical model constraint is transformed into a state space representation, the reconstruction of elasticity can be accomplished through one filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The estimates of the kinematic functions (i.e., the full displacement and velocity fields) and of the distribution of Young's modulus are computed simultaneously through an extended Kalman filter (EKF). In the following experiments the accuracy and robustness of this filtering framework is first evaluated on synthetic data under controlled conditions, and its performance is then evaluated on real data collected from an elastography phantom and from patients using an ultrasound system. Quantitative analysis verifies that the strain fields estimated by our filtering strategy are closer to the ground truth. The distribution of Young's modulus is also well estimated. Further, the effects of measurement noise and process noise have been investigated as well. The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential to provide the distribution of elasticity under a proper biomechanical model constraint.
We address the model-data discrepancy and measurement noise by introducing process noise and measurement noise into our framework, and the absolute values of Young's modulus are then estimated through the EKF in the MMSE sense. However, the initial conditions and the mesh strategy will affect the performance, i.e., the convergence rate, computational cost, etc.

  4. Seismic structure of the upper crust in the Albertine Rift from travel-time and ambient-noise tomography - a comparison

    NASA Astrophysics Data System (ADS)

    Jakovlev, Andrey; Kaviani, Ayoub; Ruempker, Georg

    2017-04-01

    Here we present results of the investigation of the upper crust in the Albertine rift around the Rwenzori Mountains. We use a data set collected from a temporary network of 33 broadband stations operated by the RiftLink research group between September 2009 and August 2011. During this period, 82639 P-wave and 73408 S-wave travel times from 12419 local and regional earthquakes were registered. This presents a very rare opportunity to apply both local travel-time and ambient-noise tomography to data from the same network. For the local travel-time tomographic inversion the LOTOS algorithm (Koulakov, 2009) was used. The algorithm performs iterative simultaneous inversions for 3D models of P- and S-velocity anomalies in combination with earthquake locations and origin times. 28955 P- and S-wave picks from 2769 local earthquakes were used. To estimate the resolution and stability of the results, a number of synthetic and real data tests were performed. To perform the ambient noise tomography we use the following procedure. First, we follow the standard procedure described by Bensen et al. (2007), as modified by Boué et al. (2014), to compute the vertical-component cross-correlation functions between all pairs of stations. We also adapted the algorithm introduced by Boué et al. (2014) and use the WHISPER software package (Briand et al., 2013) to preprocess individual daily vertical-component waveforms. In the next step, for each period, we use the method of Barmin et al. (2001) to invert the dispersion measurements along each path for group velocity tomographic maps. Finally, we adapt a modified version of the algorithm suggested by Macquet et al. (2014) to invert the group velocity maps for shear velocity structure. We apply several tests, which show that the best resolution is obtained at a period of 8 seconds, which corresponds to a depth of approximately 6 km.
Models of the seismic structure obtained by the two methods agree well at shallow depths of about 5 km. Low velocities surround the mountain range on the western and southern sides and coincide with the location of the rift valley. The Rwenzori Mountains themselves and the eastern rift shoulder are represented by increased velocities. At greater depths of 10 - 15 km some differences between the models are observed. Beneath the Rwenzoris the travel-time tomography shows low S-velocities, whereas the ambient noise tomography exhibits high S-velocities. This can possibly be explained by the higher vertical resolution of the ambient noise tomography. Also, the number of rays used in the ambient noise tomographic inversion is significantly smaller. This study was partly supported by the grant of the Russian Science Foundation #14-17-00430. References: Barmin, M.P., Ritzwoller, M.H. & Levshin, A.L., 2001. A fast and reliable method for surface wave tomography, Pure appl. Geophys., 158, 1351-1375. Bensen, G.D., Ritzwoller, M.H., Barmin, M.P., Levshin, A.L., Lin, F., Moschetti, M.P., Shapiro, N.M., Yang, Y., 2007. Processing seismic ambient noise data to obtain reliable broad-band surface wave dispersion measurements. Geophys. J. Int., 169, 1239-1260, doi: 10.1111/j.1365-246X.2007.03374.x. Boué, P., Poli, P., Campillo, M., Roux, P., 2014. Reverberations, coda waves and ambient noise: correlations at the global scale and retrieval of the deep phases. Earth planet. Sci. Lett., 391, 137-145. Briand, X., Campillo, M., Brenguier, F., Boué, P., Poli, P., Roux, P., Takeda, T., 2013. Processing of terabytes of data for seismic noise analysis with the Python codes of the Whisper Suite. AGU Fall Meeting, San Francisco, CA, 9-13 December, Abstract IN51B-1544. Koulakov, I., 2009. LOTOS code for local earthquake tomographic inversion. Benchmarks for testing tomographic algorithms, Bull. Seismol. Soc. Am., 99, 194-214, doi:10.1785/0120080013.
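
    The core of the noise cross-correlation step is estimating an inter-station travel time from the peak lag of the correlation function. A minimal sketch follows; the windowing, spectral whitening, and daily stacking of the full processing chain are omitted.

```python
import numpy as np

def noise_crosscorr_lag(u1, u2, fs):
    """Lag (s) at which u2 best matches u1: peak of the cross-correlation.
    For diffuse ambient noise this approximates the inter-station travel time."""
    n = len(u1)
    # FFT-based correlation, zero-padded to avoid circular wrap-around
    U1 = np.fft.rfft(u1, 2 * n)
    U2 = np.fft.rfft(u2, 2 * n)
    cc = np.fft.irfft(U2 * np.conj(U1), 2 * n)
    cc = np.concatenate([cc[-(n - 1):], cc[:n]])   # reorder to lags -(n-1)..n-1
    lags = np.arange(-(n - 1), n)
    return lags[np.argmax(cc)] / fs
```

    Dividing the known inter-station distance by this lag gives the apparent group velocity at the dominant period of the correlated noise.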

  5. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and by advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models can avoid both the local minima problem and the effect of missing low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from spectrograms of traces in the observed and calculated data are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features of the subsurface. We performed spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features. However, the inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we observed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
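
    Decomposing a trace's spectrogram into single-frequency components, as described above, can be sketched with a windowed FFT; the window length and hop are illustrative choices, not the authors' settings.

```python
import numpy as np

def spectrogram(trace, fs, win_len=64, hop=16):
    """Magnitude spectrogram via a Hann-windowed sliding FFT."""
    win = np.hanning(win_len)
    starts = range(0, len(trace) - win_len + 1, hop)
    frames = np.array([trace[s:s + win_len] * win for s in starts])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(win_len, 1.0 / fs)
    times = (np.array(list(starts)) + win_len // 2) / fs
    return mag, freqs, times

def component_envelope(trace, fs, f0, **kw):
    """Time history of the spectrogram bin nearest target frequency f0."""
    mag, freqs, times = spectrogram(trace, fs, **kw)
    return mag[:, np.argmin(np.abs(freqs - f0))], times
```

    Each such single-frequency time history is one "decomposed component"; comparing these envelopes between observed and calculated data is the ingredient the spectrogram inversion works with.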

  6. A method for velocity signal reconstruction of AFDISAR/PDV based on crazy-climber algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Ying-cheng; Guo, Xian; Xing, Yuan-ding; Chen, Rong; Li, Yan-jie; Bai, Ting

    2017-10-01

    The resolution of the continuous wavelet transform (CWT) differs with frequency. Exploiting this property, the time-frequency signature of the coherent signal obtained by the All Fiber Displacement Interferometer System for Any Reflector (AFDISAR) is extracted. The crazy-climber algorithm is adopted to extract the wavelet ridge, from which the velocity history of the measured object is obtained. Numerical simulation was carried out; the reconstructed signal is fully consistent with the original signal, which verifies the accuracy of the algorithm. Vibration of a loudspeaker and of the free end of a Hopkinson incident bar under impact loading were measured by AFDISAR, and the measured coherent signals were processed to reconstruct the respective velocity signals. Compared with the theoretical calculation, the error in the particle-vibration arrival-time difference at the free end of the Hopkinson incident bar is 2 μs. The results indicate that the algorithm is highly accurate and adapts well to signals with different time-frequency features. It overcomes the limitation of STFT, which requires manually adjusting the time window according to the signal variation, and is suitable for extracting signals measured by AFDISAR.
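
    A sketch of ridge extraction from a scalogram follows. The stochastic crazy-climber algorithm is replaced here by a simple greedy, continuity-constrained tracker that conveys the idea on clean signals; the Morlet parameters and the per-column jump limit are illustrative assumptions.

```python
import numpy as np

def morlet_cwt(sig, fs, freqs, w0=6.0):
    """Magnitude scalogram from convolution with complex Morlet wavelets."""
    n = len(sig)
    out = np.empty((len(freqs), n))
    t = (np.arange(n) - n // 2) / fs
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                     # scale with centre freq f
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-t ** 2 / (2 * s ** 2))
        out[i] = np.abs(np.convolve(sig, wavelet, mode="same"))
    return out

def extract_ridge(scalogram, freqs, max_jump=2):
    """Greedy ridge: per-column maximum, constrained to move at most
    max_jump frequency bins between neighbouring columns (a simple
    stand-in for the stochastic crazy-climber ridge extraction)."""
    ridge = [int(np.argmax(scalogram[:, 0]))]
    for col in scalogram.T[1:]:
        lo = max(ridge[-1] - max_jump, 0)
        hi = min(ridge[-1] + max_jump + 1, len(freqs))
        ridge.append(lo + int(np.argmax(col[lo:hi])))
    return freqs[np.array(ridge)]
```

    For a velocity interferometer, the extracted ridge is the instantaneous beat frequency versus time, which converts to the velocity history through the interferometer's velocity-per-fringe constant.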

  7. Surface wave tomography of Europe from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Stehly, Laurent; Paul, Anne

    2017-04-01

    We present a European-scale high-resolution 3-D shear wave velocity model derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous seismic recordings from 1293 stations across much of the European region (10˚W-35˚E, 30˚N-75˚N), which yields more than 0.8 million virtual station pairs. This data set compiles records from 67 seismic networks, both permanent and temporary, from the EIDA (European Integrated Data Archive). Rayleigh wave group velocities are measured at each station pair using the multiple-filter analysis technique. Group velocity maps are estimated through a linearized tomographic inversion algorithm at periods from 5 s to 100 s. Adaptive parameterization is used to accommodate heterogeneity in data coverage. We then apply a two-step data-driven inversion method to obtain the shear wave velocity model: a Monte Carlo inversion to build the starting model, followed by a linearized inversion for further improvement. Finally, Moho depth (and its uncertainty) is determined over most of the study region by identifying and analysing sharp velocity discontinuities (and their sharpness). The resulting velocity model shows good agreement with the main geological features and previous geophysical studies. Moho depth coincides well with that obtained from active seismic experiments. A focus on the Greater Alpine region (covered by the AlpArray seismic network) displays a clear crustal thinning that follows the arcuate shape of the Alps from the southern French Massif Central to southern Germany.
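
    The multiple-filter analysis measurement can be sketched as Gaussian narrowband filtering followed by locating the envelope peak; the filter-width parameter alpha and the synthetic wave packet used in the check are assumptions, not the study's settings.

```python
import numpy as np

def mft_group_velocity(trace, fs, dist_km, periods, alpha=20.0):
    """Multiple-filter analysis sketch: for each target period, apply a
    Gaussian bandpass in the frequency domain, take the analytic-signal
    envelope, and convert the envelope peak time to a group velocity."""
    n = len(trace)
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    spec = np.fft.fft(trace)
    t = np.arange(n) / fs
    out = []
    for T in periods:
        fc = 1.0 / T
        gauss = np.exp(-alpha * ((np.abs(freqs) - fc) / fc) ** 2)
        analytic = gauss * spec * (freqs >= 0) * 2.0   # one-sided spectrum
        env = np.abs(np.fft.ifft(analytic))            # envelope of filtered trace
        out.append(dist_km / t[np.argmax(env)])        # U = distance / arrival
    return np.array(out)
```

    Repeating this over a band of periods yields the group-velocity dispersion curve for one station pair, the input to the tomographic inversion.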

  8. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. 
This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.
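
    The simulated annealing driver for such a joint objective can be sketched generically. The cooling schedule, step size, and the toy two-term misfit standing in for the seismic and gravity terms in the check below are all illustrative assumptions.

```python
import numpy as np

def simulated_annealing(objective, x0, step=0.5, t0=1.0, t_min=1e-3,
                        cooling=0.95, n_per_temp=50, seed=0):
    """Metropolis-style simulated annealing on a continuous parameter vector."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = objective(x)
    best, best_f = x.copy(), fx
    T = t0
    while T > t_min:
        for _ in range(n_per_temp):
            cand = x + step * rng.standard_normal(x.shape)
            fc = objective(cand)
            # accept downhill moves always, uphill with Boltzmann probability
            if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                x, fx = cand, fc
                if fx < best_f:
                    best, best_f = x.copy(), fx
        T *= cooling                                   # geometric cooling
    return best, best_f
```

    A joint objective is simply a weighted sum of the two misfits, e.g. `w_seis * seismic_misfit(v) + w_grav * gravity_misfit(v)`; the Pareto-chart balancing in the abstract amounts to choosing those weights.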

  9. Model-Based Estimation of Ankle Joint Stiffness

    PubMed Central

    Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen

    2017-01-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness, during experimental test bench movements. PMID:28353683

  10. Radiofrequency electrode vibration-induced shear wave imaging for tissue modulus estimation: a simulation study.

    PubMed

    Bharat, Shyam; Varghese, Tomy

    2010-10-01

    Quasi-static electrode displacement elastography, used for in-vivo imaging of radiofrequency ablation-induced lesions in abdominal organs such as the liver and kidney, is extended in this paper to dynamic vibrational perturbations of the ablation electrode. Propagation of the resulting shear waves into adjoining regions of tissue can be tracked, and the shear wave velocity can be used to quantify the shear (and thereby Young's) modulus of tissue. The algorithm utilizes the time-to-peak displacement data (obtained from finite element analyses) to calculate the speed of shear wave propagation in the material. The simulation results presented illustrate the feasibility of estimating the Young's modulus of tissue and are promising for characterizing the stiffness of radiofrequency-ablated thermal lesions and surrounding normal tissue.
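
    The time-to-peak estimate reduces to fitting peak-arrival time against lateral position; with the shear speed in hand, the usual incompressible-medium relation gives Young's modulus. A minimal sketch (the linear fit and the E = 3*rho*c^2 conversion are standard, the synthetic traces in the check are assumptions):

```python
import numpy as np

def shear_wave_speed(displacements, positions_mm, fs):
    """Time-to-peak shear wave speed: the displacement peak at each lateral
    position arrives later with distance; the inverse slope of a straight-line
    fit of peak time vs. position is the propagation speed (m/s)."""
    t_peak = np.argmax(np.abs(displacements), axis=1) / fs   # s, per position
    slope = np.polyfit(positions_mm, t_peak, 1)[0]           # s per mm
    return 1e-3 / slope                                      # mm/s -> m/s

def youngs_modulus(c_ms, rho=1000.0):
    """E = 3 * rho * c^2 (Pa) for an incompressible elastic medium."""
    return 3.0 * rho * c_ms ** 2
```

    A stiff ablated lesion shows up as a locally higher shear speed, hence a higher estimated modulus than the surrounding tissue.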

  11. Common reflection point migration and velocity analysis for anisotropic media

    NASA Astrophysics Data System (ADS)

    Oropeza, Ernesto V.

    An efficient Kirchhoff-style prestack depth migration, called 'parsimonious' migration, was developed a decade ago for isotropic 2D and 3D media. The common-reflection point (CRP) migration velocity analysis (MVA) was developed later for isotropic media. The isotropic parsimonious migration produces incorrect images when the medium is actually anisotropic. Similarly, isotropic CRP MVA produces incorrect inversions when the medium is anisotropic. In this study both parsimonious depth migration and common-reflection point migration velocity analysis are extended for application to 2D tilted transversely isotropic (TTI) media and illustrated with synthetic P-wave data. While the framework of isotropic parsimonious migration may be retained, the extension to TTI media requires redevelopment of each of the numerical components, including calculation of the phase and group velocity for TTI media, development of a new two-point anisotropic ray tracer, and substitution of an initial-angle, anisotropic shooting ray-trace algorithm for the isotropic one. The 2D model parameterization consists of Thomsen's parameters (Vpo, epsilon, delta) and the tilt angle of the symmetry axis of the TI medium. The parsimonious anisotropic migration algorithm is successfully applied to synthetic data from a TTI version of the Marmousi-2 model. The quality of the image improves when the impulse response is weighted by the anisotropic Fresnel radius. The accuracy and speed of this migration make it useful for anisotropic velocity model building. The common-reflection point migration velocity analysis for TTI media for P-waves includes (and inverts for) Vpo, epsilon, and delta. The orientation of the anisotropic symmetry axis has to be constrained. If it is constrained orthogonal to the layer bottom (as it conventionally is), it can be estimated at each CRP and updated at each iteration without intermediate picking.
The extension to TTI media requires development of a new inversion procedure to include Vpo, epsilon, and delta in the perturbations. The TTI CRP MVA is applied to a single layer to demonstrate its feasibility. Errors larger than 5 degrees in the estimated orientation of the symmetry axis affect the inversion of epsilon and delta, while Vpo is less sensitive to this error. The TTI CRP MVA is also applied to a version of the TTI BP model by layer stripping, so that groups of CRPs are inverted from top to bottom, constraining the model parameters after each previous group of CRPs converges. Vpo, delta and the orientation of the anisotropic symmetry axis (constrained orthogonal to the local reflector orientation) are successfully inverted. Epsilon is less well constrained because of the small acquisition aperture in the data.
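
    For reference, the weak-anisotropy Thomsen approximation of the P-wave phase velocity in a TTI medium can be sketched as below (a simplified illustration, not the exact phase-velocity expression a production ray tracer would use):

```python
import numpy as np

def tti_phase_velocity(theta_deg, vp0, epsilon, delta, tilt_deg=0.0):
    """Weak-anisotropy P-wave phase velocity for a TTI medium.
    theta_deg is the propagation angle from vertical; subtracting the
    tilt of the symmetry axis lets the Thomsen formula see the angle
    measured from the symmetry axis."""
    th = np.radians(theta_deg - tilt_deg)
    s2 = np.sin(th) ** 2
    return vp0 * (1.0 + delta * s2 * np.cos(th) ** 2 + epsilon * s2 ** 2)
```

    Along the symmetry axis the velocity reduces to Vpo, and perpendicular to it the velocity is Vpo(1 + epsilon), as expected from the Thomsen parameter definitions.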

  12. Passive imaging of hydrofractures in the South Belridge diatomite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilderton, D.C.; Patzek, T.W.; Rector, J.W.

    1996-03-01

    The authors present the results of a seismic analysis of two hydrofractures spanning the entire diatomite column (1,110--1,910 ft or 338--582 m) in Shell's Phase 2 steam drive pilot in South Belridge, California. These hydrofractures were induced at two depths (1,110--1,460 and 1,560--1,910 ft) and imaged passively using the seismic energy released during fracturing. The arrivals of shear waves from the cracking rock (microseismic events) were recorded at a 1 ms sampling rate by 56 geophones in three remote observation wells, resulting in 10 GB of raw data. These arrival times were then inverted for the event locations, from which the hydrofracture geometry was inferred. A five-dimensional conjugate-gradient algorithm with a depth-dependent, but otherwise constant shear wave velocity model (CVM) was developed for the inversions. To validate CVM, they created a layered shear wave velocity model of the formation and used it to calculate synthetic arrival times from known locations chosen at various depths along the estimated fracture plane. These arrival times were then inverted with CVM and the calculated locations compared with the known ones, quantifying the systematic error associated with the assumption of constant shear wave velocity. They also performed Monte Carlo sensitivity analyses on the synthetic arrival times to account for all other random errors that exist in field data. After determining the limitations of the inversion algorithm, they hand-picked the shear wave arrival times for both hydrofractures and inverted them with CVM.

  13. Parameterizations of Dry Deposition for the Industrial Source Complex Model

    NASA Astrophysics Data System (ADS)

    Wesely, M. L.; Doskey, P. V.; Touma, J. S.

    2002-05-01

    Improved algorithms have been developed to simulate the dry deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex model system. The dry deposition velocities are described in conventional resistance schemes, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake of gases at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. Standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed to evaluate the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves. The dry deposition velocities of particulate HAPs are simulated with a resistance scheme in which deposition velocity is described for two size modes: a fine mode with particles less than about 2.5 microns in diameter and a coarse mode with larger particles but excluding very coarse particles larger than about 10 microns in diameter. For the fine mode, the deposition velocity is calculated with a parameterization based on observations of sulfate dry deposition. For the coarse mode, a representative settling velocity is assumed. Then the total deposition velocity is estimated as the sum of the two deposition velocities weighted according to the amount of mass expected in the two modes.
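
    The resistance-in-series scheme for gases and the two-mode mass weighting for particles can be sketched as below (hypothetical function names; resistances in s/m, velocities in m/s):

```python
def gas_deposition_velocity(ra, rb, rc):
    """Resistance-in-series deposition velocity for a gas:
    aerodynamic (ra), quasi-laminar (rb), and surface (rc) resistances."""
    return 1.0 / (ra + rb + rc)

def total_particle_deposition_velocity(vd_fine, vd_coarse, fine_mass_fraction):
    """Mass-weighted total dry deposition velocity for particles,
    combining a fine-mode velocity (e.g. from a sulfate-based
    parameterization) and a coarse-mode settling velocity."""
    f = fine_mass_fraction
    return f * vd_fine + (1.0 - f) * vd_coarse
```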

  14. Ionosphere Threat Model Investigations by Using Turkish National Permanent GPS Network

    NASA Astrophysics Data System (ADS)

    Köroǧlu, Meltem; Arikan, Feza; Koroglu, Ozan

    2016-07-01

    Global Positioning System (GPS) signal reliability may decrease significantly due to the variable electron density structure of the ionosphere. In the literature, an ionospheric disturbance is modeled as a linear semi-definite wave which has a width, a gradient and a constant velocity. To provide precise positioning, Ground Based Augmentation Systems (GBAS) are used. A GBAS collects all measurements from network GPS receivers and computes an integrity level for the measurement by comparing the receivers' measurements with ionospheric threat models. Threat models are computed according to ionosphere gradient characteristics. The gradient is defined as the difference of slant delays between the receivers. Slant delays are estimated from the STEC (Slant Total Electron Content) values of the ionosphere, given by the line integral of the electron density between the receiver and the GPS satellite. STEC can be estimated from Global Navigation Satellite System (GNSS) signals by using the IONOLAB-STEC and IONOLAB-BIAS algorithms. Since most ionospheric disturbances are observed locally, threat models for GBAS systems must be derived locally. In this study, an automated ionosphere gradient estimation algorithm was developed by using Turkish National Permanent GPS Network (TNPGN-Active) data for the year 2011. The GPS receivers are grouped within a 150 km radius. For each region, for each day and for each satellite, all STEC values are estimated by using the IONOLAB-STEC and IONOLAB-BIAS softwares (www.ionolab.org). In the gradient estimation, the station-pair method is used. Statistical properties of the valid gradients are extracted as tables for each region, day and satellite. By observing the histograms of the maximum gradients and standard deviations of the gradients with respect to the elevation angle for each day, the anomalies and disturbances of the ionosphere can be detected.
It is observed that maximum gradient estimates are less than 40 mm/km and the maximum standard deviation of the gradients is about 5 mm/km. On stormy days, the gradient levels and standard deviation values become larger than those of quiet days. These observations may also form a basis for the estimation of the velocity and width of traveling ionospheric disturbances. The study is supported by TUBITAK 115E915 and Joint TUBITAK 114E092 and AS CR14/001 projects.
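
    A sketch of the station-pair gradient computation (the 40.3/f² slant-delay conversion is standard; the function names and L1 frequency default are illustrative, not the IONOLAB implementation):

```python
def slant_delay_m(stec_tecu, freq_hz=1.57542e9):
    """Ionospheric slant delay in metres from STEC in TECU
    (1 TECU = 1e16 el/m^2): delay = 40.3 * TEC / f^2."""
    return 40.3 * stec_tecu * 1e16 / freq_hz ** 2

def station_pair_gradient_mm_per_km(stec_a, stec_b, baseline_km):
    """Ionospheric gradient (mm/km) from the slant-delay difference of
    one satellite seen by two receivers a known baseline apart."""
    d_delay_mm = (slant_delay_m(stec_a) - slant_delay_m(stec_b)) * 1e3
    return d_delay_mm / baseline_km
```

    At the GPS L1 frequency, 1 TECU corresponds to roughly 0.162 m of slant delay, so a 1 TECU difference over a 100 km baseline is a gradient of about 1.6 mm/km.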

  15. Aging persons' estimates of vehicular motion.

    PubMed

    Schiff, W; Oldak, R; Shah, V

    1992-12-01

    Estimated arrival times of moving autos were examined in relation to viewer age, gender, motion trajectory, and velocity. Direct push-button judgments were compared with verbal estimates derived from velocity and distance, which were based on the assumption that perceivers compute arrival time from perceived distance and velocity. Experiment 1 showed that direct estimates of younger Ss were most accurate. Older women made the shortest (highly cautious) estimates of when cars would arrive. Verbal estimates were much lower than direct estimates, with little correlation between them. Experiment 2 extended the range of target distances and velocities, with results replicating the main findings of Experiment 1. Judgment accuracy increased with target velocity, and verbal estimates were again poorer estimates of arrival time than direct ones, with different patterns of findings. Using verbal estimates to approximate judgments in traffic situations appears questionable.

  16. Development of an FBG Sensor Array for Multi-Impact Source Localization on CFRP Structures.

    PubMed

    Jiang, Mingshun; Sai, Yaozhang; Geng, Xiangyi; Sui, Qingmei; Liu, Xiaohui; Jia, Lei

    2016-10-24

    We proposed and studied an impact detection system based on a fiber Bragg grating (FBG) sensor array and the multiple signal classification (MUSIC) algorithm to determine the location and the number of low velocity impacts on a carbon fiber-reinforced polymer (CFRP) plate. An FBG linear array, consisting of seven FBG sensors, was used for detecting the ultrasonic signals from impacts. The edge-filter method was employed for signal demodulation. Shannon wavelet transform was used to extract narrow band signals from the impacts. The Gerschgorin disc theorem was used for estimating the number of impacts. We used the MUSIC algorithm to obtain the coordinates of multi-impacts. The impact detection system was tested on a 500 mm × 500 mm × 1.5 mm CFRP plate. The results show that the maximum error and average error of the multi-impacts' localization are 9.2 mm and 7.4 mm, respectively.
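
    A minimal far-field MUSIC pseudospectrum for a uniform linear array, as a hedged illustration of the localization step (the paper's near-field, multi-impact formulation on a plate is more involved):

```python
import numpy as np

def music_spectrum(X, n_sources, d, wavelength, angles_deg):
    """Narrow-band MUSIC pseudospectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex data matrix; d: element spacing."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]       # sample covariance
    w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, : n - n_sources]            # noise subspace
    k = np.arange(n)
    p = []
    for ang in np.radians(angles_deg):
        a = np.exp(-2j * np.pi * d * k * np.sin(ang) / wavelength)
        # pseudospectrum peaks where the steering vector is (nearly)
        # orthogonal to the noise subspace
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)
```

    Scanning the pseudospectrum over candidate angles and picking its peaks yields the source directions; the Gerschgorin disc step mentioned above supplies the number of sources fed to the subspace split.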

  17. Identification of a parametric, discrete-time model of ankle stiffness.

    PubMed

    Guarin, Diego L; Jalaleddini, Kian; Kearney, Robert E

    2013-01-01

    Dynamic ankle joint stiffness defines the relationship between the position of the ankle and the torque acting about it and can be separated into intrinsic and reflex components. Under stationary conditions, intrinsic stiffness can be described by a linear second-order system while reflex stiffness is described by a Hammerstein system whose input is delayed velocity. Given that reflex and intrinsic torque cannot be measured separately, there has been much interest in the development of system identification techniques to separate them analytically. To date, most methods have been nonparametric and as a result there is no direct link between the estimated parameters and those of the stiffness model. This paper presents a novel algorithm for identification of a discrete-time model of ankle stiffness. Through simulations we show that the algorithm gives unbiased results even in the presence of large, non-white noise. Application of the method to experimental data demonstrates that it produces results consistent with previous findings.
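
    For the intrinsic pathway alone, a discrete-time second-order model can be identified by ordinary least squares, as in this simplified sketch (the paper's algorithm additionally handles the reflex Hammerstein path and non-white noise):

```python
import numpy as np

def identify_arx2(u, y):
    """Least-squares fit of a discrete-time second-order model
    y[k] = -a1*y[k-1] - a2*y[k-2] + b0*u[k] + b1*u[k-1] + b2*u[k-2],
    with input u (e.g. position) and output y (e.g. torque).
    Returns [a1, a2, b0, b1, b2]."""
    k = np.arange(2, len(y))
    Phi = np.column_stack([-y[k - 1], -y[k - 2], u[k], u[k - 1], u[k - 2]])
    theta, *_ = np.linalg.lstsq(Phi, y[k], rcond=None)
    return theta
```

    Unlike a nonparametric impulse-response estimate, the fitted coefficients map directly onto the parameters of the stiffness model, which is the point the abstract makes.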

  18. Evaluation of the tablets' surface flow velocities in pan coaters.

    PubMed

    Dreu, Rok; Toschkoff, Gregor; Funke, Adrian; Altmeyer, Andreas; Knop, Klaus; Khinast, Johannes; Kleinebudde, Peter

    2016-09-01

    The tablet pan coating process involves various types of transverse tablet bed motions, ranging from rolling to cascading. To preserve satisfactory coating quality after scale-up, the dynamics of the pan coating process must be understood. The aim of this study was to establish a methodology for estimating the translational surface velocities of the tablets in a pan coater and to assess their dependence on the drum's filling degree, the pan speed, the presence of baffles and selected tablet properties, both in a dry bed system and during coating while varying the drum's filling degree and the pan speed. Experiments were conducted on the laboratory scale and on the pilot scale in side-vented pan coaters. Surface movement of biconvex two-layer tablets was assessed before, during and after the process of active coating. In order to determine the tablets' surface flow velocities, a high-speed video of the tablet surface flow was recorded via a borescope inserted into the coating drum and analysed via a cross-correlation algorithm. The obtained tablet velocity data were arranged in a linear fashion as a function of the coating drum's radius and frequency. Velocity data obtained during coating were close to those of dry tablets after coating. The filling degree had little influence on the tablet velocity profile in a coating drum with baffles but clearly affected it in a coating drum without baffles. In most but not all cases, tablets with a lower static angle of repose had tablet velocity profiles with lower slopes than tablets with higher inter-tablet friction. This particular tablet velocity response can be explained by case-specific values of the tablet bed's dynamic angle of repose. Copyright © 2016 Elsevier B.V. All rights reserved.
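
    The cross-correlation step can be sketched for a 1-D intensity profile: the lag of the correlation peak between successive frames, scaled by pixel size and frame interval, gives the surface speed (a simplified stand-in for the authors' 2-D image analysis):

```python
import numpy as np

def surface_velocity(frame_a, frame_b, dt, mm_per_px):
    """Estimate surface speed from two successive 1-D intensity
    profiles via the lag of the cross-correlation peak."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)  # pixels moved between frames
    return lag * mm_per_px / dt           # mm/s
```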

  19. Ship-based Observations of Turbulence and Stratocumulus Cloud Microphysics in the SE Pacific Ocean from the VOCALS Field Program

    NASA Astrophysics Data System (ADS)

    Fairall, C. W.; Williams, C.; Grachev, A. A.; Brewer, A.; Choukulkar, A.

    2013-12-01

    The VAMOS (VOCALS) field program involved deployment of several measurement systems based on ships, land and aircraft over the SE Pacific Ocean. The NOAA Ship Ronald H. Brown was the primary platform for surface-based measurements, which included the High Resolution Doppler Lidar (HRDL) and the motion-stabilized 94-GHz cloud Doppler radar (W-band radar). In this paper, the data from the W-band radar will be used to study the turbulent and microphysical structure of the stratocumulus clouds prevalent in the region. The radar data consist of a 3 Hz time series of radar parameters (backscatter coefficient, mean Doppler shift, and Doppler width) at 175 range gates (25-m spacing). Several statistical methods to de-convolve the turbulent velocity and gravitational settling velocity are examined and an optimized algorithm is developed. Twenty days of observations are processed to examine in-cloud profiles of mean turbulent statistics (vertical velocity variance, skewness, dissipation rate) in terms of surface fluxes and estimates of entrainment and cloud-top radiative cooling. The clean separation of turbulent and fall velocities will allow us to compute time-averaged drizzle-drop size spectra within and below the cloud that are significantly superior to previous attempts with surface-based marine cloud radar observations.

  20. Density reconstruction in multiparameter elastic full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Sun, Min'ao; Yang, Jizhong; Dong, Liangguo; Liu, Yuzhu; Huang, Chao

    2017-12-01

    Elastic full-waveform inversion (EFWI) is a quantitative data fitting procedure that recovers multiple subsurface parameters from multicomponent seismic data. As density is involved in addition to P- and S-wave velocities, the multiparameter EFWI suffers from more serious tradeoffs. In addition, compared with P- and S-wave velocities, the misfit function is less sensitive to density perturbation. Thus, a robust density reconstruction remains a difficult problem in multiparameter EFWI. In this paper, we develop an improved scattering-integral-based truncated Gauss-Newton method to simultaneously recover P- and S-wave velocities and density in EFWI. In this method, the inverse Gauss-Newton Hessian has been estimated by iteratively solving the Gauss-Newton equation with a matrix-free conjugate gradient algorithm. Therefore, it is able to properly handle the parameter tradeoffs. To give a detailed illustration of the tradeoffs between P- and S-wave velocities and density in EFWI, wavefield-separated sensitivity kernels and the Gauss-Newton Hessian are numerically computed, and their distribution characteristics are analyzed. Numerical experiments on a canonical inclusion model and a modified SEG/EAGE Overthrust model have demonstrated that the proposed method can effectively mitigate the tradeoff effects, and improve multiparameter gradients. Thus, a high convergence rate and an accurate density reconstruction can be achieved.
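
    The matrix-free conjugate-gradient solve of the Gauss-Newton equation can be sketched as below, where only the action of the Hessian on a vector is required (a generic CG sketch, not the authors' scattering-integral implementation):

```python
import numpy as np

def cg_matrix_free(apply_H, b, n_iter=100, tol=1e-10):
    """Solve H x = b by conjugate gradients, given only the action
    of the (symmetric positive-definite) Gauss-Newton Hessian H."""
    x = np.zeros_like(b)
    r = b.copy()          # residual b - H x (x starts at zero)
    p = r.copy()          # search direction
    rs = r @ r
    for _ in range(n_iter):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    In FWI the Hessian-vector product is computed with extra wavefield simulations rather than by forming the matrix, which is what makes the truncated Gauss-Newton approach tractable.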

  1. A parabolic velocity-decomposition method for wind turbines

    NASA Astrophysics Data System (ADS)

    Mittal, Anshul; Briley, W. Roger; Sreenivas, Kidambi; Taylor, Lafayette K.

    2017-02-01

    An economical parabolized Navier-Stokes approximation for steady incompressible flow is combined with a compatible wind turbine model to simulate wind turbine flows, both upstream of the turbine and in downstream wake regions. The inviscid parabolizing approximation is based on a Helmholtz decomposition of the secondary velocity vector and physical order-of-magnitude estimates, rather than an axial pressure gradient approximation. The wind turbine is modeled by distributed source-term forces incorporating time-averaged aerodynamic forces generated by a blade-element momentum turbine model. A solution algorithm is given whose dependent variables are streamwise velocity, streamwise vorticity, and pressure, with secondary velocity determined by two-dimensional scalar and vector potentials. In addition to laminar and turbulent boundary-layer test cases, solutions for a streamwise vortex-convection test problem are assessed by mesh refinement and comparison with Navier-Stokes solutions using the same grid. Computed results for a single turbine and a three-turbine array are presented using the NREL offshore 5-MW baseline wind turbine. These are also compared with an unsteady Reynolds-averaged Navier-Stokes solution computed with full rotor resolution. On balance, the agreement in turbine wake predictions for these test cases is very encouraging given the substantial differences in physical modeling fidelity and computer resources required.

  2. EXOFIT: orbital parameters of extrasolar planets from radial velocities

    NASA Astrophysics Data System (ADS)

    Balan, Sreekumar T.; Lahav, Ofer

    2009-04-01

    Retrieval of orbital parameters of extrasolar planets poses considerable statistical challenges. Due to sparse sampling, measurement errors, parameter degeneracies and modelling limitations, there are no unique values of basic parameters, such as period and eccentricity. Here, we estimate the orbital parameters from radial velocity data in a Bayesian framework by utilizing Markov Chain Monte Carlo (MCMC) simulations with the Metropolis-Hastings algorithm. We follow a methodology recently proposed by Gregory and Ford. Our implementation of MCMC is based on the object-oriented approach outlined by Graves. We make our resulting code, EXOFIT, publicly available with this paper. It can search for either one or two planets, as illustrated on mock data. As an example we re-analysed the orbital solution of companions to HD 187085 and HD 159868 from the published radial velocity data. We confirm the degeneracy reported for orbital parameters of the companion to HD 187085, and show that a low-eccentricity orbit is more probable for this planet. For HD 159868, we obtained a slightly different orbital solution and a relatively high `noise' factor indicating the presence of an unaccounted signal in the radial velocity data. EXOFIT is designed in such a way that it can be extended for a variety of probability models, including different Bayesian priors.
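
    The Metropolis-Hastings core of such an MCMC sampler can be sketched as follows (a generic random-walk sampler; EXOFIT's actual proposal scheme and orbital parameterization differ):

```python
import numpy as np

def metropolis_hastings(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis sampler: propose x' = x + step*N(0, I)
    and accept with probability min(1, post(x')/post(x))."""
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_samples, x.size))
    lp = log_post(x)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples
```

    For a radial-velocity application, `log_post` would combine a Keplerian model likelihood with priors on period, eccentricity and the other orbital elements.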

  3. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently assigned qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity, and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with apertures between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal.
The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea nuclear tests but, due to signal incoherence, failed to contribute to the automatic event detections. It is demonstrated that the smoothed incoherent slowness estimates for the MJAR Pn phases for both tests indicate unambiguously the correct type of phase and a backazimuth estimate within 5 degrees of the great-circle backazimuth. The detection part of the algorithm is applicable to all IMS arrays, and spectrogram-based processing may offer a reduction in the false alarm rate for high frequency signals. Significantly, the local maxima of the scalar functions derived from the transformed spectrogram beams provide good estimates of the signal onset time. High frequency energy is of greater significance for lower event magnitudes and in, for example, the cavity decoupling detection evasion scenario. There is a need to characterize propagation paths with low attenuation of high frequency energy and situations in which parameter estimation on array stations fails.

  4. Snowfall Rate Retrieval Using Passive Microwave Measurements and Its Applications in Weather Forecast and Hydrology

    NASA Technical Reports Server (NTRS)

    Meng, Huan; Ferraro, Ralph; Kongoli, Cezar; Yan, Banghua; Zavodsky, Bradley; Zhao, Limin; Dong, Jun; Wang, Nai-Yu

    2015-01-01

    Passive microwave measurements at certain high frequencies are sensitive to the scattering effect of snow particles and can be utilized to retrieve snowfall properties. Some of the microwave sensors with snowfall sensitive channels are the Advanced Microwave Sounding Unit (AMSU), Microwave Humidity Sounder (MHS) and Advanced Technology Microwave Sounder (ATMS). ATMS is the follow-on sensor to AMSU and MHS. Currently, an AMSU and MHS based land snowfall rate (SFR) product is running operationally at NOAA/NESDIS. Based on the AMSU/MHS SFR, an ATMS SFR algorithm has also been developed. The algorithm performs retrieval in three steps: snowfall detection, retrieval of cloud properties, and estimation of snow particle terminal velocity and snowfall rate. The snowfall detection component utilizes principal component analysis and a logistic regression model. It employs a combination of temperature and water vapor sounding channels to detect the scattering signal from falling snow and derives the probability of snowfall. Cloud properties are retrieved using an inversion method with an iteration algorithm and a two-stream radiative transfer model. A method is adopted to calculate snow particle terminal velocity. Finally, snowfall rate is computed by numerically solving a complex integral. The SFR products are being used mainly in two communities: hydrology and weather forecast. Global blended precipitation products traditionally do not include snowfall derived from satellites because such products were not available operationally in the past. The ATMS and AMSU/MHS SFR now provide the winter precipitation information for these blended precipitation products. Weather forecasters mainly rely on radar and station observations for snowfall forecast. The SFR products can fill in gaps where no conventional snowfall data are available to forecasters. The products can also be used to confirm radar and gauge snowfall data and increase forecasters' confidence in their prediction.

  5. NPP ATMS Snowfall Rate Product

    NASA Technical Reports Server (NTRS)

    Meng, Huan; Ferraro, Ralph; Kongoli, Cezar; Wang, Nai-Yu; Dong, Jun; Zavodsky, Bradley; Yan, Banghua

    2015-01-01

    Passive microwave measurements at certain high frequencies are sensitive to the scattering effect of snow particles and can be utilized to retrieve snowfall properties. Some of the microwave sensors with snowfall sensitive channels are Advanced Microwave Sounding Unit (AMSU), Microwave Humidity Sounder (MHS) and Advance Technology Microwave Sounder (ATMS). ATMS is the follow-on sensor to AMSU and MHS. Currently, an AMSU and MHS based land snowfall rate (SFR) product is running operationally at NOAA/NESDIS. Based on the AMSU/MHS SFR, an ATMS SFR algorithm has been developed recently. The algorithm performs retrieval in three steps: snowfall detection, retrieval of cloud properties, and estimation of snow particle terminal velocity and snowfall rate. The snowfall detection component utilizes principal component analysis and a logistic regression model. The model employs a combination of temperature and water vapor sounding channels to detect the scattering signal from falling snow and derive the probability of snowfall (Kongoli et al., 2015). In addition, a set of NWP model based filters is also employed to improve the accuracy of snowfall detection. Cloud properties are retrieved using an inversion method with an iteration algorithm and a two-stream radiative transfer model (Yan et al., 2008). A method developed by Heymsfield and Westbrook (2010) is adopted to calculate snow particle terminal velocity. Finally, snowfall rate is computed by numerically solving a complex integral. NCEP CMORPH analysis has shown that integration of ATMS SFR has improved the performance of CMORPH-Snow. The ATMS SFR product is also being assessed at several NWS Weather Forecast Offices for its usefulness in weather forecast.

  6. Continuous Data Assimilation for a 2D Bénard Convection System Through Horizontal Velocity Measurements Alone

    NASA Astrophysics Data System (ADS)

    Farhat, Aseel; Lunasin, Evelyn; Titi, Edriss S.

    2017-06-01

    In this paper we propose a continuous data assimilation (downscaling) algorithm for a two-dimensional Bénard convection problem. Specifically we consider the two-dimensional Boussinesq system of a layer of incompressible fluid between two solid horizontal walls, with no-normal flow and stress-free boundary conditions on the walls, and the fluid is heated from the bottom and cooled from the top. In this algorithm, we incorporate the observables as a feedback (nudging) term in the evolution equation of the horizontal velocity. We show that under an appropriate choice of the nudging parameter and the size of the spatial coarse mesh observables, and under the assumption that the observed data are error free, the solution of the proposed algorithm converges at an exponential rate, asymptotically in time, to the unique exact unknown reference solution of the original system, associated with the observed data on the horizontal component of the velocity.
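
    The nudging idea can be illustrated on a toy linear system, where the assimilated copy is forced toward the reference solution through one observed component only (a schematic, not the Boussinesq setting of the paper):

```python
import numpy as np

def nudged_trajectory(A, mu, obs_index, x0, y0, dt, n_steps):
    """Integrate a reference system dx/dt = A x and a nudged copy
    dy/dt = A y - mu * (y - x) restricted to the observed component,
    returning the norm of the error y - x at each step."""
    x = np.array(x0, dtype=float)
    y = np.array(y0, dtype=float)
    err = []
    for _ in range(n_steps):
        innov = np.zeros_like(y)
        innov[obs_index] = y[obs_index] - x[obs_index]  # observed mismatch
        x = x + dt * (A @ x)
        y = y + dt * (A @ y - mu * innov)
        err.append(np.linalg.norm(y - x))
    return np.array(err)
```

    For a rotation-type system observed in one component, a suitable nudging gain makes the error decay exponentially even though the second component is never observed, which mirrors the paper's convergence result for horizontal velocity measurements alone.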

  7. Improved Analysis of Time Series with Temporally Correlated Errors: An Algorithm that Reduces the Computation Time.

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2016-12-01

    Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
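
    The noise model can be illustrated with the standard fractional-integration recursion that generates 1/f^n power-law noise from white noise (a sketch of the underlying filter idea; the est_noise implementation details differ):

```python
import numpy as np

def powerlaw_filter(n_points, index):
    """Impulse response h of the fractional-integration filter that
    turns white noise into 1/f^index power-law noise:
    h[0] = 1, h[k] = h[k-1] * (k - 1 + index/2) / k."""
    h = np.empty(n_points)
    h[0] = 1.0
    for k in range(1, n_points):
        h[k] = h[k - 1] * (k - 1 + index / 2.0) / k
    return h

def powerlaw_noise(n_points, index, rng):
    """Colored noise as the convolution of white noise with the filter."""
    w = rng.standard_normal(n_points)
    return np.convolve(powerlaw_filter(n_points, index), w)[:n_points]
```

    For index n = 2 the filter coefficients are all ones, so the colored noise reduces to a random walk (the cumulative sum of the white-noise driving sequence), a useful sanity check.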

  8. WSR-88D doppler radar detection of corn earworm moth migration.

    PubMed

    Westbrook, J K; Eyster, R S; Wolf, W W

    2014-07-01

    Corn earworm (Lepidoptera: Noctuidae) (CEW) populations infesting one crop production area may rapidly migrate and infest distant crop production areas. Although entomological radars have detected corn earworm moth migrations, the spatial extent of the radar coverage has been limited to a small horizontal view above crop production areas. The Weather Service Radar (version 88D) (WSR-88D) continuously monitors the radar-transmitted energy reflected by, and radial speed of, biota as well as by precipitation over areas that may encompass crop production areas. We analyzed data from the WSR-88D radar (S-band) at Brownsville, Texas, and related these data to aerial concentrations of CEW estimated by a scanning entomological radar (X-band) and wind velocity measurements from rawinsonde and pilot balloon ascents. The WSR-88D radar reflectivity was positively correlated (r2=0.21) with the aerial concentration of corn earworm-size insects measured by a scanning X-band radar. WSR-88D radar constant altitude plan position indicator estimates of wind velocity were positively correlated with wind speed (r2=0.56) and wind direction (r2=0.63) measured by pilot balloons and rawinsondes. The results reveal that WSR-88D radar measurements of insect concentration and displacement speed and direction can be used to estimate the migratory flux of corn earworms and other nocturnal insects, information that could benefit areawide pest management programs. In turn, identification of the effects of spatiotemporal patterns of migratory flights of corn earworm-size insects on WSR-88D radar measurements may lead to the development of algorithms that increase the accuracy of WSR-88D radar measurements of reflectivity and wind velocity for operational meteorology.

  9. WSR-88D doppler radar detection of corn earworm moth migration

    NASA Astrophysics Data System (ADS)

    Westbrook, J. K.; Eyster, R. S.; Wolf, W. W.

    2014-07-01

    Corn earworm (Lepidoptera: Noctuidae) (CEW) populations infesting one crop production area may rapidly migrate and infest distant crop production areas. Although entomological radars have detected corn earworm moth migrations, the spatial extent of the radar coverage has been limited to a small horizontal view above crop production areas. The Weather Service Radar (version 88D) (WSR-88D) continuously monitors the radar-transmitted energy reflected by, and radial speed of, biota as well as by precipitation over areas that may encompass crop production areas. We analyzed data from the WSR-88D radar (S-band) at Brownsville, Texas, and related these data to aerial concentrations of CEW estimated by a scanning entomological radar (X-band) and wind velocity measurements from rawinsonde and pilot balloon ascents. The WSR-88D radar reflectivity was positively correlated (r^2 = 0.21) with the aerial concentration of corn earworm-size insects measured by a scanning X-band radar. WSR-88D radar constant altitude plan position indicator estimates of wind velocity were positively correlated with wind speed (r^2 = 0.56) and wind direction (r^2 = 0.63) measured by pilot balloons and rawinsondes. The results reveal that WSR-88D radar measurements of insect concentration and displacement speed and direction can be used to estimate the migratory flux of corn earworms and other nocturnal insects, information that could benefit areawide pest management programs. In turn, identification of the effects of spatiotemporal patterns of migratory flights of corn earworm-size insects on WSR-88D radar measurements may lead to the development of algorithms that increase the accuracy of WSR-88D radar measurements of reflectivity and wind velocity for operational meteorology.

  10. An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.

    PubMed

    Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun

    2017-09-01

    The selection of swarm leaders (i.e., the personal best and global best), is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and then each particle is assigned accordingly to optimize each subproblem. A novel archive-guided velocity update method is designed to guide the swarm for exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
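
    The archive-guided idea can be caricatured as a standard PSO velocity update in which both guides are drawn from the external archive rather than from the personal and global bests. This is only a schematic sketch under our own simplifying assumption of random leader selection; the actual AgMOPSO leaders are chosen according to the paper's decomposition scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def archive_guided_velocity(x, v, archive, w=0.4, c1=1.5, c2=1.5):
    # Both guides come from the external archive (illustrative stand-in
    # for AgMOPSO's subproblem-based leader selection).
    local = archive[rng.integers(len(archive))]
    glob = archive[rng.integers(len(archive))]
    r1 = rng.random(x.size)
    r2 = rng.random(x.size)
    return w * v + c1 * r1 * (local - x) + c2 * r2 * (glob - x)
```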

  11. A hierarchical framework for air traffic control

    NASA Astrophysics Data System (ADS)

    Roy, Kaushik

    Air travel in recent years has been plagued by record delays, with over $8 billion in direct operating costs being attributed to 100 million flight delay minutes in 2007. Major contributing factors to delay include weather, congestion, and aging infrastructure; the Next Generation Air Transportation System (NextGen) aims to alleviate these delays through an upgrade of the air traffic control system. Changes to large-scale networked systems such as air traffic control are complicated by the need for coordinated solutions over disparate temporal and spatial scales. Individual air traffic controllers must ensure aircraft maintain safe separation locally with a time horizon of seconds to minutes, whereas regional plans are formulated to efficiently route flows of aircraft around weather and congestion on the order of every hour. More efficient control algorithms that provide a coordinated solution are required to safely handle a larger number of aircraft in a fixed amount of airspace. Improved estimation algorithms are also needed to provide accurate aircraft state information and situational awareness for human controllers. A hierarchical framework is developed to simultaneously solve the sometimes conflicting goals of regional efficiency and local safety. Careful attention is given in defining the interactions between the layers of this hierarchy. In this way, solutions to individual air traffic problems can be targeted and implemented as needed. First, the regional traffic flow management problem is posed as an optimization problem and shown to be NP-Hard. Approximation methods based on aggregate flow models are developed to enable real-time implementation of algorithms that reduce the impact of congestion and adverse weather. Second, the local trajectory design problem is solved using a novel slot-based sector model. 
This model is used to analyze sector capacity under varying traffic patterns, providing a more comprehensive understanding of how increased automation in NextGen will affect the overall performance of air traffic control. The dissertation also provides solutions to several key estimation problems that support corresponding control tasks. Throughout the development of these estimation algorithms, aircraft motion is modeled using hybrid systems, which encapsulate both the discrete flight mode of an aircraft and the evolution of continuous states such as position and velocity. The target-tracking problem is posed as one of hybrid state estimation, and two new algorithms are developed to exploit structure specific to aircraft motion, especially near airports. First, discrete mode evolution is modeled using state-dependent transitions, in which the likelihood of changing flight modes is dependent on aircraft state. Second, an estimator is designed for systems with limited mode changes, including arrival aircraft. Improved target tracking facilitates increased safety in collision avoidance and trajectory design problems. A multiple-target tracking and identity management algorithm is developed to improve situational awareness for controllers about multiple maneuvering targets in a congested region. Finally, tracking algorithms are extended to predict aircraft landing times; estimated time of arrival prediction is one example of important decision support information for air traffic control.
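
    The state-dependent transition idea can be illustrated with a two-mode (cruise/descent) Markov chain whose switching probability depends on a continuous state such as altitude. The numbers below are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

def mode_transition_matrix(altitude_m):
    # State-dependent transitions (illustrative values): the closer the
    # aircraft is to the ground, the likelier a cruise -> descent switch.
    p_switch = np.clip(1.0 - altitude_m / 10000.0, 0.05, 0.95)
    return np.array([[1.0 - p_switch, p_switch],   # from cruise
                     [0.1, 0.9]])                  # from descent

def predict_mode_probs(mu, altitude_m):
    # Chapman-Kolmogorov prediction of the mode probability vector.
    return mu @ mode_transition_matrix(altitude_m)
```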

  12. Understanding Measurements Returned by the Helioseismic and Magnetic Imager

    NASA Astrophysics Data System (ADS)

    Cohen, Daniel Parke; Criscuoli, Serena

    2014-06-01

    The Helioseismic and Magnetic Imager (HMI) aboard the Solar Dynamics Observatory (SDO) observes the Sun at the FeI 6173 Å line and returns full disk maps of line-of-sight observables including the magnetic field flux, FeI line width, line depth, and continuum intensity. To properly interpret such data it is important to understand any issues with the HMI and the pipeline that produces these observables. To this end, HMI data were analyzed at both daily intervals for a span of 3 years at disk center in the quiet Sun and hourly intervals for a span of 200 hours around an active region. Systematic effects attributed to issues with instrument adjustments and re-calibrations, variations in the transmission filters, and the orbital velocities of SDO were found, while the actual physical evolution of these observables was difficult to determine. Velocity and magnetic flux measurements are less affected, as the aforementioned effects are partially compensated for by the HMI algorithm; the other observables are instead affected by larger uncertainties. In order to model these uncertainties, the HMI pipeline was tested with synthetic spectra generated through various 1D atmosphere models with a radiative-transfer code (the RH code). It was found that HMI estimates of line width, line depth, and continuum intensity are highly dependent on the shape of the line, and therefore highly dependent on the line-of-sight angle and the magnetic field associated with the model. The best estimates are found for quiet regions at disk center, for which the relative differences between theoretical and HMI algorithm values are 6-8% for line width, 10-15% for line depth, and 0.1-0.2% for continuum intensity. In general, the relative difference between theoretical values and HMI estimates increases toward the limb and with increasing field strength; the HMI algorithm seems to fail in regions with fields larger than ~2000 G. 
This work is carried out through the National Solar Observatory Research Experiences for Undergraduate (REU) site program, which is co-funded by the Department of Defense in partnership with the NSF REU Program. The National Solar Observatory is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation.

  13. Time Series Reconstruction of Surface Flow Velocity on Marine-terminating Outlet Glaciers

    NASA Astrophysics Data System (ADS)

    Jeong, Seongsu

    The flow velocity of a glacier and its fluctuation are valuable data for studying the ice sheet's contribution to sea level rise through an understanding of its dynamic structure. Repeat-image feature tracking (RIFT) is a platform-independent, feature-tracking-based velocity measurement methodology effective for building a time series of velocity maps from optical images. However, the limited availability of perfectly conditioned images motivated us to improve the robustness of the algorithm. With this background, we developed an improved RIFT algorithm based on the multiple-image, multiple-chip algorithm presented in Ahn and Howat (2011). The test results confirm that the new RIFT algorithm better avoids outliers, and analysis of the multiple matching results showed that the individual matching results work in a complementary manner to deduce the correct displacements. LANDSAT 8 is a new satellite in the LANDSAT program that began operation in 2013. The improved radiometric performance of the OLI sensor aboard the satellite is expected to enable better velocity mapping than ETM+ aboard LANDSAT 7. However, it had not yet been well studied in which cases the new sensor would be beneficial, or how large the improvement would be. We carried out a simulation-based comparison between ETM+ and OLI and confirmed that OLI outperforms ETM+ in low-contrast conditions, especially in polar night, under translucent cloud cover, and over bright, weakly textured up-glacier areas. We identified a rift on the ice shelf of Pine Island Glacier, located in the West Antarctic ice sheet. Unlike previous events, the current rift began propagating from the center of the ice shelf. To analyze this unique event, we applied the improved RIFT algorithm to OLI images to retrieve a time series of velocity maps. 
    The analyses revealed that the part of the ice shelf below the rift is changing its speed, and that the zone of splashing crevasses on the shear margin is migrating toward the center of the shelf. Given the concurrent disintegration of the ice melange at the western part of the terminus, we postulate that the change in flow regime is attributable to the loss of the resistive force exerted by the melange. Several topics remain to be addressed to further improve the RIFT algorithm. Because coregistration error is a significant contributor to the velocity measurement error, a method to mitigate it needs to be devised. Also, considering that the domain of the RIFT product spans not only space but also time, its regridding and gap-filling will benefit from extending the domain to both space and time.
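
    At the heart of any RIFT-style method is chip matching by normalized cross-correlation. Below is a brute-force, single-chip sketch of that core step; the multiple-image, multiple-chip logic of Ahn and Howat (2011) adds redundancy and outlier rejection on top of it.

```python
import numpy as np

def ncc_displacement(ref_chip, search_img):
    # Slide the reference chip over the search image and return the
    # (row, col) offset maximizing normalized cross-correlation.
    rh, rw = ref_chip.shape
    sh, sw = search_img.shape
    ref = ref_chip - ref_chip.mean()
    best, best_off = -np.inf, (0, 0)
    for dy in range(sh - rh + 1):
        for dx in range(sw - rw + 1):
            win = search_img[dy:dy + rh, dx:dx + rw]
            w = win - win.mean()
            denom = np.sqrt((ref**2).sum() * (w**2).sum())
            if denom == 0:
                continue                    # skip featureless windows
            score = (ref * w).sum() / denom
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off, best
```

    Dividing the offset (in pixels, times the ground sampling distance) by the time separation of the image pair gives the surface velocity at that chip.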

  14. Teleseismic tomography for imaging Earth's upper mantle

    NASA Astrophysics Data System (ADS)

    Aktas, Kadircan

    Teleseismic tomography is an important imaging tool in earthquake seismology, used to characterize lithospheric structure beneath a region of interest. In this study I investigate three different tomographic techniques applied to real and synthetic teleseismic data, with the aim of imaging the velocity structure of the upper mantle. First, by applying well established traveltime tomographic techniques to teleseismic data from southern Ontario, I obtained high-resolution images of the upper mantle beneath the lower Great Lakes. Two salient features of the 3D models are: (1) a patchy, NNW-trending low-velocity region, and (2) a linear, NE-striking high-velocity anomaly. I interpret the high-velocity anomaly as a possible relict slab associated with ca. 1.25 Ga subduction, whereas the low-velocity anomaly is interpreted as a zone of alteration and metasomatism associated with the ascent of magmas that produced the Late Cretaceous Monteregian plutons. The next part of the thesis is concerned with adaptation of existing full-waveform tomographic techniques for application to teleseismic body-wave observations. The method used here is intended to be complementary to traveltime tomography, and to take advantage of efficient frequency-domain methodologies that have been developed for inverting large controlled-source datasets. Existing full-waveform acoustic modelling and inversion codes have been modified to handle plane waves impinging from the base of the lithospheric model at a known incidence angle. A processing protocol has been developed to prepare teleseismic observations for the inversion algorithm. To assess the validity of the acoustic approximation, the processing procedure and modelling-inversion algorithm were tested using synthetic seismograms computed using an elastic Kirchhoff integral method. 
These tests were performed to evaluate the ability of the frequency-domain full-waveform inversion algorithm to recover topographic variations of the Moho under a variety of realistic scenarios. Results show that frequency-domain full-waveform tomography is generally successful in recovering both sharp and discontinuous features. Thirdly, I developed a new method for creating an initial background velocity model for the inversion algorithm, which is sufficiently close to the true model so that convergence is likely to be achieved. I adapted a method named Deformable Layer Tomography (DLT), which adjusts interfaces between layers rather than velocities within cells. I applied this method to a simple model comprising a single uniform crustal layer and a constant-velocity mantle, separated by an irregular Moho interface. A series of tests was performed to evaluate the sensitivity of the DLT algorithm; the results show that my algorithm produces useful results within a realistic range of incident-wave obliquity, incidence angle and signal-to-noise level. Keywords. Teleseismic tomography, full waveform tomography, deformable layer tomography, lower Great Lakes, crust and upper mantle.

  15. VizieR Online Data Catalog: HD20794 HARPS radial velocities (Feng+, 2017)

    NASA Astrophysics Data System (ADS)

    Feng, F.; Tuomi, M.; Jones, H. R. A.

    2017-05-01

    HARPS radial velocities, activity indices and differential radial velocities for HD 20794. The HARPS spectra are available in the European Southern Observatory archive, and are processed using the TERRA algorithm (Anglada-Escude and Butler, 2012, Cat. J/ApJS/200/15). (1 data file).

  16. Acoustic emission-based sensor analysis and damage classification for structural health monitoring of composite structures

    NASA Astrophysics Data System (ADS)

    Uprety, Bibhisha

    Within the aerospace industry the need to detect and locate impact events, even when no visible damage is present, is important both from the maintenance and design perspectives. This research focused on the use of Acoustic Emission (AE) based sensing technologies to identify impact events and characterize damage modes in composite structures for structural health monitoring. Six commercially available piezoelectric AE sensors were evaluated for use with impact location estimation algorithms under development at the University of Utah. Both active and passive testing were performed to estimate the time of arrival and plate wave mode velocities for impact location estimation. Four sensors were recommended for further comparative investigations. Furthermore, instrumented low-velocity impact experiments were conducted on quasi-isotropic carbon/epoxy composite laminates to initiate specific types of damage: matrix cracking, delamination and fiber breakage. AE signal responses were collected during impacting and the test panels were ultrasonically C-scanned after impact to identify the internal damage corresponding to the AE signals. Matrix cracking and delamination damage produced using more compliant test panels and larger diameter impactor were characterized by lower frequency signals while fiber breakage produced higher frequency responses. The results obtained suggest that selected characteristics of sensor response signals can be used both to determine whether damage is produced during impacting and to characterize the types of damage produced in an impacted composite structure.

  17. An interactive Doppler velocity dealiasing scheme

    NASA Astrophysics Data System (ADS)

    Pan, Jiawen; Chen, Qi; Wei, Ming; Gao, Li

    2009-10-01

    Doppler weather radars are capable of providing high quality wind data at a high spatial and temporal resolution. However, operational application of Doppler velocity data from weather radars is hampered by the infamous limitation of the velocity ambiguity. This paper reviews the cause of velocity folding and presents the unfolding method recently implemented for the CINRAD systems. A simple interactive method for velocity data, which corrects de-aliasing errors, has been developed and tested. It is concluded that the algorithm is very efficient and produces high quality velocity data.
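
    The basic unfolding operation behind any dealiasing scheme is simple: measured radial velocities fold into the Nyquist co-interval, and the correction adds the multiple of 2*v_Nyquist that brings the measurement closest to a reference value (a neighboring gate, or a value supplied interactively). A minimal sketch of this step, not the CINRAD implementation:

```python
def dealias(v_measured, v_reference, v_nyquist):
    # Choose the integer fold count n that minimizes the distance of
    # v_measured + 2*n*v_nyquist to the reference velocity.
    n = round((v_reference - v_measured) / (2.0 * v_nyquist))
    return v_measured + 2.0 * v_nyquist * n
```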

  18. Analysis of convergence of an evolutionary algorithm with self-adaptation using a stochastic Lyapunov function.

    PubMed

    Semenov, Mikhail A; Terkel, Dmitri A

    2003-01-01

    This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte Carlo simulations.
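
    The kind of algorithm analyzed above can be written down compactly. Below is a generic (1+1) evolutionary algorithm with log-normal self-adaptation of the mutation strength, in the spirit of the paper's model (the exact mutation distributions in the paper may differ): the control parameter sigma is never selected directly, only indirectly through the fitness of the offspring it produces.

```python
import numpy as np

rng = np.random.default_rng(42)

def evolve(f, x0, sigma0, generations=2000, tau=0.3):
    # (1+1)-EA with self-adaptation: the step size sigma mutates
    # log-normally and survives only if its offspring is at least as fit.
    x, sigma, fx = x0, sigma0, f(x0)
    for _ in range(generations):
        s_child = sigma * np.exp(tau * rng.standard_normal())
        x_child = x + s_child * rng.standard_normal()
        f_child = f(x_child)
        if f_child <= fx:               # direct selection on fitness
            x, sigma, fx = x_child, s_child, f_child
    return x, sigma

# unimodal test function: convergence toward 0 is roughly exponential
x_best, sigma_best = evolve(lambda x: x * x, x0=10.0, sigma0=1.0)
```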

  19. A simple algorithm for sequentially incorporating gravity observations in seismic traveltime tomography

    USGS Publications Warehouse

    Parsons, T.; Blakely, R.J.; Brocher, T.M.

    2001-01-01

    The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
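
    The velocity-to-density conversion step uses Gardner's rule, which in its common form reads rho = a * Vp^b, with a ≈ 0.31 and b ≈ 0.25 for Vp in m/s and rho in g/cm^3. A one-line sketch (the forward gravity calculation and the gradient-weighted adjustment are omitted):

```python
import numpy as np

def gardner_density(vp_mps, a=0.31, b=0.25):
    # Gardner's rule: rho [g/cm^3] = a * Vp^b, Vp in m/s.
    return a * np.power(vp_mps, b)
```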

  20. A statistical framework for genetic association studies of power curves in bird flight

    PubMed Central

    Lin, Min; Zhao, Wei

    2006-01-01

    How the power required for bird flight varies as a function of forward speed can be used to predict the flight style and behavioral strategy of a bird for feeding and migration. A U-shaped curve relating power to flight velocity has been observed in many birds, which is consistent with the theoretical prediction of aerodynamic models. In this article, we present a general genetic model for fine mapping of quantitative trait loci (QTL) responsible for power curves in a sample of birds drawn from a natural population. This model is developed within the maximum likelihood context, implemented with the EM algorithm for estimating the population genetic parameters of QTL and the simplex algorithm for estimating the QTL genotype-specific parameters of power curves. Using Monte Carlo simulation derived from empirical observations of power curves in the European starling (Sturnus vulgaris), we demonstrate how the underlying QTL for power curves can be detected from molecular markers and how the QTL detected affect the most appropriate flight speeds used to design an optimal migration strategy. The results from our model can be directly integrated into a conceptual framework for understanding flight origin and evolution. PMID:17066123

  1. Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar

    NASA Astrophysics Data System (ADS)

    Lottman, Brian Todd

    1998-09-01

    This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing, solid-state coherent Doppler lidar on a fixed ground-based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectrum. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new ("novel") class of estimators is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer-simulated data. Wind field statistics are produced for actual data for a cloud deck and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates of the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates of simple wind field statistics between cloud layers.
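
    A traditional spectral estimate of the mean velocity takes the first moment of the Doppler spectrum and scales it by half the wavelength (for a monostatic coherent lidar, v = (lambda/2) * f_D). A minimal sketch of that baseline estimator, not the novel estimators proposed in the work:

```python
import numpy as np

def spectral_velocity(signal, fs, wavelength):
    # Periodogram first moment -> mean Doppler shift -> radial velocity.
    spec = np.abs(np.fft.fft(signal))**2
    freqs = np.fft.fftfreq(signal.size, d=1.0 / fs)
    f_mean = np.sum(freqs * spec) / np.sum(spec)
    return 0.5 * wavelength * f_mean
```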

  2. Application of the one-dimensional Fourier transform for tracking moving objects in noisy environments

    NASA Technical Reports Server (NTRS)

    Rajala, S. A.; Riddle, A. N.; Snyder, W. E.

    1983-01-01

    In Riddle and Rajala (1981), an algorithm was presented which operates on an image sequence to identify all sets of pixels having the same velocity. The algorithm operates by performing a transformation in which all pixels with the same two-dimensional velocity map to a peak in a transform space. The transform can be decomposed into applications of the one-dimensional Fourier transform and can therefore benefit from the computational advantages of the FFT. This paper is concerned with the fundamental limitations of that algorithm, particularly its sensitivity to image-disturbing factors such as noise, jitter, and clutter. A modification to the algorithm is then proposed which increases its robustness in the presence of these disturbances.
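
    A closely related one-dimensional construction shows how uniform motion becomes a peak after Fourier-domain processing: a shift between two frames is a linear phase ramp in the frequency domain, and the inverse transform of the normalized cross-spectrum peaks at the displacement. This is a phase-correlation sketch for illustration, not the specific transform of Riddle and Rajala (1981):

```python
import numpy as np

def fft_shift_estimate(frame0, frame1):
    # Phase correlation: normalize the cross-spectrum to unit magnitude,
    # then the inverse FFT is (ideally) a delta at the displacement.
    F0, F1 = np.fft.fft(frame0), np.fft.fft(frame1)
    cross = F1 * np.conj(F0)
    cross /= np.abs(cross) + 1e-12
    corr = np.real(np.fft.ifft(cross))
    return int(np.argmax(corr))
```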

  3. Microseismic monitoring of soft-rock landslide: contribution of a 3D velocity model for the location of seismic sources.

    NASA Astrophysics Data System (ADS)

    Floriane, Provost; Jean-Philippe, Malet; Cécile, Doubre; Julien, Gance; Alessia, Maggi; Agnès, Helmstetter

    2015-04-01

    Characterizing the micro-seismic activity of landslides is important for a better understanding of the physical processes controlling landslide behaviour. However, the location of the seismic sources on landslides is a challenging task, mostly because of (a) the recording system geometry, (b) the lack of clear P-wave arrivals and clear wave differentiation, and (c) the heterogeneous velocities of the ground. The objective of this work is therefore to test whether the integration of a 3D velocity model in probabilistic seismic source location codes improves the quality of the determination, especially in depth. We studied the clay-rich landslide of Super-Sauze (French Alps). Most of the seismic events (rockfalls, slidequakes, tremors...) are generated in the upper part of the landslide near the main scarp. The seismic recording system is composed of two antennas of four vertical seismometers each, located on the east and west sides of the seismically active part of the landslide. A refraction seismic campaign was conducted in August 2014 and a 3D P-wave model was estimated using the Quasi-Newton tomography inversion algorithm. The shots of the seismic campaign are used as calibration shots to test the performance of the different location methods and to further update the 3D velocity model. Natural seismic events are detected with a semi-automatic technique using a frequency threshold. The first arrivals are picked using a kurtosis-based method and compared to manual picking. Several location methods were finally tested. We compared a non-linear probabilistic method coupled with the 3D P-wave model and a beam-forming method inverted for an apparent velocity. We found that the Quasi-Newton tomography inversion algorithm provides results coherent with the original underlying topography. The velocity ranges from 500 m/s at the surface to 3000 m/s in the bedrock. 
    For the majority of the calibration shots, the use of a 3D velocity model significantly improves the results of the location procedure using P-wave arrivals. All the shots were fired 50 centimeters below the surface, and hence the vertical error could not be determined from the seismic campaign. We further discriminate between the rockfalls and the slidequakes occurring on the landslide using the depth computed with the 3D velocity model. This could be an additional criterion to automatically classify the events.
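
    The location step itself can be illustrated with the simplest possible version: an exhaustive grid search over candidate source positions in a homogeneous medium, with the unknown origin time absorbed by demeaning the residuals. The probabilistic location with a 3D model used in the study is far more elaborate; this sketch only conveys the structure of the problem.

```python
import numpy as np

def locate_source(stations, arrivals, grid, velocity):
    # For each trial point, predict straight-ray travel times, remove
    # the best-fitting origin time (the mean residual), and keep the
    # point with the smallest RMS residual.
    best_rms, best_pt = np.inf, None
    for pt in grid:
        tt = np.linalg.norm(stations - pt, axis=1) / velocity
        resid = arrivals - tt
        resid = resid - resid.mean()        # absorb unknown origin time
        rms = np.sqrt(np.mean(resid**2))
        if rms < best_rms:
            best_rms, best_pt = rms, pt
    return best_pt, best_rms
```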

  4. Estimating secular velocities from GPS data contaminated by postseismic motion at sites with limited pre-earthquake data

    NASA Astrophysics Data System (ADS)

    Murray, J. R.; Svarc, J. L.

    2016-12-01

    Constant secular velocities estimated from Global Positioning System (GPS)-derived position time series are a central input for modeling interseismic deformation in seismically active regions. Both postseismic motion and temporally correlated noise produce long-period signals that are difficult to separate from secular motion and can bias velocity estimates. For GPS sites installed post-earthquake it is especially challenging to uniquely estimate velocities and postseismic signals and to determine when the postseismic transient has decayed sufficiently to enable use of subsequent data for estimating secular rates. Within 60 km of the 2003 M6.5 San Simeon and 2004 M6 Parkfield earthquakes in California, 16 continuous GPS sites (group 1) were established prior to mid-2001, and 52 stations (group 2) were installed following the events. We use group 1 data to investigate how early in the post-earthquake time period one may reliably begin using group 2 data to estimate velocities. For each group 1 time series, we obtain eight velocity estimates using observation time windows with successively later start dates (2006 - 2013) and a parameterization that includes constant velocity, annual, and semi-annual terms but no postseismic decay. We compare these to velocities estimated using only pre-San Simeon data to find when the pre- and post-earthquake velocities match within uncertainties. To obtain realistic velocity uncertainties, for each time series we optimize a temporally correlated noise model consisting of white, flicker, random walk, and, in some cases, band-pass filtered noise contributions. Preliminary results suggest velocities can be reliably estimated using data from 2011 to the present. Ongoing work will assess velocity bias as a function of epicentral distance and length of post-earthquake time series as well as explore spatio-temporal filtering of detrended group 1 time series to provide empirical corrections for postseismic motion in group 2 time series.
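
    The velocity parameterization described (a constant rate plus annual and semi-annual terms) is a linear least-squares problem. A minimal sketch, ignoring the temporally correlated noise model that the study optimizes separately:

```python
import numpy as np

def fit_velocity(t_years, pos):
    # Design matrix: intercept, secular rate, annual and semiannual
    # sine/cosine terms; returns the model vector (m[1] = velocity).
    w = 2.0 * np.pi                       # annual angular frequency
    G = np.column_stack([np.ones_like(t_years), t_years,
                         np.sin(w * t_years), np.cos(w * t_years),
                         np.sin(2 * w * t_years), np.cos(2 * w * t_years)])
    m, *_ = np.linalg.lstsq(G, pos, rcond=None)
    return m
```

    With temporally correlated noise, the same design matrix is used but the normal equations are weighted by the inverse of the optimized noise covariance, which typically inflates the velocity uncertainty substantially.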

  5. Velocity variations and uncertainty from transdimensional P-wave tomography of North America

    NASA Astrophysics Data System (ADS)

    Burdick, Scott; Lekić, Vedran

    2017-05-01

    High-resolution models of seismic velocity variations constructed using body-wave tomography inform the study of the origin, fate and thermochemical state of mantle domains. In order to reliably relate these variations to material properties including temperature, composition and volatile content, we must accurately retrieve both the patterns and amplitudes of variations and quantify the uncertainty associated with the estimates of each. For these reasons, we image the mantle beneath North America with P-wave traveltimes from USArray using a novel method for 3-D probabilistic body-wave tomography. The method uses a Transdimensional Hierarchical Bayesian framework with a reversible-jump Markov Chain Monte Carlo algorithm in order to generate an ensemble of possible velocity models. We analyse this ensemble solution to obtain the posterior probability distribution of velocities, thereby yielding error bars and enabling rigorous hypothesis testing. Overall, we determine that the average uncertainty (1σ) of compressional wave velocity estimates beneath North America is ∼0.25 per cent dVP/VP, increasing with proximity to complex structure and decreasing with depth. The addition of USArray data reduces the uncertainty beneath the Eastern US by over 50 per cent in the upper mantle and 25-40 per cent below the transition zone and ∼30 per cent throughout the mantle beneath the Western US. In the absence of damping and smoothing, we recover amplitudes of variations 10-80 per cent higher than a standard inversion approach. Accounting for differences in data coverage, we infer that the length scale of heterogeneity is ∼50 per cent longer at shallow depths beneath the continental platform than beneath tectonically active regions. We illustrate the model trade-off analysis for the Cascadia slab and the New Madrid Seismic Zone, where we find that smearing due to the limitations of the illumination is relatively minor.

  6. Gas-hydrate concentration estimated from P- and S-wave velocities at the Mallik 2L-38 research well, Mackenzie Delta, Canada

    NASA Astrophysics Data System (ADS)

    Carcione, José M.; Gei, Davide

    2004-05-01

We estimate the concentration of gas hydrate at the Mallik 2L-38 research site using P- and S-wave velocities obtained from well logging and vertical seismic profiles (VSP). The theoretical velocities are obtained from a generalization of Gassmann's modulus to three phases (rock frame, gas hydrate and fluid). The dry-rock moduli are estimated from the log profiles, in sections where the rock is assumed to be fully saturated with water. We obtain hydrate concentrations up to 75%, with average values of 37% and 21% from the VSP P- and S-wave velocities, respectively, and 60% and 57% from the sonic-log P- and S-wave velocities, respectively. These averages are similar to estimates obtained from hydrate-dissociation modeling and Archie methods. The estimates based on the P-wave velocities are more reliable than those based on the S-wave velocities.
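The inversion idea (solve a rock-physics velocity model for the hydrate fraction that reproduces an observed velocity) can be sketched with a deliberately simplified stand-in. The Wyllie time-average equation below is not the paper's three-phase Gassmann generalization, and the velocities and porosity are assumed values.

```python
# Stand-in velocity model: Wyllie time average over water, hydrate, matrix.
V_W, V_H, V_M = 1500.0, 3650.0, 4500.0  # P velocities, m/s (assumed)
PHI = 0.35                              # porosity (assumed)

def v_model(s_h):
    """P velocity when a fraction s_h of the pore space holds hydrate."""
    slow = PHI * s_h / V_H + PHI * (1.0 - s_h) / V_W + (1.0 - PHI) / V_M
    return 1.0 / slow

def invert(v_obs, tol=1e-8):
    """Bisection for the hydrate saturation matching an observed velocity."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if v_model(mid) < v_obs:
            lo = mid  # v_model increases monotonically with s_h
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_est = invert(v_model(0.37))  # round-trip: should recover 0.37
```

Any monotone forward model (including a three-phase Gassmann modulus) can be inverted the same way.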

  7. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

Debris flow velocities are commonly back-calculated from superelevation events, which require subjective estimates of the radii of curvature of bends in the debris flow channel, or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii-of-curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
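A statistical relation of the kind proposed (flow velocity regressed on channel slope and flow depth) might be sketched as a power-law fit. The data below are synthetic stand-ins for the 30 measured velocities, and the power-law form itself is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data standing in for the 30 measured velocities:
# channel slope S (m/m), flow depth h (m), observed velocity v (m/s).
S = rng.uniform(0.05, 0.4, 30)
h = rng.uniform(0.5, 5.0, 30)
v = 3.0 * h**0.5 * S**0.3 * rng.lognormal(0.0, 0.1, 30)

# Fit the power law v = a * h^b * S^c by linear least squares in log space.
A = np.column_stack([np.ones(30), np.log(h), np.log(S)])
coef, *_ = np.linalg.lstsq(A, np.log(v), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

# Point prediction for a new reach (2 m deep, slope 0.2); a range would
# come from the residual scatter of the fit.
v_pred = a * 2.0**b * 0.2**c
```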

  8. Noninvasive calculation of the aortic blood pressure waveform from the flow velocity waveform: a proof of concept

    PubMed Central

    Vennin, Samuel; Mayer, Alexia; Li, Ye; Fok, Henry; Clapp, Brian; Alastruey, Jordi

    2015-01-01

    Estimation of aortic and left ventricular (LV) pressure usually requires measurements that are difficult to acquire during the imaging required to obtain concurrent LV dimensions essential for determination of LV mechanical properties. We describe a novel method for deriving aortic pressure from the aortic flow velocity. The target pressure waveform is divided into an early systolic upstroke, determined by the water hammer equation, and a diastolic decay equal to that in the peripheral arterial tree, interposed by a late systolic portion described by a second-order polynomial constrained by conditions of continuity and conservation of mean arterial pressure. Pulse wave velocity (PWV, which can be obtained through imaging), mean arterial pressure, diastolic pressure, and diastolic decay are required inputs for the algorithm. The algorithm was tested using 1) pressure data derived theoretically from prespecified flow waveforms and properties of the arterial tree using a single-tube 1-D model of the arterial tree, and 2) experimental data acquired from a pressure/Doppler flow velocity transducer placed in the ascending aorta in 18 patients (mean ± SD: age 63 ± 11 yr, aortic BP 136 ± 23/73 ± 13 mmHg) at the time of cardiac catheterization. For experimental data, PWV was calculated from measured pressures/flows, and mean and diastolic pressures and diastolic decay were taken from measured pressure (i.e., were assumed to be known). Pressure reconstructed from measured flow agreed well with theoretical pressure: mean ± SD root mean square (RMS) error 0.7 ± 0.1 mmHg. Similarly, for experimental data, pressure reconstructed from measured flow agreed well with measured pressure (mean RMS error 2.4 ± 1.0 mmHg). First systolic shoulder and systolic peak pressures were also accurately rendered (mean ± SD difference 1.4 ± 2.0 mmHg for peak systolic pressure). 
This is the first noninvasive derivation of aortic pressure based on fluid dynamics (flow and wave speed) in the aorta itself. PMID:26163442
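The early-systolic portion of the reconstruction rests on the water hammer relation dP = rho * PWV * dU, which can be sketched directly; the flow-velocity upstroke and all parameter values below are illustrative.

```python
import numpy as np

RHO = 1060.0          # blood density, kg/m^3
PWV = 6.0             # pulse wave velocity, m/s (obtainable from imaging)
P_DIA = 73.0          # diastolic pressure, mmHg
MMHG_PER_PA = 1.0 / 133.322

# Illustrative early-systolic flow-velocity upstroke U(t), m/s.
t = np.linspace(0.0, 0.08, 81)
U = 1.0 * np.sin(np.pi * t / 0.16)  # rises from 0 toward ~1 m/s

# Water hammer: the pressure rise tracks the flow-velocity rise,
# dP = rho * PWV * dU, added onto the diastolic pressure.
P = P_DIA + RHO * PWV * U * MMHG_PER_PA  # mmHg
```

With RHO = 1060 kg/m^3 and PWV = 6 m/s, a 1 m/s velocity rise maps to roughly 48 mmHg of pressure rise, which is the right order for the systolic upstroke.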

  9. Heat and solute tracers: how do they compare in heterogeneous aquifers?

    PubMed

    Irvine, Dylan J; Simmons, Craig T; Werner, Adrian D; Graf, Thomas

    2015-04-01

    A comparison of groundwater velocity in heterogeneous aquifers estimated from hydraulic methods, heat and solute tracers was made using numerical simulations. Aquifer heterogeneity was described by geostatistical properties of the Borden, Cape Cod, North Bay, and MADE aquifers. Both heat and solute tracers displayed little systematic under- or over-estimation in velocity relative to a hydraulic control. The worst cases were under-estimates of 6.63% for solute and 2.13% for the heat tracer. Both under- and over-estimation of velocity from the heat tracer relative to the solute tracer occurred. Differences between the estimates from the tracer methods increased as the mean velocity decreased, owing to differences in rates of molecular diffusion and thermal conduction. The variance in estimated velocity using all methods increased as the variance in log-hydraulic conductivity (K) and correlation length scales increased. The variance in velocity for each scenario was remarkably small when compared to σ2 ln(K) for all methods tested. The largest variability identified was for the solute tracer where 95% of velocity estimates ranged by a factor of 19 in simulations where 95% of the K values varied by almost four orders of magnitude. For the same K-fields, this range was a factor of 11 for the heat tracer. The variance in estimated velocity was always lowest when using heat as a tracer. The study results suggest that a solute tracer will provide more understanding about the variance in velocity caused by aquifer heterogeneity and a heat tracer provides a better approximation of the mean velocity. © 2013, National Ground Water Association.

  10. Dynamic-MLC leaf control utilizing on-flight intensity calculations: a robust method for real-time IMRT delivery over moving rigid targets.

    PubMed

    McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy

    2007-08-01

An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations in which the target intermittently moves faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms will provide a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel.
When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting error in the two intensity profiles used was 0.1 +/- 3.1% and -0.5 +/- 2.8% relative to the maximum of the intensity profiles. For the same target motion, the error was shown to increase rapidly as (1) the maximum MLC leaf velocity was reduced below 75% of the maximum target velocity and (2) the system response time was increased.

  11. Ultrasonic device for real-time sewage velocity and suspended particles concentration measurements.

    PubMed

    Abda, F; Azbaid, A; Ensminger, D; Fischer, S; François, P; Schmitt, P; Pallarès, A

    2009-01-01

Within the framework of a technological research and innovation network in water and environment technologies (RITEAU, Réseau de Recherche et d'Innovation Technologique Eau et Environnement), our research group, in collaboration with industrial partners and other research institutions, has been in charge of developing a suitable flowmeter: an ultrasonic device that simultaneously measures the water flow and the concentrations of size classes of suspended particles. Working on the pulsed ultrasound principle, our multi-frequency device (1 to 14 MHz) allows flow velocity and water height measurement and estimation of suspended solids concentration. Velocity measurements rely on the coherent Doppler principle. A self-developed frequency estimator, the so-called Spectral Identification method, was used and compared to the classical Pulse-Pair method. Several measurement campaigns on a wastewater collector of the French city of Strasbourg gave very satisfactory results and showed smaller standard deviations for the Doppler frequency extracted by the Spectral Identification method. A specific algorithm was also developed for the water height measurements. It relies on the acoustic impedance contrast at the water surface and on the localisation and behaviour of the corresponding peak in the collected backscattering data. This algorithm was successfully tested on long-term measurements on the same wastewater collector. A large part of the article is devoted to the measurement of suspended solids concentrations. Our data analysis consists of adapting the well-described acoustic behaviour of sand to that of wastewater particles. Both acoustic attenuation and acoustic backscattering data over multiple frequencies are analyzed to extract size classes and their respective concentrations.
Under dry weather conditions, the massic backscattering coefficient and the overall size distribution evolved similarly regardless of the measurement site, suggesting a globally consistent behaviour of wastewater particles. By comparison with sampling data, our analysis led to the characterization of two particle groups: those occurring during rain events and those typical of wastewater under dry weather conditions. Although the several weeks of data recorded on several wastewater collectors have already given encouraging results, the validation of our data inversion method is still in progress.
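The classical Pulse-Pair method used as the comparison baseline estimates the Doppler frequency from the phase of the lag-one autocorrelation of the complex demodulated signal. A minimal sketch with synthetic data follows (the Spectral Identification method itself is not reproduced here); carrier frequency, PRF, and noise level are illustrative.

```python
import numpy as np

PRF = 2000.0  # pulse repetition frequency, Hz
F0 = 1.0e6    # carrier frequency, Hz (illustrative 1 MHz channel)
C = 1480.0    # speed of sound in water, m/s

rng = np.random.default_rng(2)

# Synthetic complex Doppler signal: 200 Hz Doppler shift plus noise.
n = np.arange(128)
f_dopp_true = 200.0
z = (np.exp(2j * np.pi * f_dopp_true * n / PRF)
     + 0.1 * (rng.standard_normal(128) + 1j * rng.standard_normal(128)))

# Pulse-Pair estimator: Doppler frequency from the phase of the
# lag-1 autocorrelation of the slow-time signal.
R1 = np.mean(z[1:] * np.conj(z[:-1]))
f_dopp = PRF * np.angle(R1) / (2.0 * np.pi)

# Radial flow velocity from the Doppler relation v = c * f_d / (2 * f0).
v = C * f_dopp / (2.0 * F0)
```

The estimator is unambiguous only for |f_d| < PRF/2, which is why the PRF bounds the measurable velocity range.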

  12. Formation Flying Satellite Control Around the L2 Sun-Earth Libration Point

    NASA Technical Reports Server (NTRS)

    Hamilton, Nicholas H.; Folta, David; Carpenter, Russell; Bauer, Frank (Technical Monitor)

    2002-01-01

    This paper discusses the development of a linear control algorithm for formations in the vicinity of the L2 sun-Earth libration point. The development of a simplified extended Kalman filter is included as well. Simulations are created for the analysis of the stationkeeping and various formation maneuvers of the Stellar Imager mission. The simulations provide tracking error, estimation error, and control effort results. For formation maneuvering, the formation spacecraft track to within 4 meters of their desired position and within 1.5 millimeters per second of their desired zero velocity. The filter, with few exceptions, keeps the estimation errors within their three-sigma values. Without noise, the controller performs extremely well, with the formation spacecraft tracking to within several micrometers. Each spacecraft uses around 1 to 2 grams of propellant per maneuver, depending on the circumstances.
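A minimal linear Kalman filter for one axis of relative position and velocity illustrates the estimation step; this is a generic textbook filter standing in for the simplified EKF described above, with all noise levels and the drift rate assumed for illustration.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
H = np.array([[1.0, 0.0]])             # position-only measurements
Q = 1e-6 * np.eye(2)                   # process noise covariance
R = np.array([[0.01]])                 # measurement noise covariance (0.1^2)

rng = np.random.default_rng(7)
x_true = np.array([0.0, 0.02])         # true state: 2 cm/s drift
x_hat = np.zeros(2)
P = np.eye(2)

for _ in range(200):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, 0.1, 1)   # noisy position measurement
    # Predict.
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P
```

The unobserved velocity component is recovered from position measurements alone through the dynamics model, which is the same mechanism the formation filter relies on.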

  13. High Dynamic Velocity Range Particle Image Velocimetry Using Multiple Pulse Separation Imaging

    PubMed Central

    Persoons, Tim; O’Donovan, Tadhg S.

    2011-01-01

The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range; however, flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods. PMID:22346564
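The MPS selection step can be sketched as follows: for each location, keep the longest pulse separation whose particle displacement is still reliably resolvable, then convert that displacement to velocity. The selection criterion below is a simplification of the paper's robust criterion, and all numbers are illustrative.

```python
import numpy as np

# Pulse separations recorded for every location, shortest to longest.
dts = np.array([100e-6, 400e-6, 1600e-6])  # s
PX = 20e-6                                 # pixel size in the flow, m
MAX_DISP = 8.0                             # max reliable displacement, px

# Hypothetical true velocities at three locations (fast, medium, slow).
true_v = np.array([1.5, 0.4, 0.05])        # m/s
disp = true_v[:, None] * dts[None, :] / PX # displacement in px, per (loc, dt)

v_mps = np.empty(3)
for i in range(3):
    ok = disp[i] <= MAX_DISP                   # separations still in range
    j = np.argmax(np.where(ok, dts, -np.inf))  # longest valid separation
    v_mps[i] = disp[i, j] * PX / dts[j]
```

Long separations favor slow flow (large, well-resolved displacement); short separations keep fast flow within range, which is how the composite field covers both extremes at once.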

  14. Inversion of azimuthally dependent NMO velocity in transversely isotropic media with a tilted axis of symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grechka, V.; Tsvankin, I.

    2000-02-01

Just as the transversely isotropic model with a vertical symmetry axis (VTI media) is typical for describing horizontally layered sediments, transverse isotropy with a tilted symmetry axis (TTI) describes dipping TI layers (such as tilted shale beds near salt domes) or crack systems. P-wave kinematic signatures in TTI media are controlled by the velocity VP0 in the symmetry direction, Thomsen's anisotropic coefficients ε and δ, and the orientation (tilt ν and azimuth β) of the symmetry axis. Here, the authors show that all five parameters can be obtained from azimuthally varying P-wave NMO velocities measured for two reflectors with different dips and/or azimuths (one of the reflectors can be horizontal). The shear-wave velocity VS0 in the symmetry direction, which has negligible influence on P-wave kinematic signatures, can be found only from the moveout of shear waves. Using the exact NMO equation, the authors examine the propagation of errors in observed moveout velocities into estimated values of the anisotropic parameters and establish the necessary conditions for a stable inversion procedure. Since the azimuthal variation of the NMO velocity is elliptical, each reflection event provides them with up to three constraints on the model parameters. Generally, the five parameters responsible for P-wave velocity can be obtained from two P-wave ellipses, but the feasibility of the moveout inversion strongly depends on the tilt ν. While most of the analysis is carried out for a single layer, the authors also extend the inversion algorithm to vertically heterogeneous TTI media above a dipping reflector using the generalized Dix equation. A synthetic example for a strongly anisotropic, stratified TTI medium demonstrates a high accuracy of the inversion.
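The elliptical azimuthal dependence of NMO velocity that underlies the inversion can be sketched by fitting the three ellipse coefficients to moveout velocities measured at several azimuths. In this parameterization 1/V^2(alpha) = w11 cos^2(alpha) + 2 w12 sin(alpha) cos(alpha) + w22 sin^2(alpha), so each reflection event yields up to three constraints, as the abstract states. Values below are illustrative and noiseless.

```python
import numpy as np

# Illustrative ellipse coefficients (units s^2/km^2): fast axis ~3.4 km/s,
# slow axis ~3.0 km/s, slightly rotated.
w_true = np.array([1 / 3.0**2, 0.01, 1 / 3.4**2])

alphas = np.deg2rad([0.0, 30.0, 60.0, 90.0, 120.0, 150.0])
G = np.column_stack([np.cos(alphas)**2,
                     2 * np.sin(alphas) * np.cos(alphas),
                     np.sin(alphas)**2])
d = G @ w_true  # "measured" 1/V^2 at each azimuth

# Least-squares fit of the three ellipse coefficients.
w_est, *_ = np.linalg.lstsq(G, d, rcond=None)

# Semi-axes (fast/slow NMO velocities) from the eigen-decomposition.
W = np.array([[w_est[0], w_est[1]], [w_est[1], w_est[2]]])
eigvals, eigvecs = np.linalg.eigh(W)
v_axes = 1.0 / np.sqrt(eigvals)
```

Parameter estimation for TTI media then amounts to matching such fitted ellipses, from two or more reflectors, against the exact NMO equation.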

  15. A Comprehensive Framework for Use of NEXRAD Data in Hydrometeorology and Hydrology

    NASA Astrophysics Data System (ADS)

    Krajewski, W. F.; Bradley, A.; Kruger, A.; Lawrence, R. E.; Smith, J. A.; Steiner, M.; Ramamurthy, M. K.; del Greco, S. A.

    2004-12-01

    The overall objective of this project is to provide the broad science and engineering communities with ready access to the vast archives and real-time information collected by the national network of NEXRAD weather radars. The main focus is on radar-rainfall data for use in hydrology, hydrometeorology, and water resources. Currently, the NEXRAD data, which are archived at NOAA's National Climatic Data Center (NCDC), are converted to operational products and used by forecasters in real time. The scientific use of the full resolution NEXRAD information is presently limited because current methods of accessing this data require considerable expertise in weather radars, data quality control, formatting and handling, and radar-rainfall algorithms. The goal is to provide professionals in the scientific, engineering, education, and public policy sectors with on-demand NEXRAD data and custom products that are at high spatial and temporal resolutions. Furthermore, the data and custom products will be of a quality suitable for scientific discovery in hydrology and hydrometeorology and in data formats that are convenient to a wide spectrum of users. We are developing a framework and a set of tools for access, visualization, management, rainfall estimation algorithms, and scientific analysis of full resolution NEXRAD data. The framework will address the issues of data dissemination, format conversions and compression, management of terabyte-sized datasets, rapid browsing and visualization, metadata selection and calculation, relational and XML databases, integration with geographic information systems, data queries and knowledge mining, and Web Services. The tools will perform instantaneous comprehensive quality control and radar-rainfall estimation using a variety of algorithms. 
The algorithms that the user can select will range from "quick look" to complex and computing-intensive, and will include operational algorithms used by federal agencies as well as research-grade experimental methods. Options available to the user will include user-specified spatial and temporal resolution, ancillary products such as storm advection velocity fields, estimation of uncertainty associated with rainfall maps, and mathematical synthesis of the products. The data and the developed tools will be provided to the community via the services and the infrastructure of Unidata and the NCDC.

  16. Moving target detection for frequency agility radar by sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi

    2016-09-01

Frequency agility radar, with randomly varied carrier frequency from pulse to pulse, exhibits superior performance compared to the conventional fixed carrier frequency pulse-Doppler radar against electromagnetic interference. A novel moving target detection (MTD) method is proposed for estimating target velocity in frequency agility radar from the pulses within a coherent processing interval by using sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments are performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.
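A generic software sketch of orthogonal matching pursuit (not the FPGA implementation) illustrates the sparse-reconstruction step: the target's velocity corresponds to a few active atoms in an overcomplete dictionary, recovered greedily from the measurements. The dictionary and sparse vector below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sparse-recovery problem: m measurements, n dictionary atoms,
# k active atoms (standing in for a few occupied velocity bins).
m, n, k = 40, 120, 3
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns (atoms)
x_true = np.zeros(n)
x_true[[7, 45, 98]] = [2.0, -1.5, 1.0]
y = A @ x_true                          # noiseless measurements

# OMP: greedily pick the atom most correlated with the residual,
# then re-fit all selected atoms jointly by least squares.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
```

The per-iteration work is one matrix-vector correlation and one small least-squares solve, which is what makes the algorithm attractive for FPGA implementation.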

  17. Moving target detection for frequency agility radar by sparse reconstruction.

    PubMed

    Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi

    2016-09-01

Frequency agility radar, with randomly varied carrier frequency from pulse to pulse, exhibits superior performance compared to the conventional fixed carrier frequency pulse-Doppler radar against electromagnetic interference. A novel moving target detection (MTD) method is proposed for estimating target velocity in frequency agility radar from the pulses within a coherent processing interval by using sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments are performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.

  18. Universal algorithms and programs for calculating the motion parameters in the two-body problem

    NASA Technical Reports Server (NTRS)

    Bakhshiyan, B. T.; Sukhanov, A. A.

    1979-01-01

    The algorithms and FORTRAN programs for computing positions and velocities, orbital elements and first and second partial derivatives in the two-body problem are presented. The algorithms are applicable for any value of eccentricity and are convenient for computing various navigation parameters.
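For the elliptical case, the core of such a two-body routine is solving Kepler's equation and mapping the eccentric anomaly to perifocal position and velocity. The sketch below covers elliptical orbits only (the paper's algorithms are universal in eccentricity, which this sketch does not attempt); the gravitational parameter and orbit are illustrative.

```python
import math

MU = 398600.4418  # km^3/s^2 (Earth, illustrative)

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = M if e < 0.8 else math.pi
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def propagate(a, e, M):
    """Perifocal position (km) and velocity (km/s) for an elliptical orbit."""
    E = kepler_E(M, e)
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1.0 - e * e) * math.sin(E)
    n = math.sqrt(MU / a**3)                       # mean motion
    denom = 1.0 - e * math.cos(E)
    vx = -a * n * math.sin(E) / denom
    vy = a * n * math.sqrt(1.0 - e * e) * math.cos(E) / denom
    return (x, y), (vx, vy)

pos, vel = propagate(a=7000.0, e=0.1, M=1.0)
```

The result satisfies the vis-viva energy relation v^2/2 - mu/r = -mu/(2a), a convenient internal consistency check.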

  19. Inversion of multicomponent seismic data and rock-physics interpretation for evaluating lithology, fracture and fluid distribution in heterogeneous anisotropic reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilya Tsvankin; Kenneth L. Larner

    2004-11-17

    Within the framework of this collaborative project with the Lawrence Livermore National Laboratory (LLNL) and Stanford University, the Colorado School of Mines (CSM) group developed and implemented a new efficient approach to the inversion and processing of multicomponent, multiazimuth seismic data in anisotropic media. To avoid serious difficulties in the processing of mode-converted (PS) waves, we devised a methodology for transforming recorded PP- and PS-wavefields into the corresponding SS-wave reflection data that can be processed by velocity-analysis algorithms designed for pure (unconverted) modes. It should be emphasized that this procedure does not require knowledge of the velocity model and can be applied to data from arbitrarily anisotropic, heterogeneous media. The azimuthally varying reflection moveouts of the PP-waves and constructed SS-waves are then combined in anisotropic stacking-velocity tomography to estimate the velocity field in the depth domain. As illustrated by the case studies discussed in the report, migration of the multicomponent data with the obtained anisotropic velocity model yields a crisp image of the reservoir that is vastly superior to that produced by conventional methods. The scope of this research essentially amounts to building the foundation of 3D multicomponent, anisotropic seismology. We have also worked with the LLNL and Stanford groups on relating the anisotropic parameters obtained from seismic data to stress, lithology, and fluid distribution using a generalized theoretical treatment of fractured, poroelastic rocks.

  20. Seismic structure beneath Mt Vesuvius from receiver function analysis and local earthquakes tomography: evidences for location and geometry of the magma chamber

    NASA Astrophysics Data System (ADS)

    Agostinetti, N. Piana; Chiarabba, C.

    2008-12-01

    The recognition and localization of magmatic fluids are pre-requisites for evaluating the volcano hazard of the highly urbanized area of Mt Vesuvius. Here we show evidence and constraints for the volumetric estimation of magmatic fluids underneath this sleeping volcano. We use Receiver Functions for teleseismic data recorded at a temporary broad-band station installed on the volcano to constrain the S-wave velocity structure in the crust. Receiver Functions are analysed and inverted using the Neighbourhood Algorithm approach. The 1-D S-velocity profile is jointly interpreted and discussed with a new Vp and Vp/Vs image obtained by applying double difference tomographic techniques to local earthquakes. Seismologic data define the geometry of an axial, cylindrical high-Vp, high-Vs body consisting of shallow solidified material, probably the remnants of the caldera, and ultramafic rocks paving the crustal magma chamber. Between these two anomalies, we find a small region where the shear wave velocity drops, revealing the presence of magma at relatively shallow depths. The volume of fluids (30 km3) is sufficient to contribute to future explosive eruptions.

  1. Prestack depth migration for complex 2D structure using phase-screen propagators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, P.; Huang, Lian-Jie; Burch, C.

    1997-11-01

    We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra I, high quality images were obtained in as little as 8.7 CPU hours and adequate images in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
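A single depth step of a phase-screen (split-step Fourier) propagator can be sketched for a monochromatic wavefield: a background phase shift applied in the wavenumber domain, followed by a thin-screen correction in space for the slowness perturbation. Grid sizes and velocities below are illustrative, and evanescent components are simply not advanced in this sketch.

```python
import numpy as np

nx, dx, dz = 256, 10.0, 10.0   # lateral grid, spacing (m), depth step (m)
freq = 15.0                    # Hz
v_ref = 2000.0                 # reference (background) velocity, m/s
omega = 2.0 * np.pi * freq

rng = np.random.default_rng(4)
v = v_ref + 100.0 * rng.standard_normal(nx)  # laterally varying velocity
# Illustrative wavefield at the current depth: a smooth Gaussian pulse.
u = np.exp(-((np.arange(nx) - nx // 2) * dx) ** 2 / (2 * 200.0**2)).astype(complex)

kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
kz = np.sqrt(np.maximum((omega / v_ref) ** 2 - kx**2, 0.0))

# Background extrapolation in the wavenumber domain...
u_bg = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))

# ...then the screen: residual phase from the slowness perturbation.
screen = np.exp(1j * omega * (1.0 / v - 1.0 / v_ref) * dz)
u_next = u_bg * screen
```

The screen is a pure phase factor, so the step preserves amplitude; choosing v_ref well keeps the residual slowness (1/v - 1/v_ref) small, which is why the method tolerates only moderate lateral velocity contrast per step.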

  2. Application of acoustic-Doppler current profiler and expendable bathythermograph measurements to the study of the velocity structure and transport of the Gulf Stream

    NASA Technical Reports Server (NTRS)

    Joyce, T. M.; Dunworth, J. A.; Schubert, D. M.; Stalcup, M. C.; Barbour, R. L.

    1988-01-01

    The degree to which Acoustic-Doppler Current Profiler (ADCP) and expendable bathythermograph (XBT) data can provide quantitative measurements of the velocity structure and transport of the Gulf Stream is addressed. An algorithm is used to generate salinity from temperature and depth using an historical Temperature/Salinity relation for the NW Atlantic. Results have been simulated using CTD data and comparing real and pseudo salinity files. Errors are typically less than 2 dynamic cm for the upper 800 m out of a total signal of 80 cm (across the Gulf Stream). When combined with ADCP data for a near-surface reference velocity, transport errors in isopycnal layers are less than about 1 Sv (10 to the 6th power cu m/s), as is the difference in total transport for the upper 800 m between real and pseudo data. The method is capable of measuring the real variability of the Gulf Stream, and when combined with altimeter data, can provide estimates of the geoid slope with oceanic errors of a few parts in 10 to the 8th power over horizontal scales of 500 km.

  3. Effects of external intermittency and mean shear on the spectral inertial-range exponent in a turbulent square jet

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Xu, M.; Pollard, A.; Mi, J.

    2013-05-01

    This study investigates by experiment the dependence of the inertial-range exponent m of the streamwise velocity spectrum on the external intermittency factor γ (≡ the fraction of time the flow is fully turbulent) and the mean shear S in a turbulent square jet. Velocity measurements were made using hot-wire anemometry in the jet at 15 < x/De < 40, where De denotes the exit equivalent diameter, and for an exit Reynolds number of Re = 50 000. The Taylor microscale Reynolds number Rλ varies from about 70 to 450 in the present study. The TERA (turbulent energy recognition algorithm) method proposed by Falco and Gendrich [in Near-Wall Turbulence: 1988 Zoran Zariç Memorial Conference, edited by S. J. Kline and N. H. Afgan (Hemisphere Publishing Corp., Washington, DC, 1990), pp. 911-931] is discussed and applied to estimate the intermittency factor from velocity signals. It is shown that m depends strongly on γ but negligibly on S. More specifically, m varies with γ following m = mt + [ln γ/(−0.0173)]^(1/2), where mt denotes the spectral exponent found in fully turbulent regions.
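The intermittency-factor estimation can be sketched with a TERA-style detector: threshold a smoothed surrogate of the instantaneous turbulent-energy signal |u du/dt| and take the flagged fraction of the record. The detector, smoothing window, and threshold below are illustrative rather than Falco and Gendrich's exact criterion.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic velocity record: a turbulent burst embedded in quiescent flow.
n, dt = 20000, 1e-4
u = 0.02 * rng.standard_normal(n)
turb = np.zeros(n, dtype=bool)
turb[5000:12000] = True                  # known turbulent stretch (gamma = 0.35)
u[turb] += 0.5 * rng.standard_normal(int(turb.sum()))

# TERA-style detection function: |u * du/dt|, smoothed, then thresholded.
dudt = np.gradient(u, dt)
det = np.abs(u * dudt)
win = 200
det_s = np.convolve(det, np.ones(win) / win, mode="same")
flag = det_s > 0.1 * det_s.max()         # ad hoc threshold for illustration

gamma = flag.mean()                      # external intermittency factor
```

With the spectral exponent measured zone by zone, such a gamma estimate is what allows m to be studied as a function of intermittency rather than of position alone.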

  4. Migration of dispersive GPR data

    USGS Publications Warehouse

    Powers, M.H.; Oden, C.P.; ,

    2004-01-01

    Electrical conductivity and dielectric and magnetic relaxation phenomena cause electromagnetic propagation to be dispersive in earth materials. Both velocity and attenuation may vary with frequency, depending on the frequency content of the propagating energy and the nature of the relaxation phenomena. A minor amount of velocity dispersion is associated with high attenuation. For this reason, measuring effects of velocity dispersion in ground penetrating radar (GPR) data is difficult. With a dispersive forward model, GPR responses to propagation through materials with known frequency-dependent properties have been created. These responses are used as test data for migration algorithms that have been modified to handle specific aspects of dispersive media. When either Stolt or Gazdag migration methods are modified to correct for just velocity dispersion, the results are little changed from standard migration. For nondispersive propagating wavefield data, like deep seismic, ensuring correct phase summation in a migration algorithm is more important than correctly handling amplitude. However, the results of migrating model responses to dispersive media with modified algorithms indicate that, in this case, correcting for frequency-dependent amplitude loss has a much greater effect on the result than correcting for proper phase summation. A modified migration is only effective when it includes attenuation recovery, performing deconvolution and migration simultaneously.

  5. Guided filter and convolutional network based tracking for infrared dim moving target

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Qin, Hanlin; Rong, Shenghui; Zhao, Dong; Du, Juan

    2017-09-01

    The dim moving target usually submerges in strong noise, and its motion observability is degraded by numerous false alarms at low signal-to-noise ratios. A tracking algorithm that integrates the Guided Image Filter (GIF) and the Convolutional Neural Network (CNN) into the particle filter framework is presented to cope with the uncertainty of dim targets. First, the initial target template is treated as a guidance to filter incoming templates depending on similarities between the guidance and candidate templates. The GIF algorithm utilizes the structure in the guidance and performs as an edge-preserving smoothing operator. Therefore, the guidance helps to preserve the detail of valuable templates and makes inaccurate ones blurry, alleviating tracking deviation effectively. Besides, the two-layer CNN method is adopted to obtain a powerful appearance representation. Subsequently, a Bayesian classifier is trained with these discriminative yet strong features. Moreover, an adaptive learning factor is introduced to prevent the update of the classifier's parameters when a target undergoes severe background clutter. Finally, classifier responses of particles are utilized to generate particle importance weights, and a re-sampling procedure retains particles according to their weights. In the prediction stage, a second-order transition model incorporates the target velocity to estimate the current position. Experimental results demonstrate that the presented algorithm outperforms several related algorithms in accuracy.

  6. The design and development of signal-processing algorithms for an airborne x-band Doppler weather radar

    NASA Technical Reports Server (NTRS)

    Nicholson, Shaun R.

    1994-01-01

    Improved measurements of precipitation will aid our understanding of the role of latent heating on global circulations. Spaceborne meteorological sensors such as the planned precipitation radar and microwave radiometers on the Tropical Rainfall Measurement Mission (TRMM) provide for the first time a comprehensive means of making these global measurements. Pre-TRMM activities include development of precipitation algorithms using existing satellite data, computer simulations, and measurements from limited aircraft campaigns. Since the TRMM radar will be the first spaceborne precipitation radar, there is limited experience with such measurements, and only recently have airborne radars become available that can attempt to address the issue of the limitations of a spaceborne radar. There are many questions regarding how much attenuation occurs in various cloud types and the effect of cloud vertical motions on the estimation of precipitation rates. The EDOP program being developed by NASA GSFC will provide data useful for testing both rain-retrieval algorithms and the importance of vertical motions on the rain measurements. The purpose of this report is to describe the design and development of real-time embedded parallel algorithms used by EDOP to extract reflectivity and Doppler products (velocity, spectrum width, and signal-to-noise ratio) as the first step in the aforementioned goals.
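
    As a minimal illustration of one of the Doppler products mentioned, the classic pulse-pair estimator obtains the mean radial velocity from the phase of the lag-1 autocorrelation of the complex (I/Q) pulse samples. This is a generic textbook sketch, not EDOP's real-time parallel code, and the sign convention (toward/away) varies between radars:

```python
import numpy as np

def pulse_pair_velocity(z, prf, wavelength):
    """Pulse-pair estimate: mean Doppler velocity from the lag-1
    autocorrelation phase of the complex pulse samples."""
    r1 = np.mean(np.conj(z[:-1]) * z[1:])          # lag-1 autocorrelation
    return wavelength * prf * np.angle(r1) / (4.0 * np.pi)

# synthetic X-band example: 3.2 cm wavelength, 4 kHz PRF, 5 m/s target
wavelength, prf, v_true = 0.032, 4000.0, 5.0
n = np.arange(64)
fd = 2.0 * v_true / wavelength                     # Doppler shift in Hz
z = np.exp(2j * np.pi * fd * n / prf)
print(pulse_pair_velocity(z, prf, wavelength))     # recovers ~5.0 m/s
```

Velocities are unambiguous only up to the Nyquist velocity wavelength*prf/4 (32 m/s here), which is why spaceborne and airborne designs trade PRF against range coverage.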

  7. Application of velocity filtering to optical-flow passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.
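
    The space-domain shift-and-add idea, i.e. the constant-speed matched filter at the core of velocity filtering, can be sketched as follows. This is a simplified one-row-per-frame illustration, not the paper's implementation:

```python
import numpy as np

def shift_and_add(frames, vx):
    """Shift-and-add matched filter for a candidate image-plane velocity:
    shift frame k back by k*vx pixels and sum, so a point moving at vx
    integrates coherently while background noise averages out."""
    acc = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames):
        acc += np.roll(frame, -k * vx, axis=1)
    return acc / len(frames)

# a point target drifting 2 px/frame in 1-row "images", plus noise
rng = np.random.default_rng(1)
frames = []
for k in range(10):
    img = rng.normal(scale=0.5, size=(1, 64))
    img[0, 5 + 2 * k] += 3.0                       # moving target
    frames.append(img)

stack = shift_and_add(frames, vx=2)
print(int(np.argmax(stack)))                       # target recovered at column 5
```

Sweeping `vx` over a set of candidate speeds and keeping the sharpest peak is what turns this into a bank of velocity filters; in the optical-flow application the candidate speeds map to candidate depths.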

  8. Streamflow Observations From Cameras: Large-Scale Particle Image Velocimetry or Particle Tracking Velocimetry?

    NASA Astrophysics Data System (ADS)

    Tauro, F.; Piscopia, R.; Grimaldi, S.

    2017-12-01

    Image-based methodologies, such as large scale particle image velocimetry (LSPIV) and particle tracking velocimetry (PTV), have increased our ability to noninvasively conduct streamflow measurements by affording spatially distributed observations at high temporal resolution. However, progress in optical methodologies has not been paralleled by the implementation of image-based approaches in environmental monitoring practice. We attribute this fact to the sensitivity of LSPIV, by far the most frequently adopted algorithm, to visibility conditions and to the occurrence of visible surface features. In this work, we test both LSPIV and PTV on a data set of 12 videos captured in a natural stream wherein artificial floaters are homogeneously and continuously deployed. Further, we apply both algorithms to a video of a high flow event on the Tiber River, Rome, Italy. In our application, we propose a modified PTV approach that only takes into account realistic trajectories. Based on our findings, LSPIV largely underestimates surface velocities with respect to PTV in both favorable (12 videos in a natural stream) and adverse (high flow event in the Tiber River) conditions. On the other hand, PTV is in closer agreement than LSPIV with benchmark velocities in both experimental settings. In addition, the accuracy of PTV estimations can be directly related to the transit of physical objects in the field of view, thus providing tangible data for uncertainty evaluation.

  9. Adaptive spectral filtering of PIV cross correlations

    NASA Astrophysics Data System (ADS)

    Giarra, Matthew; Vlachos, Pavlos; Aether Lab Team

    2016-11-01

    Using cross correlations (CCs) in particle image velocimetry (PIV) assumes that tracer particles in interrogation regions (IRs) move with the same velocity. But this assumption is nearly always violated because real flows exhibit velocity gradients, which degrade the signal-to-noise ratio (SNR) of the CC and are a major driver of error in PIV. Iterative methods help reduce these errors, but even they can fail when gradients are large within individual IRs. We present an algorithm to mitigate the effects of velocity gradients on PIV measurements. Our algorithm is based on a model of the CC, which predicts a relationship between the PDF of particle displacements and the variation of the correlation's SNR across the Fourier spectrum. We give an algorithm to measure this SNR from the CC, and use this insight to create a filter that suppresses the low-SNR portions of the spectrum. Our algorithm extends to the ensemble correlation, where it accelerates the convergence of the measurement and also reveals the PDF of displacements of the ensemble (and therefore of statistical metrics like diffusion coefficient). Finally, our model provides theoretical foundations for a number of "rules of thumb" in PIV, like the quarter-window rule.

  10. Fast Plane Wave 2-D Vector Flow Imaging Using Transverse Oscillation and Directional Beamforming.

    PubMed

    Jensen, Jonas; Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Ewertsen, Caroline; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-07-01

    Several techniques can estimate the 2-D velocity vector in ultrasound. Directional beamforming (DB) estimates blood flow velocities with a higher precision and accuracy than transverse oscillation (TO), but at the cost of a high beamforming load when estimating the flow angle. In this paper, it is proposed to use TO to estimate an initial flow angle, which is then refined in a DB step. Velocity magnitude is estimated along the flow direction using cross correlation. It is shown that the suggested TO-DB method can improve the performance of velocity estimates compared with TO, and with a beamforming load, which is 4.6 times larger than for TO and seven times smaller than for conventional DB. Steered plane wave transmissions are employed for high frame rate imaging, and parabolic flow with a peak velocity of 0.5 m/s is simulated in straight vessels at beam-to-flow angles from 45° to 90°. The TO-DB method estimates the angle with a bias and standard deviation (SD) less than 2°, and the SD of the velocity magnitude is less than 2%. When using only TO, the SD of the angle ranges from 2° to 17° and for the velocity magnitude up to 7%. Bias of the velocity magnitude is within 2% for TO and slightly larger but within 4% for TO-DB. The same trends are observed in measurements although with a slightly larger bias. Simulations of realistic flow in a carotid bifurcation model provide visualization of complex flow, and the spread of velocity magnitude estimates is 7.1 cm/s for TO-DB, while it is 11.8 cm/s using only TO. However, velocities for TO-DB are underestimated at peak systole as indicated by a regression value of 0.97 for TO and 0.85 for TO-DB. An in vivo scanning of the carotid bifurcation is used for vector velocity estimations using TO and TO-DB. The SD of the velocity profile over a cardiac cycle is 4.2% for TO and 3.2% for TO-DB.
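
    The cross-correlation step used to estimate the velocity magnitude along the flow direction can be illustrated in 1-D with subsample (parabolic) peak interpolation. The signals below are synthetic stand-ins, not beamformed ultrasound lines:

```python
import numpy as np

def xcorr_displacement(a, b):
    """Estimate the shift of signal b relative to a via cross-correlation,
    refined with three-point parabolic (subsample) peak interpolation."""
    c = np.correlate(b, a, mode='full')
    k = int(np.argmax(c))
    if 0 < k < len(c) - 1:                         # parabolic refinement
        denom = c[k - 1] - 2 * c[k] + c[k + 1]
        k = k + 0.5 * (c[k - 1] - c[k + 1]) / denom
    return k - (len(a) - 1)

# speckle-like line shifted by 3 samples between two emissions
rng = np.random.default_rng(2)
a = rng.normal(size=128)
b = np.roll(a, 3)
lag = xcorr_displacement(a, b)
print(round(lag, 2))                               # close to 3.0
```

Dividing the estimated shift by the time between emissions gives the velocity magnitude; beamforming the lines along the TO-estimated flow angle is what makes the 1-D correlation meaningful.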

  11. North American Crust and Upper Mantle Structure Imaged Using an Adaptive Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Eilon, Z.; Fischer, K. M.; Dalton, C. A.

    2017-12-01

    We present a methodology for imaging upper mantle structure using a Bayesian approach that incorporates a novel combination of seismic data types and an adaptive parameterization based on piecewise discontinuous splines. Our inversion algorithm lays the groundwork for improved seismic velocity models of the lithosphere and asthenosphere by harnessing increased computing power alongside sophisticated data analysis, with the flexibility to include multiple data types with complementary resolution. Our new method has been designed to simultaneously fit P-s and S-p converted phases and Rayleigh wave phase velocities measured from ambient noise (periods 6-40 s) and earthquake sources (periods 30-170 s). Careful processing of the body wave data isolates the signals from velocity gradients between the mid-crust and 250 km depth. We jointly invert the body and surface wave data to obtain detailed 1-D velocity models that include robustly imaged mantle discontinuities. Synthetic tests demonstrate that S-p phases are particularly important for resolving mantle structure, while surface waves capture absolute velocities with resolution better than 0.1 km/s. By treating data noise as an unknown parameter and by generating posterior parameter distributions, model trade-offs and uncertainties are fully captured by the inversion. We apply the method to stations across the northwest and north-central United States, finding that the imaged structure improves upon existing models by sharpening the vertical resolution of absolute velocity profiles and offering robust uncertainty estimates. In the tectonically active northwestern US, a strong velocity drop immediately beneath the Moho connotes thin (<70 km) lithosphere and a sharp lithosphere-asthenosphere transition; the asthenospheric velocity profile here matches observations at mid-ocean ridges. Within the Wyoming and Superior cratons, our models reveal mid-lithospheric velocity gradients indicative of thermochemical cratonic layering, but the lithosphere-asthenosphere boundary is relatively gradual. This flexible method holds promise for an increasingly detailed understanding of the lithosphere-asthenosphere system.

  12. Estimation of the velocity and trajectory of three-dimensional reaching movements from non-invasive magnetoencephalography signals

    NASA Astrophysics Data System (ADS)

    Yeom, Hong Gi; Sic Kim, June; Chung, Chun Kee

    2013-04-01

    Objective. Studies on non-invasive brain-machine interfaces that control prosthetic devices via movement intentions are at a very early stage. Here, we aimed to estimate three-dimensional arm movements from magnetoencephalography (MEG) signals with high accuracy. Approach. Whole-head MEG signals were acquired during three-dimensional reaching movements (center-out paradigm). For movement decoding, we selected 68 MEG channels in motor-related areas, which were band-pass filtered into four subfrequency bands (0.5-8, 9-22, 25-40 and 57-97 Hz). After filtering, the signals were resampled, and the 11 data points preceding the current data point were used as features for estimating velocity. Multiple linear regression was used to estimate movement velocities, and movement trajectories were calculated by integrating the estimated velocities. We evaluated our results by calculating correlation coefficients (r) between real and estimated velocities. Main results. Movement velocities could be estimated from the low-frequency MEG signals (0.5-8 Hz) with significant and considerably high accuracy (p < 0.001, mean r > 0.7). We also showed that preceding (60-140 ms) MEG signals are important for estimating current movement velocities and that brain-signal intervals of 200-300 ms are sufficient for movement estimation. Significance. These results imply that disabled people will be able to control prosthetic devices without surgery in the near future.
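
    A toy version of the decoding scheme (lagged channel samples as features for a multiple linear regression on velocity) might look like the sketch below. The "channels" are synthetic delayed copies of the velocity, not MEG data, and the helper name is illustrative:

```python
import numpy as np

def build_lagged_features(signals, n_lags=11):
    """Stack a window of n_lags consecutive samples of every channel into
    one feature row per time point (mirroring the paper's use of 11
    preceding data points as features)."""
    T, C = signals.shape
    rows = [signals[t - n_lags + 1:t + 1].ravel() for t in range(n_lags - 1, T)]
    return np.asarray(rows)

# toy decode: 3 "channels" whose lagged mixture encodes a 1-D velocity
rng = np.random.default_rng(4)
T = 500
velocity = np.sin(np.linspace(0, 8 * np.pi, T))
signals = np.stack([np.roll(velocity, k) for k in (1, 3, 5)], axis=1)
signals += rng.normal(scale=0.05, size=signals.shape)

X = build_lagged_features(signals, n_lags=11)
y = velocity[10:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)
r = np.corrcoef(X @ w, y)[0, 1]
print(r > 0.95)                                    # strong linear decode
```

Integrating the decoded velocities over time, as in the paper, then yields the estimated trajectory.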

  13. Effects of red blood cell aggregates dissociation on the estimation of ultrasound speckle image velocimetry.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Paeng, Dong-Guk; Lee, Sang-Joon

    2014-08-01

    The ultrasound speckle image of blood arises mainly from red blood cells (RBCs), which tend to form aggregates. RBC aggregates are separated into individual cells when the shear force exceeds a certain value. The dissociation of RBC aggregates influences the performance of the ultrasound speckle image velocimetry (SIV) technique, in which a cross-correlation algorithm is applied to speckle images to obtain velocity field information. The present study investigates the effect of the dissociation of RBC aggregates on the estimation quality of the SIV technique. Ultrasound B-mode images were captured from porcine blood circulating in a mock-up flow loop at varying flow rates. To verify the measurement performance of the SIV technique, the centerline velocity measured by SIV was compared with that measured from Doppler spectrograms. The dissociation of RBC aggregates was estimated from the decorrelation of speckle patterns, in which the subsequent window was shifted by the speckle displacement to compensate for decorrelation caused by in-plane loss of speckle patterns. The decorrelation of speckles increases considerably with shear rate, and its variation differs along the radial direction. Because the dissociation of RBC aggregates changes the ultrasound speckles, the estimation quality of the SIV technique is significantly correlated with the decorrelation of speckles. This degradation of measurement quality may be mitigated by increasing the data acquisition rate. This study should be useful for the simultaneous measurement of hemodynamic and hemorheological information of blood flows using only speckle images. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Preliminary comparison between real-time in-vivo spectral and transverse oscillation velocity estimates

    NASA Astrophysics Data System (ADS)

    Pedersen, Mads Møller; Pihl, Michael Johannes; Haugaard, Per; Hansen, Jens Munk; Lindskov Hansen, Kristoffer; Bachmann Nielsen, Michael; Jensen, Jørgen Arendt

    2011-03-01

    Spectral velocity estimation is considered the gold standard in medical ultrasound. Peak systole (PS), end diastole (ED), and resistive index (RI) are used clinically. Angle correction is performed using a manually set flow angle. With Transverse Oscillation (TO) velocity estimates, the flow angle, peak systole (PSTO), end diastole (EDTO), and resistive index (RITO) are estimated. This study investigates whether these clinical parameters are estimated equally well using spectral and TO data. The right common carotid arteries of three healthy volunteers were scanned longitudinally. Average TO flow angles and std were calculated { 52+/-18 ; 55+/-23 ; 60+/-16 }°. Spectral angles { 52 ; 56 ; 52 }° were obtained from the B-mode images. Obtained values are: PSTO { 76+/-15 ; 89+/-28 ; 77+/-7 } cm/s, spectral PS { 77 ; 110 ; 76 } cm/s, EDTO { 10+/-3 ; 14+/-8 ; 15+/-3 } cm/s, spectral ED { 18 ; 13 ; 20 } cm/s, RITO { 0.87+/-0.05 ; 0.79+/-0.21 ; 0.79+/-0.06 }, and spectral RI { 0.77 ; 0.88 ; 0.73 }. Vector angles are within +/- two std of the spectral angle. TO velocity estimates are within +/- three std of the spectral estimates. RITO values are within +/- two std of the spectral estimates. Preliminary data indicate that the TO and spectral velocity estimates perform equally well. With TO there is no manual angle setting and no flow angle limitation. TO velocity estimation can also automatically handle situations where the angle varies over the cardiac cycle. More detailed temporal and spatial vector estimates with diagnostic potential are available with TO velocity estimation.

  15. Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.

    PubMed

    Pathak, Biswajit; Boruah, Bosanta R

    2017-12-01

    Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt.16, 055403 (2014)JOOPDB0150-536X10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
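
    The core of zonal estimation, converting measured slopes into phase values by least squares, can be sketched in 1-D. This is a much-simplified stand-in for the Southwell or Pathak-Boruah grid geometries, with an illustrative function name:

```python
import numpy as np

def reconstruct_1d(slopes, h=1.0):
    """Least-squares zonal reconstruction in 1-D: finite-difference slope
    equations plus a piston constraint (the overall offset is unobservable
    from slopes, so we pin the mean phase to zero)."""
    n = len(slopes) + 1
    A = np.zeros((len(slopes) + 1, n))
    for i in range(len(slopes)):                   # (phi[i+1] - phi[i]) / h = s[i]
        A[i, i], A[i, i + 1] = -1.0 / h, 1.0 / h
    A[-1, :] = 1.0                                 # piston constraint: sum(phi) = 0
    b = np.append(slopes, 0.0)
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi

# quadratic wavefront phi(x) = x**2 sampled on 9 points
x = np.arange(9.0)
true_phi = x**2 - np.mean(x**2)
slopes = np.diff(x**2)                             # exact forward differences
phi = reconstruct_1d(slopes)
print(np.allclose(phi, true_phi))                  # True
```

In 2-D the same least-squares system couples row and column slopes, and it is precisely how that discrete geometry is set up that distinguishes the algorithms compared in the paper and drives their error propagation.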

  16. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    NASA Astrophysics Data System (ADS)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we describe a method for developing an improved multi-objective genetic algorithm (NSGA-SBX) and applying it to two numerical tests to verify the advantages of the algorithm. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in case of inconsistent discontinuities between these models, joint inversion can retain the advantages of individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low velocity layers and a high conductivity layer are observed in the middle lower crust, probably indicating the mid-crustal channel flow. Along the Longmenshan fault zone, a high conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.

  17. Novel mathematical algorithm for pupillometric data analysis.

    PubMed

    Canver, Matthew C; Canver, Adam C; Revere, Karen E; Amado, Defne; Bennett, Jean; Chung, Daniel C

    2014-01-01

    Pupillometry is used clinically to evaluate retinal and optic nerve function by measuring the pupillary response to light stimuli. We have developed a mathematical algorithm to automate and expedite the analysis of non-filtered, non-calculated pupillometric data obtained from mouse pupillary light reflex recordings, i.e., dynamic pupil diameter recordings following exposure to varying light intensities. The non-filtered, non-calculated pupillometric data are filtered through a low-pass finite impulse response (FIR) filter. Thresholding is used to remove data caused by eye blinking, loss of pupil tracking, and/or head movement. Twelve physiologically relevant parameters were extracted from the collected data: (1) baseline diameter, (2) minimum diameter, (3) response amplitude, (4) re-dilation amplitude, (5) percent of baseline diameter, (6) response time, (7) re-dilation time, (8) average constriction velocity, (9) average re-dilation velocity, (10) maximum constriction velocity, (11) maximum re-dilation velocity, and (12) onset latency. No significant differences were noted between parameters derived from algorithm-calculated values and manually derived results (p ≥ 0.05). This mathematical algorithm will expedite endpoint data derivation and eliminate human error in the manual calculation of pupillometric parameters from non-filtered, non-calculated pupillometric values. Subsequently, these values can be used as reference metrics for characterizing the natural history of retinal disease. Furthermore, it will be instrumental in the assessment of functional visual recovery in humans and pre-clinical models of retinal degeneration and optic nerve disease following pharmacological or gene-based therapies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
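
    A few of the twelve parameters can be extracted from an already-filtered trace as sketched below; the synthetic trace and the function name are illustrative, not the authors' code:

```python
import numpy as np

def pupil_params(t, d, stim_onset):
    """Extract baseline diameter, minimum diameter, response amplitude, and
    maximum constriction velocity from a filtered pupil-diameter trace."""
    baseline = d[t < stim_onset].mean()
    minimum = d.min()
    amplitude = baseline - minimum
    velocity = np.gradient(d, t)
    max_constriction = velocity.min()              # most negative slope
    return baseline, minimum, amplitude, max_constriction

# synthetic reflex: 2 mm baseline constricting toward 1 mm after light at t = 1 s
t = np.linspace(0.0, 5.0, 501)
d = np.where(t < 1.0, 2.0, 1.0 + np.exp(-(t - 1.0) / 0.3))
base, mn, amp, vmax = pupil_params(t, d, stim_onset=1.0)
print(round(base, 2), round(amp, 2))               # 2.0 and about 1.0
```

The remaining parameters (re-dilation amplitude, response and re-dilation times, onset latency, and so on) follow the same pattern of thresholding and differentiating the filtered trace.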

  18. Blooming Trees: Substructures and Surrounding Groups of Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Yu, Heng; Diaferio, Antonaldo; Serra, Ana Laura; Baldi, Marco

    2018-06-01

    We develop the Blooming Tree Algorithm, a new technique that uses spectroscopic redshift data alone to identify the substructures and the surrounding groups of galaxy clusters, along with their member galaxies. Based on the estimated binding energy of galaxy pairs, the algorithm builds a binary tree that hierarchically arranges all of the galaxies in the field of view. The algorithm searches for buds, corresponding to gravitational potential minima on the binary tree branches; for each bud, the algorithm combines the number of galaxies, their velocity dispersion, and their average pairwise distance into a parameter that discriminates between the buds that do not correspond to any substructure or group, and thus eventually die, and the buds that correspond to substructures and groups, and thus bloom into the identified structures. We test our new algorithm with a sample of 300 mock redshift surveys of clusters in different dynamical states; the clusters are extracted from a large cosmological N-body simulation of a ΛCDM model. We limit our analysis to substructures and surrounding groups identified in the simulation with mass larger than 10^13 h^-1 M_⊙. With mock redshift surveys with 200 galaxies within 6 h^-1 Mpc from the cluster center, the technique recovers 80% of the real substructures and 60% of the surrounding groups; in 57% of the identified structures, at least 60% of the member galaxies of the substructures and groups belong to the same real structure. These results improve by roughly a factor of two the performance of the best substructure identification algorithm currently available, the σ plateau algorithm, and suggest that our Blooming Tree Algorithm can be an invaluable tool for detecting substructures of galaxy clusters and investigating their complex dynamics.

  19. Experimental & Numerical Modeling of Non-combusting Model Firebrands' Transport

    NASA Astrophysics Data System (ADS)

    Tohidi, Ali; Kaye, Nigel

    2016-11-01

    Fire spotting is one of the major mechanisms of wildfire spread. Three phases of this phenomenon are firebrand formation and break-off from burning vegetation, lofting and downwind transport of firebrands through the velocity field of the wildfire, and spot fire ignition upon landing. The lofting and downwind transport phase is modeled by conducting large-scale wind tunnel experiments. Non-combusting rod-like model firebrands with different aspect ratios are released within the velocity field of a jet in a boundary layer cross-flow that approximates the wildfire velocity field. Characteristics of the firebrand dispersion are quantified by capturing the full trajectory of the model firebrands using the developed image processing algorithm. The results show that the lofting height has a direct impact on the maximum travel distance of the model firebrands. The experimental results are also utilized for validation of a highly scalable coupled stochastic and parametric firebrand flight model that couples the LES-resolved velocity field of a jet-in-nonuniform-cross-flow (JINCF) with a 3D fully deterministic 6-degrees-of-freedom debris transport model. The validation results show that the developed numerical model is capable of estimating average statistics of the firebrands' flight. The authors thank the National Science Foundation for support under Grant No. 1200560. The presenter (Ali Tohidi) also thanks Dr. Michael Gollner of the University of Maryland, College Park, for conference participation support.

  20. A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales

    NASA Astrophysics Data System (ADS)

    Elliott, Frank W.; Majda, Andrew J.

    1995-03-01

    A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm, with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.

  1. Collision detection for spacecraft proximity operations

    NASA Technical Reports Server (NTRS)

    Vaughan, Robin M.; Bergmann, Edward V.; Walker, Bruce K.

    1991-01-01

    A new collision detection algorithm has been developed for use when two spacecraft are operating in the same vicinity. The two spacecraft are modeled as unions of convex polyhedra, where the resulting polyhedron may be either convex or nonconvex. The relative motion of the two spacecraft is assumed to be such that one vehicle is moving with constant linear and angular velocity with respect to the other. Contacts between the vertices, faces, and edges of the polyhedra representing the two spacecraft are shown to occur when the value of one or more of a set of functions is zero. The collision detection algorithm is then formulated as a search for the zeros (roots) of these functions. Special properties of the functions for the assumed relative trajectory are exploited to expedite the zero search. The new algorithm is the first algorithm that can solve the collision detection problem exactly for relative motion with constant angular velocity. This is a significant improvement over models of rotational motion used in previous collision detection algorithms.
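
    The reduction of collision detection to a root search can be illustrated with a single contact function and plain bisection. The contact function below is a hypothetical toy (a vertex approaching a fixed plane); the paper's algorithm handles many such functions at once and exploits trajectory-specific structure to bracket their roots:

```python
def find_contact_time(f, t0, t1, tol=1e-10):
    """Bracketed bisection for a zero of a contact function f(t):
    f > 0 means the features are separated, f = 0 is the contact instant.
    Assumes a sign change is already bracketed on [t0, t1]."""
    a, b = t0, t1
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:                       # root lies in [a, m]
            b = m
        else:                                      # root lies in [m, b]
            a = m
    return 0.5 * (a + b)

# vertex approaching a plane at x = 1 with constant closing velocity 0.5
gap = lambda t: 1.0 - 0.5 * t                      # separation distance
t_hit = find_contact_time(gap, 0.0, 4.0)
print(round(t_hit, 6))                             # 2.0
```

Reporting the earliest such root over all vertex-face and edge-edge contact functions gives the first collision time.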

  2. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    2015-01-01

    This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity-updating formula of the algorithm in order to improve its searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with a modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated and compared against the general PSO algorithm. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. PMID:26366168
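
    A velocity-updating formula with an extra adjusting factor can be sketched generically as below, here minimizing a simple sphere function rather than a filter-phase objective. The factor k and the coefficient values are illustrative; the paper's exact MPSO formula may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

def update_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, k=0.9):
    """PSO velocity update with an additional scaling factor k applied to
    the whole step (a generic stand-in for the paper's MPSO adjustment)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return k * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))

# minimize f(x) = sum(x**2) with 20 particles in 2-D
x = rng.uniform(-5, 5, (20, 2))
v = np.zeros_like(x)
pbest, pcost = x.copy(), (x**2).sum(axis=1)
for _ in range(200):
    gbest = pbest[np.argmin(pcost)]                # best position found so far
    v = update_velocity(v, x, pbest, gbest)
    x = x + v
    cost = (x**2).sum(axis=1)
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]

print(pcost.min() < 1e-2)                          # swarm converged near the origin
```

For the filter problem, each particle would instead hold the all-pass coefficients, and the objective would measure the deviation of the resulting phase response from the desired one.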

  3. Calibration of a rainfall-runoff hydrological model and flood simulation using data assimilation

    NASA Astrophysics Data System (ADS)

    Piacentini, A.; Ricci, S. M.; Thual, O.; Coustau, M.; Marchandise, A.

    2010-12-01

    Rainfall-runoff models are crucial tools for the long-term assessment of flash floods and for real-time forecasting. This work focuses on the calibration of a distributed parsimonious event-based rainfall-runoff model using data assimilation. The model combines an SCS-derived runoff model and a Lag and Route routing model for each cell of a regular grid mesh. The SCS-derived runoff model is parametrized by the initial water deficit, the discharge coefficient for the soil reservoir, and a lagged discharge coefficient. The Lag and Route routing model is parametrized by the travel velocity and the lag parameter. These parameters are assumed to be constant for a given catchment, except for the initial water deficit and the travel velocity, which are event-dependent (land use, soil type, and initial moisture conditions). In the present work, a BLUE filtering technique was used to calibrate the initial water deficit and the travel velocity for each flood event by assimilating the first available discharge measurements at the catchment outlet. The advantages of the BLUE algorithm are its low computational cost and its convenient implementation, especially in the context of the calibration of a reduced number of parameters. The assimilation algorithm was applied to two Mediterranean catchment areas of different size and dynamics: Gardon d'Anduze and Lez. The Lez catchment, with a 114 km2 drainage area, is located upstream of Montpellier. It is a karstic catchment mainly affected by floods in autumn during intense rainstorms, with short lag times and high discharge peaks (up to 480 m3/s in September 2005). The Gardon d'Anduze catchment, mostly granite and schist, with a 545 km2 drainage area, lies over the départements of Lozère and Gard. It is often affected by flash and devastating floods (up to 3000 m3/s in September 2002). The discharge observations at the beginning of the flood event are assimilated so that the BLUE algorithm provides optimal values of the initial water deficit and the travel velocity before the flood peak. These optimal values are used for a new simulation of the event in forecast mode (under the assumption of perfect rainfall). On both catchments, it was shown over a significant number of flood events that the data assimilation procedure improves the flood peak forecast. The improvement is globally more important for the Gardon d'Anduze catchment, where the flood events are stronger. The peak can be forecasted up to 36 hours ahead of time by assimilating very few observations (up to 4) during the rise of the water level. For multiple-peak events, the assimilation of the observations from the first peak leads to a significant improvement of the second peak simulation. It was also shown that the flood rise is often faster in reality than it is represented by the model. In this case, and when the flood peak is underestimated in the simulation, the use of the first observations can mislead the data assimilation algorithm. Careful estimation of the observation and background error variances enabled satisfactory use of data assimilation in these complex cases, even though it does not allow for model error correction.
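
    The BLUE analysis step used for the calibration can be sketched for a two-parameter control vector. The linear observation operator H, the covariances, and the numbers below are illustrative toys, not the catchment model's actual sensitivities:

```python
import numpy as np

def blue_update(xb, B, y, H, R):
    """BLUE analysis step: xa = xb + K (y - H xb), with gain
    K = B H^T (H B H^T + R)^(-1).  The control vector holds the two
    calibrated parameters (initial water deficit, travel velocity)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# toy linear model: two early discharge observations depend linearly on
# the two parameters
xb = np.array([50.0, 2.0])                         # background (prior) guess
B = np.diag([100.0, 0.25])                         # background error covariance
H = np.array([[0.5, 10.0], [0.8, 5.0]])            # illustrative observation operator
R = np.diag([1.0, 1.0])                            # observation error covariance
x_true = np.array([40.0, 2.5])
y = H @ x_true                                     # synthetic "observed" discharges
xa = blue_update(xb, B, y, H, R)
print(np.round(xa, 1))                             # pulled toward the true [40, 2.5]
```

In the paper the relation between parameters and discharge is nonlinear, so H is effectively a model sensitivity around the background, and the quality of B and R decides how far the analysis trusts the first observations.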

  4. FPGA-based architecture for motion recovering in real-time

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Maya-Rueda, Selene E.; Torres-Huitzil, Cesar

    2002-03-01

    A key problem in the computer vision field is the measurement of object motion in a scene. The main goal is to compute an approximation of the 3D motion from the analysis of an image sequence. Once computed, this information can be used as a basis to reach higher level goals in different applications. Motion estimation algorithms pose a significant computational load for sequential processors, limiting their use in practical applications. In this work we propose a hardware architecture for real-time motion estimation based on FPGA technology. The technique used for motion estimation is optical flow, chosen for its accuracy and the density of its velocity estimates; other techniques are also being explored. The architecture is composed of parallel modules working in a pipeline scheme to reach high throughput rates, near gigaflops. The modules are organized in a regular structure to provide a high degree of flexibility to cover different applications. Some results are presented, and the real-time performance is discussed and analyzed. The architecture is prototyped on an FPGA board with a Virtex device interfaced to a digital imager.
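As a software point of reference for the kind of optical-flow computation such an architecture accelerates, here is a minimal Lucas-Kanade-style least-squares estimate over one global window on synthetic frames. This is a textbook sketch of gradient-based optical flow, not the paper's FPGA pipeline:

```python
import numpy as np

# Solve  [Ix Iy] [u v]^T = -It  in the least-squares sense over the image.
def lucas_kanade(frame0, frame1):
    Ix = np.gradient(frame0, axis=1)   # spatial gradient in x
    Iy = np.gradient(frame0, axis=0)   # spatial gradient in y
    It = frame1 - frame0               # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                        # (u, v) in pixels per frame

# Synthetic test: a smooth pattern shifted by ~1 pixel to the right.
xg, yg = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
frame0 = np.sin(0.2 * xg) + np.cos(0.15 * yg)
frame1 = np.sin(0.2 * (xg - 1.0)) + np.cos(0.15 * yg)  # moved +1 px in x

u, v = lucas_kanade(frame0, frame1)
```

In hardware, the gradient, multiply-accumulate, and solve stages map naturally onto the parallel pipeline modules the paper describes.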

  5. Analyzing angular distributions for two-step dissociation mechanisms in velocity map imaging.

    PubMed

    Straus, Daniel B; Butler, Lynne M; Alligood, Bridget W; Butler, Laurie J

    2013-08-15

    Increasingly, velocity map imaging is becoming the method of choice to study photoinduced molecular dissociation processes. This paper introduces an algorithm to analyze the measured net speed, P(vnet), and angular, β(vnet), distributions of the products from a two-step dissociation mechanism, where the first step but not the second is induced by absorption of linearly polarized laser light. Typically, this might be the photodissociation of a C-X bond (X = halogen or other atom) to produce an atom and a momentum-matched radical that has enough internal energy to subsequently dissociate (without the absorption of an additional photon). It is this second step, the dissociation of the unstable radicals, that one wishes to study, but the measured net velocity of the final products is the vector sum of the velocity imparted to the radical in the primary photodissociation (which is determined by taking data on the momentum-matched atomic cophotofragment) and the additional velocity vector imparted in the subsequent dissociation of the unstable radical. The algorithm allows one to determine, from the forward-convolution fitting of the net velocity distribution, the distribution of velocity vectors imparted in the second step of the mechanism. One can thus deduce the secondary velocity distribution, characterized by a speed distribution P(v1,2°) and an angular distribution I(θ2°), where θ2° is the angle between the dissociating radical's velocity vector and the additional velocity vector imparted to the product detected from the subsequent dissociation of the radical.

  6. MEMS SoC: observer-based coplanar gyro-free inertial measurement unit

    NASA Astrophysics Data System (ADS)

    Chen, Tsung-Lin; Park, Sungsu

    2005-09-01

    This paper presents a novel design of a coplanar gyro-free inertial measurement unit (IMU) that consists of seven to nine single-axis linear accelerometers, and it can be utilized to perform the six-DOF measurements for an object in motion. Unlike other gyro-free IMUs, this design uses redundant accelerometers and state estimation techniques to facilitate in situ and mass fabrication of the employed accelerometers. The alignment error from positioning accelerometers onto a measurement unit and the fabrication cost of an IMU can be greatly reduced. The outputs of the proposed design are three linear accelerations and three angular velocities. Compared with other gyro-free IMUs, the proposed design uses fewer integration operations and thus improves its sensing resolution and reduces drift. The sensing resolution of a gyro-free IMU depends on the sensing resolution of the employed accelerometers as well as the size of the measurement unit. Simulation results indicate that the sensing resolution of the proposed design is 2° s^-1 for the angular velocity and 10 μg for the linear acceleration when nine single-axis accelerometers, each with 10 μg sensing resolution, are deployed on a 4 inch diameter disc. Also, thanks to the iterative EKF algorithm, the angle estimation error is within 10^-3 deg at 2 s.

  7. Robust, automatic GPS station velocities and velocity time series

    NASA Astrophysics Data System (ADS)

    Blewitt, G.; Kreemer, C.; Hammond, W. C.

    2014-12-01

    Automation in GPS coordinate time series analysis makes results more objective and reproducible, but not necessarily as robust as the human eye to detect problems. Moreover, it is not a realistic option to manually scan our current load of >20,000 time series per day. This motivates us to find an automatic way to estimate station velocities that is robust to outliers, discontinuities, seasonality, and noise characteristics (e.g., heteroscedasticity). Here we present a non-parametric method based on the Theil-Sen estimator, defined as the median of velocities vij = (xj-xi)/(tj-ti) computed between all pairs (i, j). Theil-Sen estimators produce statistically identical solutions to ordinary least squares for normally distributed data, but they can tolerate up to 29% of the data being problematic. To mitigate seasonality, our proposed estimator only uses pairs approximately separated by an integer number of years, (N-δt) < (tj-ti) < (N+δt), where δt is chosen to be small enough to capture seasonality, yet large enough to reduce random error. We fix N=1 to maximally protect against discontinuities. In addition to estimating an overall velocity, we also use these pairs to estimate velocity time series. To test our methods, we process real data sets that have already been used with velocities published in the NA12 reference frame. Accuracy can be tested by the scatter of horizontal velocities in the North American plate interior, which is known to be stable to ~0.3 mm/yr. This presents new opportunities for time series interpretation. For example, the pattern of velocity variations at the interannual scale can help separate tectonic from hydrological processes. Without any step detection, velocity estimates prove to be robust for stations affected by the Mw7.2 2010 El Mayor-Cucapah earthquake, and velocity time series show a clear change after the earthquake, without any of the usual parametric constraints, such as relaxation of postseismic velocities to their preseismic values.
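The annual-pair median estimator described above can be sketched on a synthetic time series. The δt tolerance, noise level, and data below are illustrative and not the authors' implementation:

```python
import numpy as np

def annual_pair_velocity(t, x, N=1.0, dt=0.1):
    """t in years, x in mm; median slope over pairs separated by ~N years."""
    slopes = []
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            sep = t[j] - t[i]
            if N - dt < sep < N + dt:        # keep ~integer-year pairs only
                slopes.append((x[j] - x[i]) / sep)
    return float(np.median(slopes))

# Synthetic series: 3 mm/yr trend + 2 mm annual cycle + noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0, 0.05)                # 5 years of ~weekly positions
x = 3.0 * t + 2.0 * np.sin(2.0 * np.pi * t) + rng.normal(0.0, 0.5, t.size)

v = annual_pair_velocity(t, x)               # seasonal cycle largely cancels
```

Because pairs spanning roughly one year sample nearly the same phase of the seasonal cycle, the cycle cancels in each slope, and the median resists the remaining outliers.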

  8. Vision-Aided Inertial Navigation

    NASA Technical Reports Server (NTRS)

    Roumeliotis, Stergios I. (Inventor); Mourikis, Anastasios I. (Inventor)

    2017-01-01

    This document discloses, among other things, a system and method for implementing an algorithm to determine pose, velocity, acceleration or other navigation information using feature tracking data. The algorithm has computational complexity that is linear with the number of features tracked.

  9. Shear Elasticity and Shear Viscosity Imaging in Soft Tissue

    NASA Astrophysics Data System (ADS)

    Yang, Yiqun

    In this thesis, a new approach is introduced that provides estimates of shear elasticity and shear viscosity using time-domain measurements of shear waves in viscoelastic media. Simulations of shear wave particle displacements induced by an acoustic radiation force are accelerated significantly by a GPU. The acoustic radiation force is first calculated using the fast near field method (FNM) and the angular spectrum approach (ASA). The shear waves induced by the acoustic radiation force are then simulated in elastic and viscoelastic media using Green's functions. A parallel algorithm is developed to perform these calculations on a GPU, where the shear wave particle displacements at different observation points are calculated in parallel. The resulting speed increase enables rapid evaluation of shear waves at discrete points, in 2D planes, and for push beams with different spatial samplings and for different values of the f-number (f/#). The results of these simulations show that push beams with smaller f/# require a higher spatial sampling rate. The significant amount of acceleration achieved by this approach suggests that shear wave simulations with the Green's function approach are ideally suited for high-performance GPUs. Shear wave elasticity imaging determines the mechanical parameters of soft tissue by analyzing measured shear waves induced by an acoustic radiation force. To estimate the shear elasticity value, the widely used time-of-flight method calculates the correlation between shear wave particle velocities at adjacent lateral observation points. Although this method provides accurate estimates of the shear elasticity in purely elastic media, our experience suggests that the time-of-flight (TOF) method consistently overestimates the shear elasticity values in viscoelastic media because the combined effects of diffraction, attenuation, and dispersion are not considered. 
To address this problem, we have developed an approach that directly accounts for all of these effects when estimating the shear elasticity. This new approach simulates shear wave particle velocities using a Green's function-based approach for the Voigt model, where the shear elasticity and viscosity values are estimated using an optimization-based approach that compares measured shear wave particle velocities with simulated shear wave particle velocities in the time-domain. The results are evaluated on a point-by-point basis to generate images. There is good agreement between the simulated and measured shear wave particle velocities, where the new approach yields much better images of the shear elasticity and shear viscosity than the TOF method. The new estimation approach is accelerated with an approximate viscoelastic Green's function model that is evaluated with shear wave data obtained from in vivo human livers. Instead of calculating shear waves with combinations of different shear elasticities and shear viscosities, shear waves are calculated with different shear elasticities on the GPU and then convolved with a viscous loss model, which accelerates the calculation dramatically. The shear elasticity and shear viscosity values are then estimated using an optimization-based approach by minimizing the difference between measured and simulated shear wave particle velocities. Shear elasticity and shear viscosity images are generated at every spatial point in a two-dimensional (2D) field-of-view (FOV). The new approach is applied to measured shear wave data obtained from in vivo human livers, and the results show that this new approach successfully generates shear elasticity and shear viscosity images from this data. 
The results also indicate that the shear elasticity values estimated with this approach are significantly smaller than the values estimated with the conventional TOF method and that the new approach demonstrates more consistent values for these estimates compared with the TOF method. This experience suggests that the new method is an effective approach for estimating the shear elasticity and the shear viscosity in liver and in other soft tissue.

  10. Analysis and algorithms for a regularized Cauchy problem arising from a non-linear elliptic PDE for seismic velocity estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, M.K.; Fomel, S.B.; Sethian, J.A.

    2009-01-01

    In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimating the sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a long enough time interval to be of practical use. We used two approaches. The first approach is a finite-difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics (truncation of the Chebyshev series plays the same role), and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
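The Lax-Friedrichs ingredients mentioned above, neighbor averaging plus a centered flux term, can be illustrated on a generic 1D advection equation. This is a textbook sketch of the method, not the authors' seismic solver:

```python
import numpy as np

# u_t + a u_x = 0 on a periodic domain, advanced with Lax-Friedrichs:
# replace u_i by the neighbor average (the dissipative part, which damps
# high harmonics) plus a centered flux term.
def lax_friedrichs_step(u, a, dx, dt):
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return 0.5 * (up + um) - a * dt / (2.0 * dx) * (up - um)

nx, a = 200, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a                        # CFL number 0.5
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)     # smooth initial pulse
u = u0.copy()
for _ in range(100):
    u = lax_friedrichs_step(u, a, dx, dt)
```

On the periodic grid the scheme conserves the total sum exactly while its numerical dissipation lowers the peak, which is the same damping of high harmonics the abstract credits for stabilizing the ill-posed marching.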

  11. Detection and tracking of a moving target using SAR images with the particle filter-based track-before-detect algorithm.

    PubMed

    Gao, Han; Li, Jingwen

    2014-06-19

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to a true estimate. With a sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB.
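The track-before-detect idea, weighting particles by the raw-image likelihood instead of thresholded detections, can be sketched in a toy 1D scenario. All parameters are illustrative; a real SAR implementation would incorporate the SAR signal model as the paper describes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_particles = 30, 2000
true_pos, true_vel = 10.0, 1.0          # target trajectory (pixels, pixels/frame)
amp, sigma = 1.5, 1.0                   # weak blob amplitude vs. noise std

# Particle state: columns [position, velocity], initialized broadly.
p = np.column_stack([rng.uniform(0, 40, n_particles),
                     rng.normal(0, 2, n_particles)])

grid = np.arange(100.0)                 # pixel coordinates of a 1D "image"
est = []
for k in range(n_steps):
    # Raw frame: Gaussian noise plus a small blob at the true position.
    frame = rng.normal(0, sigma, grid.size)
    frame += amp * np.exp(-0.5 * (grid - (true_pos + true_vel * k)) ** 2)

    # Predict: constant-velocity motion with process noise.
    p[:, 0] += p[:, 1] + rng.normal(0, 0.1, n_particles)
    p[:, 1] += rng.normal(0, 0.05, n_particles)

    # Weight by the raw-image likelihood ratio at each particle's pixel
    # (no detection threshold, which is the essence of TBD).
    idx = np.clip(np.round(p[:, 0]).astype(int), 0, grid.size - 1)
    w = np.exp(frame[idx] * amp / sigma**2)
    w /= w.sum()

    est.append(float(np.sum(w * p[:, 0])))   # posterior-mean position

    # Multinomial resampling to avoid weight degeneracy.
    p = p[rng.choice(n_particles, n_particles, p=w)]
```

Because the filter accumulates evidence across frames before declaring a detection, it can follow targets too weak for any single-frame threshold, which is the regime the paper targets.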

  12. Detection and Tracking of a Moving Target Using SAR Images with the Particle Filter-Based Track-Before-Detect Algorithm

    PubMed Central

    Gao, Han; Li, Jingwen

    2014-01-01

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to a true estimate. With a sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640

  13. The artificial object detection and current velocity measurement using SAR ocean surface images

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Strotov, Valery; Ershov, Maksim; Muraviev, Vadim; Feldman, Alexander; Smirnov, Sergey

    2017-10-01

    Due to the fact that water surface covers wide areas, remote sensing is the most appropriate way of getting information about the ocean environment for vessel tracking, security purposes, ecological studies and others. Processing of synthetic aperture radar (SAR) images is extensively used for control and monitoring of the ocean surface. Image data can be acquired from Earth observation satellites, such as TerraSAR-X, ERS, and COSMO-SkyMed. Thus, SAR image processing can be used to solve many problems arising in this field of research. This paper discusses some of them, including ship detection, oil pollution control and ocean currents mapping. Due to the complexity of the problem, several specialized algorithms need to be developed. The oil spill detection algorithm consists of the following main steps: image preprocessing, detection of dark areas, parameter extraction and classification. The ship detection algorithm consists of the following main steps: prescreening, land masking, image segmentation combined with parameter measurement, ship orientation estimation and object discrimination. The proposed approach to ocean currents mapping is based on the Doppler effect. The results of computer modeling on real SAR images are presented. Based on these results it is concluded that the proposed approaches can be used in maritime applications.
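The Doppler-based current-mapping step rests on the standard monostatic relation between the Doppler (centroid) shift and the radial surface velocity. The carrier frequency below is roughly TerraSAR-X X-band, and the Doppler anomaly value is hypothetical:

```python
# Monostatic Doppler relation: a radial surface velocity v_r shifts the echo
# by f_dc = 2 v_r / lambda (two-way path), so  v_r = lambda * f_dc / 2.

c = 3.0e8                      # speed of light, m/s
f0 = 9.65e9                    # radar carrier frequency, Hz (~TerraSAR-X)
wavelength = c / f0            # ~0.031 m

f_dc = 60.0                    # measured Doppler anomaly, Hz (hypothetical)
v_r = wavelength * f_dc / 2.0  # radial surface current velocity, m/s
```

A few tens of hertz of Doppler anomaly at X-band thus corresponds to surface currents on the order of 1 m/s, which sets the sensitivity scale for this kind of mapping.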

  14. Vital sign sensing method based on EMD in terahertz band

    NASA Astrophysics Data System (ADS)

    Xu, Zhengwu; Liu, Tong

    2014-12-01

    Non-contact detection of respiration and heartbeat rates could be applied to find survivors trapped in a disaster or to the remote monitoring of the respiration and heartbeat of a patient. This study presents an improved algorithm that extracts the respiration and heartbeat rates of humans by utilizing terahertz radar, which further lessens the effects of noise, suppresses the cross-term, and enhances the detection accuracy. A human target echo model for the terahertz radar is first presented. Combining the over-sampling method, a low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. The down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that, compared with the previously presented vital sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for the detection of respiration and heartbeat signals in a complicated environment.
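The final step, reading the two rates off a spectral analysis of the recovered motion, can be illustrated on clean synthetic chest displacement. A plain FFT stands in here for the EMD and time-frequency machinery described above; rates and amplitudes are illustrative:

```python
import numpy as np

fs = 50.0                                  # sample rate, Hz
t = np.arange(0, 30, 1 / fs)               # 30 s observation window
f_resp, f_heart = 0.3, 1.2                 # Hz (18 breaths/min, 72 bpm)
# Chest motion: large slow respiration component plus a small heartbeat ripple.
motion = 4.0 * np.sin(2 * np.pi * f_resp * t) + 0.5 * np.sin(2 * np.pi * f_heart * t)

spec = np.abs(np.fft.rfft(motion))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Strongest peak below 0.5 Hz -> respiration; strongest in 0.8-2 Hz -> heartbeat.
resp_est = freqs[np.argmax(np.where(freqs < 0.5, spec, 0))]
heart_est = freqs[np.argmax(np.where((freqs > 0.8) & (freqs < 2.0), spec, 0))]
```

In the real algorithm the respiration component dominates the heartbeat by an order of magnitude, which is why the band separation (and the cross-term suppression discussed above) matters.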

  15. Dynamics of pairwise motions in the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.

    2016-10-01

    We present results of an analysis of dark matter (DM) pairwise velocity statistics in different Cosmic Web environments. We use the DM velocity and density field from the Millennium 2 simulation together with the NEXUS+ algorithm to segment the simulation volume into voxels, uniquely identifying one of the four possible environments: nodes, filaments, walls or cosmic voids. We show that the PDFs of the mean infall velocities v12, as well as their spatial dependence, together with the perpendicular and parallel velocity dispersions, bear a significant signal of the large-scale structure environment in which DM particle pairs are embedded. The pairwise flows are notably colder and have smaller mean magnitude in walls and voids, when compared to the much denser environments of filaments and nodes. We discuss our results, indicating that they are consistent with simple theoretical predictions for pairwise motions induced by the gravitational instability mechanism. Our results indicate that the Cosmic Web elements are coherent dynamical entities rather than just temporary geometrical associations. In addition, it should be possible to observationally test various Cosmic Web finding algorithms by segmenting available peculiar velocity data and studying the resulting pairwise velocity statistics.
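The mean infall velocity statistic v12 can be estimated from particle data roughly as follows: average the relative velocity of each pair projected onto the pair separation direction, within a separation bin. The data here are a synthetic toy field, not the Millennium 2 simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
pos = rng.uniform(0.0, 100.0, (n, 3))        # toy box, Mpc/h
# Toy velocity field: coherent infall toward the box center plus random motion.
center = np.full(3, 50.0)
vel = -0.5 * (pos - center) + rng.normal(0.0, 5.0, (n, 3))

def mean_pairwise_velocity(pos, vel, rmin, rmax):
    """v12 over pairs with separation in (rmin, rmax); negative = net infall."""
    i, j = np.triu_indices(len(pos), k=1)
    dr = pos[j] - pos[i]
    r = np.linalg.norm(dr, axis=1)
    sel = (r > rmin) & (r < rmax)
    rhat = dr[sel] / r[sel, None]            # unit separation vectors
    dv = vel[j][sel] - vel[i][sel]           # relative pair velocities
    return float(np.mean(np.sum(dv * rhat, axis=1)))

v12 = mean_pairwise_velocity(pos, vel, 10.0, 20.0)
```

Restricting the pair selection to particles in a given NEXUS+ environment class would give the per-environment statistics analyzed in the paper.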

  16. Estimating net joint torques from kinesiological data using optimal linear system theory.

    PubMed

    Runge, C F; Zajac, F E; Allum, J H; Risher, D W; Bryson, A E; Honegger, F

    1995-12-01

    Net joint torques (NJT) are frequently computed to provide insights into the motor control of dynamic biomechanical systems. An inverse dynamics approach is almost always used, whereby the NJT are computed from 1) kinematic measurements (e.g., position of the segments), 2) kinetic measurements (e.g., ground reaction forces) that are, in effect, constraints defining unmeasured kinematic quantities based on a dynamic segmental model, and 3) numerical differentiation of the measured kinematics to estimate velocities and accelerations that are, in effect, additional constraints. Due to errors in the measurements, the segmental model, and the differentiation process, estimated NJT rarely produce the observed movement in a forward simulation when the dynamics of the segmental system are inherently unstable (e.g., human walking). Forward dynamic simulations are, however, essential to studies of muscle coordination. We have developed an alternative approach, using the linear quadratic follower (LQF) algorithm, which computes the NJT such that a stable simulation of the observed movement is produced and the measurements are replicated as well as possible. The LQF algorithm does not employ constraints depending on explicit differentiation of the kinematic data, but rather employs those depending on specification of a cost function, based on quantitative assumptions about data confidence. We illustrate the usefulness of the LQF approach by using it to estimate NJT exerted by standing humans perturbed by support-surface movements. We show that unless the number of kinematic and force variables recorded is sufficiently high, the confidence that can be placed in the estimates of the NJT, obtained by any method (e.g., LQF, or the inverse dynamics approach), may be unsatisfactorily low.

  17. CARA Status and Upcoming Enhancements

    NASA Technical Reports Server (NTRS)

    Johnson, Megan

    2017-01-01

    CAS 8.4.3 was deployed to operations on 13 June 2017. Discrepancies between 3D Pc estimates and advanced Monte Carlo equinoctial-sampling Pc estimates were discovered and discussed at the 23 May 2017 Users' Forum. The patch created the Reporting Pc, defined as the greater of the calculated 2D and 3D Pc values; the Reporting Pc is now the value reported in the CDMs, on the Summary Report, and on the Maneuver Screening Analysis (MSA) Report. Both the 2D and 3D Pc were added to the Summary Report details section. The patch also updated the 3D Pc algorithm to eliminate velocity covariance from the Pc calculation, which will bring 2D and 3D Pc into close alignment for the vast majority of events, particularly those in which the 2D/3D discrepancy was found.

  18. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Despite the failure of standard methods for deriving error estimates, the computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  19. Estimation of Ground Reaction Forces and Moments During Gait Using Only Inertial Motion Capture

    PubMed Central

    Karatsidis, Angelos; Bellusci, Giovanni; Schepers, H. Martin; de Zee, Mark; Andersen, Michael S.; Veltink, Peter H.

    2016-01-01

    Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which often are used to infer information for many musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully-ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized over normal walking speed as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%) and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse GRF&M (ρ = 0.826, rRMSE = 18.2%). Sensitivity analysis was performed on the effect of the cut-off frequency used in the filtering of the input kinematics, as well as the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing comparable accuracy with optical motion capture prediction. This approach enables applications that require estimation of the kinetics during walking outside the gait laboratory. PMID:28042857
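The smooth transition assumption used above to resolve the double-stance indeterminacy can be illustrated with a toy weighting function: the trailing foot's share of the total vertical force decays smoothly from one to zero over the double-support interval, and the leading foot takes the remainder. The cubic smoothstep window below is one plausible choice, not necessarily the paper's exact function:

```python
import numpy as np

def trailing_share(s):
    """s in [0, 1]: normalized time through the double-support phase."""
    s = np.clip(s, 0.0, 1.0)
    return 1.0 - (3.0 * s**2 - 2.0 * s**3)   # smoothstep from 1 down to 0

total_fz = 800.0                     # total vertical GRF, N (illustrative)
s = np.linspace(0.0, 1.0, 5)
fz_trailing = trailing_share(s) * total_fz
fz_leading = total_fz - fz_trailing  # the two shares always sum to the total
```

By construction the distributed forces always sum to the total external force obtained from the equations of motion, and the smooth window avoids discontinuities at heel strike and toe off.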

  20. Experimental Evaluation of UWB Indoor Positioning for Sport Postures

    PubMed Central

    Defraye, Jense; Steendam, Heidi; Gerlo, Joeri; De Clercq, Dirk; De Poorter, Eli

    2018-01-01

    Radio frequency (RF)-based indoor positioning systems (IPSs) use wireless technologies (including Wi-Fi, Zigbee, Bluetooth, and ultra-wide band (UWB)) to estimate the location of persons in areas where no Global Positioning System (GPS) reception is available, for example in indoor stadiums or sports halls. Of the above-mentioned forms of radio frequency (RF) technology, UWB is considered one of the most accurate approaches because it can provide positioning estimates with centimeter-level accuracy. However, it is not yet known whether UWB can also offer such accurate position estimates during strenuous dynamic activities in which moves are characterized by fast changes in direction and velocity. To answer this question, this paper investigates the capabilities of UWB indoor localization systems for tracking athletes during their complex (and most of the time unpredictable) movements. To this end, we analyze the impact of on-body tag placement locations and human movement patterns on localization accuracy and communication reliability. Moreover, two localization algorithms (particle filter and Kalman filter) with different optimizations (bias removal, non-line-of-sight (NLoS) detection, and path determination) are implemented. It is shown that although the optimal choice of optimization depends on the type of movement patterns, some of the improvements can reduce the localization error by up to 31%. Overall, depending on the selected optimization and on-body tag placement, our algorithms show good results in terms of positioning accuracy, with average errors in position estimates of 20 cm. This makes UWB a suitable approach for tracking dynamic athletic activities. PMID:29315267

  1. Estimating soil water content from ground penetrating radar coarse root reflections

    NASA Astrophysics Data System (ADS)

    Liu, X.; Cui, X.; Chen, J.; Li, W.; Cao, X.

    2016-12-01

    Soil water content (SWC) is an indispensable variable for understanding the organization of natural ecosystems and biodiversity. Especially in semiarid and arid regions, soil moisture is plants' primary source of water and largely determines their strategies for growth and survival, such as root depth, distribution and the competition between them. Ground penetrating radar (GPR), a noninvasive geophysical technique, has been regarded over the past decades as an accurate tool for measuring soil water content at intermediate scales. For soil water content estimation with surface GPR, the fixed-antenna-offset reflection method has been considered to have the potential to obtain the average soil water content between the land surface and reflectors, while providing high resolution and short measurement times. In this study, a 900 MHz surface GPR antenna was used to estimate SWC with the fixed-offset reflection method; plant coarse roots (with diameters greater than 5 mm) were regarded as reflectors; and an advanced GPR data interpretation method, HADA (hyperbola automatic detection algorithm), was introduced to automatically obtain the average velocity by recognizing coarse-root hyperbolic reflection signals on GPR radargrams while estimating SWC. In addition, a formula was deduced to determine the interval average SWC between two roots at different depths. We examined the performance of the proposed method on a dataset simulated under different scenarios. Results showed that HADA could provide a reasonable average velocity to estimate SWC without knowledge of root depth, and the interval average SWC could also be determined. When the proposed method was applied to the estimation of SWC on a real-field measurement dataset, a very small vertical soil water content gradient of about 0.006 with depth was captured as well. 
Therefore, the proposed method can be used to estimate average soil water content from ground penetrating radar coarse root reflections and to obtain the interval average SWC between two roots at different depths. It is very promising for measuring root-zone soil moisture and mapping the soil moisture distribution around a shrub or even at field plot scale.
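One common way to turn a GPR-derived average velocity into SWC, given here as a generic petrophysical sketch rather than the study's exact calibration, is via the relative permittivity and Topp's (1980) empirical equation. The input velocity value is illustrative:

```python
# Two-step conversion: (1) velocity -> relative permittivity via v = c / sqrt(eps);
# (2) permittivity -> volumetric water content via Topp's empirical equation.

C_LIGHT = 0.3                        # speed of light, m/ns

def swc_from_velocity(v):
    """v: GPR wave velocity in m/ns; returns volumetric SWC (m^3/m^3)."""
    eps = (C_LIGHT / v) ** 2         # relative dielectric permittivity
    return (-5.3e-2 + 2.92e-2 * eps
            - 5.5e-4 * eps**2 + 4.3e-6 * eps**3)

v = 0.10                             # m/ns, typical of moist soil (eps ~ 9)
theta = swc_from_velocity(v)
```

With the average velocity supplied by HADA-detected root hyperbolas, the same conversion yields the average SWC between the surface and each root reflector.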

  2. In vivo lateral blood flow velocity measurement using speckle size estimation.

    PubMed

    Xu, Tiantian; Hozan, Mohsen; Bashford, Gregory R

    2014-05-01

    In previous studies, we proposed blood flow measurement using speckle size estimation, which estimates the lateral component of blood flow within a single image frame, based on the observation that the speckle pattern corresponding to blood reflectors (typically red blood cells) stretches (i.e., is "smeared") if blood flow is in the same direction as the electronically controlled transducer line selection in a 2-D image. In this observational study, the clinical viability of ultrasound blood flow velocity measurement using speckle size estimation was investigated and compared with that of conventional spectral Doppler on carotid artery blood flow data collected from human patients in vivo. Ten patients (six male, four female) were recruited. Right carotid artery blood flow data were collected in an interleaved fashion (alternating Doppler and B-mode A-lines) with an Antares Ultrasound Imaging System and transferred to a PC via the Axius Ultrasound Research Interface. The scanning velocity was 77 cm/s, and a 4-s interval of flow data was collected from each subject to cover three to five complete cardiac cycles. Conventional spectral Doppler data were collected simultaneously to compare with estimates made by speckle size estimation. The results indicate that the peak systolic velocities measured with the two methods are comparable (within ±10%) if the scan velocity is greater than or equal to the flow velocity. When the scan velocity is slower than the peak systolic velocity, the speckle stretch method asymptotes to the scan velocity. Thus, the speckle stretch method is able to accurately measure pure lateral flow, which conventional Doppler cannot do. In addition, an initial comparison of the speckle size estimation and color Doppler methods with respect to computational complexity and data acquisition time indicated potential time savings in blood flow velocity estimation using speckle size estimation. 
Further studies are needed for calculation of the speckle stretch method across a field of view and combination with an appropriate axial flow estimator. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  3. Assimilating Eulerian and Lagrangian data in traffic-flow models

    NASA Astrophysics Data System (ADS)

    Xia, Chao; Cochrane, Courtney; DeGuire, Joseph; Fan, Gaoyang; Holmes, Emma; McGuirl, Melissa; Murphy, Patrick; Palmer, Jenna; Carter, Paul; Slivinski, Laura; Sandstede, Björn

    2017-05-01

    Data assimilation of traffic flow remains a challenging problem. One difficulty is that data come from different sources ranging from stationary sensors and camera data to GPS and cell phone data from moving cars. Sensors and cameras give information about traffic density, while GPS data provide information about the positions and velocities of individual cars. Previous methods for assimilating Lagrangian data collected from individual cars relied on specific properties of the underlying computational model or its reformulation in Lagrangian coordinates. These approaches make it hard to assimilate both Eulerian density and Lagrangian positional data simultaneously. In this paper, we propose an alternative approach that allows us to assimilate both Eulerian and Lagrangian data. We show that the proposed algorithm is accurate and works well in different traffic scenarios and regardless of whether ensemble Kalman or particle filters are used. We also show that the algorithm is capable of estimating parameters and assimilating real traffic observations and synthetic observations obtained from microscopic models.
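
    Since the abstract reports results with ensemble Kalman filters, a minimal sketch of one stochastic EnKF analysis step may help fix ideas. The toy state (five density cells, one stationary sensor observing one cell) and all numerical values are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H):
    """One stochastic EnKF analysis step.

    ensemble : (n_members, n_state) prior states
    obs      : scalar observation (e.g. density at one sensor)
    obs_var  : observation error variance
    H        : (n_state,) linear observation operator
    """
    n = ensemble.shape[0]
    rng = np.random.default_rng(0)
    Hx = ensemble @ H                            # predicted observations
    x_mean = ensemble.mean(axis=0)
    # sample cross-covariance and innovation variance
    P_xh = (ensemble - x_mean).T @ (Hx - Hx.mean()) / (n - 1)
    P_hh = np.var(Hx, ddof=1) + obs_var
    K = P_xh / P_hh                              # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
    return ensemble + np.outer(perturbed - Hx, K)

# toy example: 20-member ensemble over a 5-cell traffic-density state
ens = np.random.default_rng(1).normal(0.3, 0.05, size=(20, 5))
H = np.array([0.0, 0.0, 1.0, 0.0, 0.0])          # sensor observes cell 2
post = enkf_update(ens, 0.45, 0.01 ** 2, H)      # pulls cell 2 toward 0.45
```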

  4. Assimilation of drifters' trajectories in velocity fields from coastal radar and model via the Lagrangian assimilation algorithm LAVA.

    NASA Astrophysics Data System (ADS)

    Berta, Maristella; Bellomo, Lucio; Griffa, Annalisa; Gatimu Magaldi, Marcello; Marmain, Julien; Molcard, Anne; Taillandier, Vincent

    2013-04-01

    The Lagrangian assimilation algorithm LAVA (LAgrangian Variational Analysis) is customized for coastal areas in the framework of the TOSCA (Tracking Oil Spills & Coastal Awareness network) Project, to improve the response to maritime accidents in the Mediterranean Sea. LAVA assimilates drifters' trajectories into velocity fields that may come from either coastal radars or numerical models. In the present study, LAVA is applied to the coastal area in front of Toulon (France). Surface currents are available from a WERA radar network (2 km spatial resolution, every 20 minutes) and from the GLAZUR model (1/64° spatial resolution, every hour). The cluster of drifters considered consists of 7 buoys, transmitting every 15 minutes for a period of 5 days. Three assimilation cases are considered: i) correction of the radar velocity field, ii) correction of the model velocity field and iii) reconstruction of the velocity field from drifters only. It is found that drifters' trajectories compare well with the ones obtained by the radar, and the correction to the radar velocity field is therefore minimal. In contrast, observed and numerical trajectories separate rapidly, and the correction to the model velocity field is substantial. For the reconstruction from drifters only, the velocity fields obtained are similar to the radar ones, but limited to the neighborhood of the drifter paths.

  5. Relative velocity change measurement based on seismic noise analysis in exploration geophysics

    NASA Astrophysics Data System (ADS)

    Corciulo, M.; Roux, P.; Campillo, M.; Dubuq, D.

    2011-12-01

    Passive monitoring techniques based on noise cross-correlation analysis are still debated in exploration geophysics, even though recent studies have shown impressive performance in seismology at larger scales. Monitoring the time evolution of complex geological structures using noise data involves localization of noise sources and measurement of relative velocity variations. Monitoring relative velocity variations only requires the measurement of phase shifts between seismic noise cross-correlation functions computed for successive time recordings. The existing algorithms, such as Stretching and Doublet, are computationally demanding, making them impractical when continuous datasets are acquired on dense arrays. We present here an innovative technique for passive monitoring based on the measurement of the instantaneous phase of noise-correlated signals. The Instantaneous Phase Variation (IPV) technique aims at combining the advantages of the Stretching and Doublet methods while providing a faster measurement of the relative velocity change. IPV takes advantage of the Hilbert transform to compute, in the time domain, the phase difference between two noise correlation functions. The relative velocity variation is measured through the slope of the linear regression of the phase-difference curve as a function of correlation time. The large number of noise correlation functions classically available at exploration scale on dense arrays allows for a statistical analysis that further improves the precision of the velocity-change estimate. In this work, numerical tests first compare the performance of IPV to the Stretching and Doublet techniques in terms of accuracy, robustness and computation time. Experimental results are then presented using a seismic noise dataset with five days of continuous recording on 397 geophones spread over a ~1 km² area.
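
    The phase-slope idea described above (Hilbert transform, phase difference, linear regression over lag time) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the synthetic waveforms and parameter values are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def ipv_dvv(ref_cc, cur_cc, dt):
    """Sketch of an Instantaneous Phase Variation (IPV) estimate.

    A homogeneous velocity increase dv/v advances arrivals by
    delta_t(t) = -(dv/v) * t, so the instantaneous-phase difference
    between current and reference correlations grows linearly with
    lag time; the regression slope divided by the dominant angular
    frequency gives dv/v.
    """
    phi_ref = np.unwrap(np.angle(hilbert(ref_cc)))
    phi_cur = np.unwrap(np.angle(hilbert(cur_cc)))
    t = np.arange(len(ref_cc)) * dt
    slope = np.polyfit(t, phi_cur - phi_ref, 1)[0]
    omega = np.median(np.diff(phi_ref)) / dt     # dominant angular frequency
    return slope / omega

# synthetic check: a 1% stretch of the waveform, i.e. dv/v = 0.01
dt = 0.001
t = np.arange(0.0, 2.0, dt)
ref = np.sin(2 * np.pi * 10 * t) * np.exp(-t)
cur = np.sin(2 * np.pi * 10 * t * 1.01) * np.exp(-t)
dvv = ipv_dvv(ref, cur, dt)                      # close to 0.01
```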

  6. Mapping conduction velocity of early embryonic hearts with a robust fitting algorithm

    PubMed Central

    Gu, Shi; Wang, Yves T; Ma, Pei; Werdich, Andreas A; Rollins, Andrew M; Jenkins, Michael W

    2015-01-01

    Cardiac conduction maturation is an important and integral component of heart development. Optical mapping with voltage-sensitive dyes allows sensitive measurements of electrophysiological signals over the entire heart. However, accurate measurements of conduction velocity during early cardiac development are typically hindered by the low signal-to-noise ratio (SNR) of action potential measurements. Here, we present a novel image processing approach based on least-squares optimizations, which enables high-resolution, low-noise conduction velocity mapping of small tubular hearts. First, the action potential trace measured at each pixel is fit to a curve consisting of two cumulative normal distribution functions. Then, the activation time at each pixel is determined from the fit, and the spatial gradient of activation time is determined with a two-dimensional (2D) linear fit over a square-shaped window. The size of the window is adaptively enlarged until the gradients can be determined within a preset precision. Finally, the conduction velocity is calculated from the activation time gradient and further corrected for the three-dimensional (3D) geometry, which can be obtained by optical coherence tomography (OCT). We validated the approach using published activation potential traces based on computer simulations. We further validated the method by adding artificially generated noise to the signal to simulate various SNR conditions, using a curved simulated image (digital phantom) that resembles a tubular heart. The method proved to be robust even at very low SNR (SNR = 2-5). We also established an empirical equation to estimate the maximum conduction velocity that can be accurately measured under different conditions (e.g. sampling rate, SNR, and pixel size). Finally, we demonstrated high-resolution conduction velocity maps of the quail embryonic heart at a looping stage of development. PMID:26114034
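
    A minimal sketch of the pipeline described above (curve fit for activation time, plane fit of the activation-time window, speed = 1/|∇t|). For brevity it fits only the rising cumulative normal rather than the paper's two-term curve, and the geometry, noise level and wave speed are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def upstroke(t, t_act, sigma, amp, base):
    # action-potential upstroke modeled as one cumulative normal
    return base + amp * norm.cdf(t, loc=t_act, scale=sigma)

def activation_time(t, trace):
    # least-squares fit; t_act is the half-rise time of the upstroke
    p0 = [t[np.argmax(np.gradient(trace))], 1.0, np.ptp(trace), trace.min()]
    popt, _ = curve_fit(upstroke, t, trace, p0=p0)
    return popt[0]

def conduction_velocity(act_map, dx):
    # plane-fit the activation-time window; speed = 1 / |grad t|
    ny, nx = act_map.shape
    X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)
    A = np.column_stack([X.ravel(), Y.ravel(), np.ones(X.size)])
    gx, gy, _ = np.linalg.lstsq(A, act_map.ravel(), rcond=None)[0]
    return 1.0 / np.hypot(gx, gy)

# synthetic planar wave at 0.2 mm/ms crossing a 5x5 pixel window
dx = 0.1                                  # mm per pixel (assumed)
t = np.linspace(0.0, 20.0, 200)           # ms
acts = np.zeros((5, 5))
for i in range(5):
    for j in range(5):
        rng = np.random.default_rng(5 * i + j)
        trace = upstroke(t, 5.0 + j * dx / 0.2, 0.5, 1.0, 0.0)
        acts[i, j] = activation_time(t, trace + rng.normal(0, 0.01, t.size))
v_est = conduction_velocity(acts, dx)     # close to 0.2 mm/ms
```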

  7. Shear-wave velocity model from Rayleigh wave group velocities centered on the Sacramento/San Joaquin Delta

    USGS Publications Warehouse

    Fletcher, Jon Peter B.; Erdem, Jemile

    2017-01-01

    Rayleigh wave group velocities obtained from ambient noise tomography are inverted for an upper crustal model of the Central Valley, California, centered on the Sacramento/San Joaquin Delta. Two methods were tried; the first uses SURF96, a least-squares routine. It provides a good fit to the data, but convergence is dependent on the starting model. The second uses a genetic algorithm, whose starting model is random. This method was tried at several nodes in the model and compared to the output from SURF96. The genetic code is run five times and the variance of the output of all five models can be used to obtain an estimate of error. SURF96 produces a more regular solution mostly because it is typically run with a smoothing constraint. Models from the genetic code are generally consistent with the SURF96 code sometimes producing lower velocities at depth. The full model, calculated using SURF96, employed a 2-pass strategy, which used a variable damping scheme in the first pass. The resulting model shows low velocities near the surface in the Central Valley with a broad asymmetrical sedimentary basin located close to the western edge of the Central Valley near 122°W longitude. At shallow depths the Rio Vista Basin is found nestled between the Pittsburgh/Kirby Hills and Midland faults, but a significant basin also seems to exist to the west of the Kirby Hills fault. There are other possible correlations between fast and slow velocities in the Central Valley and geologic features such as the Stockton Arch, oil or gas producing regions and the fault-controlled western boundary of the Central Valley.
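
    The run-the-stochastic-inversion-several-times strategy (five runs, spread across runs as an error estimate) can be illustrated on a toy two-parameter problem. `differential_evolution` stands in for the genetic code here (both are stochastic, population-based optimizers), and the forward model is a made-up mixing law, not a dispersion calculation:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(42)
periods = np.linspace(1, 10, 20)

def forward(v):
    # hypothetical smooth blend of shallow (v1) and deep (v2) velocity
    v1, v2 = v
    w = np.exp(-periods / 5.0)
    return w * v1 + (1 - w) * v2

obs = forward([2.5, 3.4]) + rng.normal(0, 0.02, periods.size)

def misfit(v):
    return np.sum((forward(v) - obs) ** 2)

# five runs from random initial populations; the spread across runs
# gives an error bar on the recovered model
runs = np.array([differential_evolution(misfit, bounds=[(1, 5), (1, 5)],
                                        seed=s).x for s in range(5)])
mean, std = runs.mean(axis=0), runs.std(axis=0)
```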

  8. Shear-wave Velocity Model from Rayleigh Wave Group Velocities Centered on the Sacramento/San Joaquin Delta

    NASA Astrophysics Data System (ADS)

    Fletcher, Jon B.; Erdem, Jemile

    2017-10-01

    Rayleigh wave group velocities obtained from ambient noise tomography are inverted for an upper crustal model of the Central Valley, California, centered on the Sacramento/San Joaquin Delta. Two methods were tried; the first uses SURF96, a least squares routine. It provides a good fit to the data, but convergence is dependent on the starting model. The second uses a genetic algorithm, whose starting model is random. This method was tried at several nodes in the model and compared to the output from SURF96. The genetic code is run five times and the variance of the output of all five models can be used to obtain an estimate of error. SURF96 produces a more regular solution mostly because it is typically run with a smoothing constraint. Models from the genetic code are generally consistent with the SURF96 code sometimes producing lower velocities at depth. The full model, calculated using SURF96, employed a 2-pass strategy, which used a variable damping scheme in the first pass. The resulting model shows low velocities near the surface in the Central Valley with a broad asymmetrical sedimentary basin located close to the western edge of the Central Valley near 122°W longitude. At shallow depths, the Rio Vista Basin is found nestled between the Pittsburgh/Kirby Hills and Midland faults, but a significant basin also seems to exist to the west of the Kirby Hills fault. There are other possible correlations between fast and slow velocities in the Central Valley and geologic features such as the Stockton Arch, oil or gas producing regions and the fault-controlled western boundary of the Central Valley.

  9. CELFE: Coupled Eulerian-Lagrangian Finite Element program for high velocity impact. Part 1: Theory and formulation. [hydroelasto-viscoplastic model

    NASA Technical Reports Server (NTRS)

    Lee, C. H.

    1978-01-01

    A 3-D finite element program capable of simulating the dynamic behavior in the vicinity of the impact point and predicting the dynamic response in the remaining part of a structural component subjected to high velocity impact is discussed. The finite element algorithm is formulated in a general moving coordinate system. In the vicinity of the impact point, contained by a moving failure front, the relative velocity of the coordinate system approaches the material particle velocity. The dynamic behavior inside this region is described by an Eulerian formulation based on a hydroelasto-viscoplastic model. The failure front, which can be regarded as the boundary of the impact zone, is described by a transition layer. The layer changes the representation from the Eulerian mode to the Lagrangian mode outside the failure front by reducing the relative velocity of the coordinate system to zero. The dynamic response in the remaining part of the structure, described by the Lagrangian formulation, is treated using advanced structural analysis. An interfacing algorithm for coupling CELFE with NASTRAN is constructed to provide computational capabilities for large structures.

  10. An oscillation-free flow solver based on flux reconstruction

    NASA Astrophysics Data System (ADS)

    Aguerre, Horacio J.; Pairetti, Cesar I.; Venier, Cesar M.; Márquez Damián, Santiago; Nigro, Norberto M.

    2018-07-01

    In this paper, a segregated algorithm is proposed to suppress high-frequency oscillations in the velocity field for incompressible flows. In this context, a new velocity formula based on a reconstruction of face fluxes is defined, eliminating high-frequency errors. In analogy to the Rhie-Chow interpolation, this approach is equivalent to including a flux-based pressure gradient with a velocity diffusion in the momentum equation. In order to guarantee second-order accuracy of the numerical solver, a set of conditions is defined for the reconstruction operator. To arrive at the final formulation, a review of the state of the art in velocity reconstruction procedures is presented, comparing them through an error analysis. A new operator is then obtained by means of a flux-difference minimization satisfying the required spatial accuracy. The accuracy of the new algorithm is analyzed by performing mesh convergence studies for unsteady Navier-Stokes problems with analytical solutions. The stabilization properties of the solver are then tested on a problem where spurious numerical oscillations arise in the velocity field. The results show a remarkable performance of the proposed technique, eliminating high-frequency errors without losing accuracy.

  11. A universal approach to determine footfall timings from kinematics of a single foot marker in hoofed animals

    PubMed Central

    Clayton, Hilary M.

    2015-01-01

    The study of animal movement commonly requires the segmentation of continuous data streams into individual strides. The use of forceplates and foot-mounted accelerometers readily allows the detection of the foot-on and foot-off events that define a stride. However, when relying on optical methods such as motion capture, there is a lack of validated, robust, universally applicable stride-event detection methods. To date, no method has been validated for movement on a circle, while algorithms are commonly specific to front/hind limbs or gait. In this study, we aimed to develop and validate kinematic stride segmentation methods applicable to movement on a straight line and a circle at walk and trot, which rely exclusively on a single dorsal hoof marker. The advantage of such marker placement is its robustness to marker loss and occlusion. Eight horses walked and trotted on a straight line and in a circle over an array of multiple forceplates. Kinetic events were detected based on the vertical force profile and used as the reference values. Kinematic events were detected from the displacement, velocity or acceleration signals of the dorsal hoof marker, depending on the algorithm, using (i) defined thresholds associated with derived movement signals and (ii) specific events in the derived movement signals. Method comparison was performed by calculating limits of agreement, accuracy, between-horse precision and within-horse precision based on differences between kinetic and kinematic events. In addition, we examined the effect of force thresholds ranging from 50 to 150 N on the timings of kinetic events. The two approaches resulted in very good and comparable performance: of the 3,074 processed footfall events, 95% of individual foot-on and foot-off events differed by no more than 26 ms from the kinetic event, with average accuracy between −11 and 10 ms and average within- and between-horse precision ≤8 ms.
    While the event-based method may be less likely to suffer from scaling effects, the threshold-based method may prove more valuable on soft ground. While we found that the use of velocity thresholds for foot-on detection results in biased event estimates for the foot on the inside of the circle at trot, adjusting the thresholds for this condition negated the effect. For the final four algorithms, we found no noteworthy bias between conditions or between front- and hind-foot timings. Different force thresholds in the range of 50 to 150 N had the greatest systematic effect on foot-off estimates in the hind limbs (on average up to 16 ms per condition), greater than the effect on foot-on estimates or on foot-off estimates in the forelimbs (on average up to ±7 ms per condition). PMID:26157641
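
    A threshold-based event detector of the kind described can be sketched as follows. The thresholds, sampling rate and synthetic hoof-speed profile are illustrative assumptions, not the validated algorithms of the study:

```python
import numpy as np

def detect_events(speed, fs, on_thresh, off_thresh):
    """Threshold-based foot-on / foot-off detection from the speed of a
    single dorsal hoof marker (sketch).

    Foot-on: speed falls below on_thresh (hoof decelerates into stance).
    Foot-off: speed rises above off_thresh (hoof accelerates into swing).
    Returns event times in seconds.
    """
    below = speed < on_thresh
    above = speed > off_thresh
    foot_on = np.where(~below[:-1] & below[1:])[0] + 1
    foot_off = np.where(~above[:-1] & above[1:])[0] + 1
    return foot_on / fs, foot_off / fs

# synthetic hoof speed: stance (near zero) / swing (fast) cycles
fs = 200.0
t = np.arange(0.0, 2.0, 1 / fs)
speed = np.clip(3.0 * np.sin(2 * np.pi * 1.5 * t), 0, None)  # 3 strides in 2 s
on_t, off_t = detect_events(speed, fs, on_thresh=0.2, off_thresh=0.2)
```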

  12. Motion estimation accuracy for visible-light/gamma-ray imaging fusion for portable portal monitoring

    NASA Astrophysics Data System (ADS)

    Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Gee, Timothy F.

    2010-01-01

    The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Portable systems that can detect significant quantities of fissile material in vehicular traffic are of particular interest. We have constructed a prototype rapid-deployment gamma-ray imaging portal monitor that uses machine vision and gamma-ray imaging to monitor multiple lanes of traffic. Vehicles are detected and tracked by using point detection and optical flow methods as implemented in the OpenCV software library. Points are clustered together, but imperfections in the detected points and tracks cause errors in the accuracy of the vehicle position estimates. The resulting errors cause a "blurring" effect in the gamma image of the vehicle. To minimize these errors, we have compared a variety of motion estimation techniques, including an estimate using the median of the clustered points, a "best-track" filtering algorithm, and a constant-velocity motion estimation model. The accuracy of these methods is contrasted and compared to a manually verified ground-truth measurement by quantifying the root-mean-square differences in the times at which vehicles cross the gamma-ray image pixel boundaries.
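
    The median-of-clustered-points idea can be sketched in a few lines; taking the median of per-point velocities suppresses drifting or lost feature tracks. The track values and noise levels below are synthetic assumptions:

```python
import numpy as np

def median_track_velocity(tracks, dt):
    """Robust vehicle-velocity estimate from clustered feature tracks.

    tracks : (n_points, n_frames) x-positions of points on one vehicle.
    Outlier tracks are suppressed by taking the median of the
    per-point, per-frame velocities.
    """
    v = np.diff(tracks, axis=1) / dt
    return np.median(v)

rng = np.random.default_rng(0)
dt = 1 / 30.0                                             # 30 fps camera
frames = np.arange(10)
good = 12.0 * frames * dt + rng.normal(0, 0.01, (8, 10))  # 12 m/s vehicle
bad = np.cumsum(rng.normal(0, 0.5, (2, 10)), axis=1)      # drifting outliers
tracks = np.vstack([good, bad])
v_med = median_track_velocity(tracks, dt)                 # close to 12 m/s
```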

  13. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
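
    For context, the MART baseline mentioned above can be sketched on a toy system; voxel intensities stay non-negative because the updates are multiplicative. The relaxation constant and the 2-voxel, 3-ray geometry are illustrative assumptions:

```python
import numpy as np

def mart(A, b, n_iter=50, mu=1.0):
    """Multiplicative Algebraic Reconstruction Technique (sketch).

    A : (n_rays, n_voxels) weight matrix; b : (n_rays,) projections.
    Each ray update rescales voxels by the measured/predicted ratio,
    raised to the ray weight times the relaxation constant mu.
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(len(b)):
            proj = A[i] @ x
            if proj > 0 and b[i] > 0:
                x *= (b[i] / proj) ** (mu * A[i])
            elif b[i] == 0:
                x[A[i] > 0] = 0.0          # ray saw no intensity
    return x

# toy consistent system with known solution x = [2, 3]
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x = mart(A, A @ x_true)                    # recovers [2, 3]
```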

  14. Hierarchical information fusion for global displacement estimation in microsensor motion capture.

    PubMed

    Meng, Xiaoli; Zhang, Zhi-Qiang; Wu, Jian-Kang; Wong, Wai-Choong

    2013-07-01

    This paper presents a novel hierarchical information fusion algorithm to obtain human global displacement for different gait patterns, including walking, running, and hopping, based on seven body-worn inertial and magnetic measurement units. In the first-level sensor fusion, the orientation of each segment is obtained by a complementary Kalman filter (CKF), which compensates for the orientation error of the inertial navigation system solution through its error state vector. For each foot segment, the displacement is also estimated by the CKF, and a zero velocity update is included to reduce drift in the foot displacement estimate. Based on the segment orientations and left/right foot locations, two global displacement estimates can be acquired from the left and right lower limbs separately using a linked biomechanical model. In the second-level geometric fusion, another Kalman filter is deployed to compensate for the difference between the two estimates from the sensor fusion and to obtain a more accurate overall global displacement estimate. The updated global displacement is transmitted back to the left/right foot based on the human lower-limb biomechanical model to restrict the drift in both foot displacement estimates. The experimental results show that our proposed method can accurately estimate human locomotion for the three different gait patterns relative to an optical motion tracker.
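
    The zero-velocity update used for drift reduction can be illustrated with a one-dimensional sketch. The bias value and stance schedule are assumptions, and a real implementation would fold the reset into the CKF error state rather than hard-zeroing:

```python
import numpy as np

def integrate_with_zupt(acc, stance, dt):
    """Foot velocity from measured acceleration with zero-velocity
    updates (ZUPT): while the foot is detected as stationary, the
    integrated velocity is reset, bounding the drift caused by
    sensor bias.
    """
    v = np.zeros_like(acc)
    for k in range(1, len(acc)):
        v[k] = 0.0 if stance[k] else v[k - 1] + acc[k] * dt
    return v

# a foot that is truly still, measured with a 0.2 m/s^2 bias
fs = 100
t = np.arange(0.0, 2.0, 1 / fs)
acc = np.full_like(t, 0.2)                 # pure sensor bias
stance = (t % 1.0) < 0.5                   # stance half of each stride
v_zupt = integrate_with_zupt(acc, stance, 1 / fs)
v_raw = np.cumsum(acc) / fs                # uncorrected: drift accumulates
```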

  15. Real-time Upstream Monitoring System (RUMS): Forecasting arrival times of interplanetary shocks using energetic particle data from ACE

    NASA Astrophysics Data System (ADS)

    Ho, G.; Donegan, M.; Vandegriff, J.; Wagstaff, K.

    We have created a system for predicting the arrival times at Earth of interplanetary (IP) shocks that originate at the Sun. This system is currently available on the web (http://sd-www.jhuapl.edu/UPOS/RISP/index.html) and runs in real time. The input to our prediction algorithm is energetic particle data from the Electron, Proton, and Alpha Monitor (EPAM) instrument on NASA's Advanced Composition Explorer (ACE) spacecraft. Real-time EPAM data are obtained from the National Oceanic and Atmospheric Administration (NOAA) Space Environment Center (SEC). Our algorithm operates in two stages. First, it watches for a velocity dispersion signature (energetic ions show a flux enhancement followed by subsequent enhancements at lower energies), which is commonly seen upstream of a large IP shock. Once a precursor signature has been detected, a pattern recognition algorithm is used to analyze the time-series profile of the particle data and generate an estimate of the shock arrival time. Tests of the algorithm show an average error of roughly 9 hours for predictions made 24 hours before shock arrival and roughly 5 hours when the shock is 12 hours away. This can provide significant lead time and deliver critical information to mission planners, satellite operations controllers, and scientists. As of February 4, 2004, the ACE real-time stream has been switched to include data from another detector on EPAM. We are now processing the new real-time data stream and have made improvements to our algorithm based on these data. In this paper, we report prediction results from the updated algorithm.

  16. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water-slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. The methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
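
    The rate-then-filter procedure can be sketched end to end. The rating coefficients, channel area and tidal signal below are synthetic assumptions, and the Butterworth low-pass stands in for whatever tide filter the study actually used:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# --- rating: relate the index velocity to ADCP mean channel velocity ---
rng = np.random.default_rng(3)
index_v = rng.uniform(-1.5, 1.5, 40)                      # m/s
adcp_v = 1.1 * index_v + 0.05 + rng.normal(0, 0.02, 40)   # concurrent ADCP
slope, intercept = np.polyfit(index_v, adcp_v, 1)

# --- apply the rating, form discharge, low-pass out the tides ---
dt_h = 0.25                                  # 15-minute samples, in hours
t = np.arange(0, 10 * 24, dt_h)              # ten days
net_v = 0.04                                 # small residual index velocity
series = net_v + 1.0 * np.sin(2 * np.pi * t / 12.42)   # M2 tide on top
area = 500.0                                 # channel area, m^2 (assumed)
q = (slope * series + intercept) * area      # instantaneous discharge

b, a = butter(4, (1 / 30.0) / (0.5 / dt_h))  # 30-hour cutoff vs Nyquist
q_net = filtfilt(b, a, q)                    # tides removed: net discharge
```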

  17. Field Testing of an In-well Point Velocity Probe for the Rapid Characterization of Groundwater Velocity

    NASA Astrophysics Data System (ADS)

    Osorno, T.; Devlin, J. F.

    2017-12-01

    Reliable estimates of groundwater velocity are essential for the effective implementation of in-situ monitoring and remediation technologies. The In-well Point Velocity Probe (IWPVP) is an inexpensive, reusable tool developed for rapid measurement of groundwater velocity at the centimeter scale in monitoring wells. IWPVP measurements of groundwater speed are based on a small-scale tracer test conducted as ambient groundwater passes through the well screen and the body of the probe. The horizontal flow direction can be determined from the difference in tracer mass passing detectors placed in four funnel-and-channel pathways through the probe, arranged in a cross pattern. The design viability of the IWPVP was confirmed using a two-dimensional numerical model in Comsol Multiphysics, followed by a series of laboratory tank experiments in which IWPVP measurements were calibrated to quantify seepage velocities in both fine and medium sand. Lab results showed that the IWPVP was capable of measuring the seepage velocity in less than 20 minutes per test when the seepage velocity was in the range of 0.5 to 4.0 m/d. Further, the IWPVP estimated the groundwater speed with a precision of ± 7%, and an accuracy of ± 14%, on average. The horizontal flow direction was determined with an accuracy of ± 15°, on average. Recently, a pilot field test of the IWPVP was conducted in the Borden aquifer, C.F.B. Borden, Ontario, Canada. A total of approximately 44 IWPVP tests were conducted in two 2-inch groundwater monitoring wells comprising a 5 ft. section of #8 commercial well screen. Again, all tests were completed in under 20 minutes. The velocities estimated from IWPVP data were compared to 21 Point Velocity Probe (PVP) tests, as well as Darcy-based estimates of groundwater velocity. Preliminary data analysis shows strong agreement between the IWPVP and PVP estimates of groundwater velocity.
Further, both the IWPVP and PVP estimates of groundwater velocity appear to be reasonable when compared to a Darcy-based estimate of groundwater velocity, using the range of hydraulic conductivity values previously reported at the Borden aquifer. Based on these promising results, the IWPVP appears to be a viable tool for the determination of groundwater velocity at the centimeter-scale.
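
    The travel-time principle behind the probe (though not its calibrated implementation) can be sketched simply: the apparent velocity is the detector distance divided by the tracer arrival time. The breakthrough curve, path length and peak time below are hypothetical:

```python
import numpy as np

def seepage_velocity(times, conc, travel_dist):
    """Apparent velocity from a small-scale tracer test: distance from
    injection point to detector divided by the time of the tracer
    concentration peak (a simple peak-time estimator; the real probe
    is calibrated against known seepage velocities).
    """
    t_peak = times[np.argmax(conc)]
    return travel_dist / t_peak

# synthetic breakthrough curve: Gaussian pulse peaking at 400 s
t = np.linspace(0, 1200, 1201)
conc = np.exp(-((t - 400.0) ** 2) / (2 * 40.0 ** 2))
v = seepage_velocity(t, conc, travel_dist=0.02)   # 2 cm path (assumed)
v_per_day = v * 86400                             # about 4.32 m/d
```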

  18. Novel angle estimation for bistatic MIMO radar using an improved MUSIC

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Xiaofei; Chen, Han

    2014-09-01

    In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
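
    The noise-subspace projection at the heart of MUSIC can be sketched in its ordinary one-dimensional DOA form; the paper's joint DOD/DOA estimator builds on the same idea. The array size, source angles and SNR below are illustrative assumptions:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """1-D MUSIC pseudospectrum for a uniform linear array (sketch).

    X : (n_sensors, n_snapshots) array output; d : spacing in
    wavelengths. Steering vectors orthogonal to the noise subspace
    produce peaks at the source angles.
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = vecs[:, : n_sensors - n_sources]    # noise subspace
    k = np.arange(n_sensors)
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * k * np.sin(th))
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# two sources at -10 and 25 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(0)
n, snaps = 8, 200
k = np.arange(n)
A = np.column_stack([np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(a)))
                     for a in (-10.0, 25.0)])
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
N = 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
X = A @ S + N
grid = np.arange(-90, 90.5, 0.5)
spec = music_spectrum(X, 2, grid)            # peaks near -10 and 25 deg
```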

  19. FORTRAN program for analyzing ground-based radar data: Usage and derivations, version 6.2

    NASA Technical Reports Server (NTRS)

    Haering, Edward A., Jr.; Whitmore, Stephen A.

    1995-01-01

    A postflight FORTRAN program called 'radar' reads and analyzes ground-based radar data. The output includes position, velocity, and acceleration parameters. Air data parameters are also provided if atmospheric characteristics are input. This program can read data from any radar in three formats. Geocentric Cartesian position can also be used as input, which may be from an inertial navigation or Global Positioning System. Options include spike removal, data filtering, and atmospheric refraction corrections. Atmospheric refraction can be corrected using the quick White Sands method or the gradient refraction method, which allows accurate analysis of very low elevation angle and long-range data. Refraction properties are extrapolated from surface conditions, or a measured profile may be input. Velocity is determined by differentiating position. Accelerations are determined by differentiating velocity. This paper describes the algorithms used, gives the operational details, and discusses the limitations and errors of the program. Appendices A through E contain the derivations for these algorithms. These derivations include an improvement in speed to the exact solution for geodetic altitude, an improved algorithm over earlier versions for determining scale height, a truncation algorithm for speeding up the gradient refraction method, and a refinement of the coefficients used in the White Sands method for Edwards AFB, California. Appendix G contains the nomenclature.
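
    The differentiate-position-for-velocity, differentiate-velocity-for-acceleration chain can be sketched with central differences (the program's actual differentiation and filtering scheme is more elaborate; this only shows the principle):

```python
import numpy as np

def differentiate(signal, dt):
    """Central differences in the interior, one-sided at the endpoints,
    as provided by numpy.gradient on a uniform time grid."""
    return np.gradient(signal, dt)

# constant-acceleration check: x = 0.5 * a * t^2 with a = 3 m/s^2
dt = 0.1
t = np.arange(0.0, 10.0, dt)
x = 0.5 * 3.0 * t ** 2
v = differentiate(x, dt)       # recovers v = 3 t in the interior
acc = differentiate(v, dt)     # recovers a = 3 away from the edges
```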

  20. Magnetic resonance elastography of the kidneys: feasibility and reproducibility in young healthy adults.

    PubMed

    Rouvière, Olivier; Souchon, Rémi; Pagnoux, Gaële; Ménager, Jean-Michel; Chapelon, Jean-Yves

    2011-10-01

    To evaluate the feasibility and reproducibility of renal magnetic resonance elastography (MRE) in young healthy volunteers. Ten volunteers underwent renal MRE twice at a 4-5 week interval. The vibrations (45 and 76 Hz) were generated by a speaker positioned beneath the volunteers' back and centered on their left kidney. For each frequency, three sagittal slices were acquired (eight phase offsets per cycle, motion-encoding gradients successively positioned along the three directions of space). Shear velocity images were reconstructed using the curl operator combined with the local frequency estimation (LFE) algorithm. The mean shear velocities measured in the renal parenchyma during the two examinations were not significantly different and exhibited a mean variation of 6% at 45 Hz and 76 Hz. The mean shear velocities in renal parenchyma were 2.21 ± 0.14 m/s at 45 Hz (shear modulus of 4.9 ± 0.5 kPa) and 3.07 ± 0.17 m/s at 76 Hz (9.4 ± 0.8 kPa, P < 0.01). The mean shear velocities in the renal cortex and medulla were respectively 2.19 ± 0.13 m/s and 2.32 ± 0.16 m/s at 45 Hz (P = 0.002) and 3.06 ± 0.16 m/s and 3.10 ± 0.22 m/s at 76 Hz (P = 0.13). Renal MRE was feasible and reproducible. Two independent measurements of shear velocities in the renal parenchyma of the same subjects showed an average variability of 6%. Copyright © 2011 Wiley-Liss, Inc.
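
    The reported shear moduli follow from the measured shear velocities via mu = rho * v^2. Assuming a soft-tissue density near 1000 kg/m^3 (an assumption; the abstract does not state the density used) reproduces the quoted values:

```python
def shear_modulus_kpa(v_shear, rho=1000.0):
    """Shear modulus (kPa) from shear-wave speed (m/s): mu = rho * v^2.
    rho = 1000 kg/m^3 is an assumed soft-tissue density."""
    return rho * v_shear ** 2 / 1000.0

mu_45 = shear_modulus_kpa(2.21)   # ~4.88 kPa, matching the reported 4.9 kPa
mu_76 = shear_modulus_kpa(3.07)   # ~9.42 kPa, matching the reported 9.4 kPa
```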

  1. Slip Ratio Estimation and Regenerative Brake Control for Decelerating Electric Vehicles without Detection of Vehicle Velocity and Acceleration

    NASA Astrophysics Data System (ADS)

    Suzuki, Toru; Fujimoto, Hiroshi

    In slip ratio control systems, it is necessary to detect the vehicle velocity in order to obtain the slip ratio. However, it is very difficult to measure this velocity directly. We have previously proposed slip ratio estimation and control methods that do not require the vehicle velocity but do use the acceleration. In this paper, slip ratio estimation and control methods are proposed that require neither the vehicle velocity nor the acceleration during deceleration. We carried out simulations and experiments using an electric vehicle to verify the effectiveness of the proposed method.
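
    The paper's contribution is estimating the slip ratio without measuring the vehicle velocity; the quantity the estimator must reproduce is the conventional braking slip definition, sketched below (variable names are illustrative):

```python
def braking_slip_ratio(vehicle_velocity, wheel_radius, wheel_angular_velocity):
    """Conventional slip ratio during braking: lambda = (V - r*omega) / V.

    lambda = 0 for a free-rolling wheel, lambda = 1 for a locked wheel.
    """
    if vehicle_velocity <= 0.0:
        return 0.0
    return (vehicle_velocity - wheel_radius * wheel_angular_velocity) / vehicle_velocity

lam = braking_slip_ratio(20.0, 0.3, 60.0)   # r*omega = 18 m/s -> slip of 0.1
locked = braking_slip_ratio(20.0, 0.3, 0.0)  # locked wheel -> slip of 1.0
```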

  2. Estimating the Wet-Rock P-Wave Velocity from the Dry-Rock P-Wave Velocity for Pyroclastic Rocks

    NASA Astrophysics Data System (ADS)

    Kahraman, Sair; Fener, Mustafa; Kilic, Cumhur Ozcan

    2017-07-01

    Seismic methods are widely used for geotechnical investigations in volcanic areas and for determining the engineering properties of pyroclastic rocks in the laboratory. A relation between the wet- and dry-rock P-wave velocities would therefore be helpful to engineers evaluating the formation characteristics of pyroclastic rocks. To investigate the predictability of the wet-rock P-wave velocity from the dry-rock P-wave velocity, P-wave velocity measurements were conducted on 27 different pyroclastic rocks; dry-rock S-wave velocities were also measured. The test results were modeled using Gassmann's and Wood's theories, and the saturated P-wave velocities estimated from the theories fit the measured data well. For samples with porosity values below and above 20%, practical equations were derived for reliably estimating the wet-rock P-wave velocity as a function of the dry-rock P-wave velocity.
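
    Gassmann fluid substitution, one of the two theories mentioned, predicts the saturated bulk modulus from the dry-rock modulus. The sketch below uses illustrative elastic constants (quartz matrix, water fluid); the paper derives its own empirical fits, so treat all numbers as placeholders.

```python
import math

K_MIN, K_FL = 37.0, 2.25        # GPa: mineral (quartz) and fluid (water) bulk moduli
K_DRY, MU = 5.0, 4.0            # GPa: dry-rock bulk and shear moduli (illustrative)
PHI = 0.30                      # porosity
RHO_DRY, RHO_FL = 1900.0, 1000.0  # kg/m^3

def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Gassmann equation: saturated bulk modulus from the dry-rock modulus."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def vp(k_gpa, mu_gpa, rho):
    """P-wave velocity (m/s) from moduli in GPa and density in kg/m^3."""
    return math.sqrt((k_gpa + 4.0 * mu_gpa / 3.0) * 1e9 / rho)

vp_dry = vp(K_DRY, MU, RHO_DRY)                       # ~2330 m/s
k_sat = gassmann_k_sat(K_DRY, K_MIN, K_FL, PHI)
vp_wet = vp(k_sat, MU, RHO_DRY + PHI * RHO_FL)        # ~2640 m/s
```

    Note that the shear modulus is unchanged by fluid substitution in Gassmann's theory; only the bulk modulus and density are updated.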

  3. Stress wave velocity patterns in the longitudinal-radial plane of trees for defect diagnosis

    Treesearch

    Guanghui Li; Xiang Weng; Xiaocheng Du; Xiping Wang; Hailin Feng

    2016-01-01

    Acoustic tomography for urban tree inspection typically uses stress wave data to reconstruct tomographic images of the trunk cross section using an interpolation algorithm. This traditional technique does not take into account the stress wave velocity patterns along tree height. In this study, we proposed an analytical model for the wave velocity in the longitudinal–...

  4. Flight Path Synthesis and HUD Scaling for V/STOL Terminal Area Operations

    DOT National Transportation Integrated Search

    1995-04-01

    A two-circle horizontal flightpath synthesis algorithm for Vertical/Short Takeoff and Landing (V/STOL) terminal area operations is presented. This algorithm provides a flight-path that is tangential to the aircraft's velocity vector at the inst...

  5. QuakeUp: An advanced tool for a network-based Earthquake Early Warning system

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo

    2017-04-01

    The currently developed and operational regional Earthquake Early Warning systems are grounded on the assumption of a point-like earthquake source model and on 1-D ground motion prediction equations to estimate the earthquake impact. Here we propose a new network-based method which allows for issuing an alert based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed the damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The platform includes the most advanced techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., recently gemmated by the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanded P-wave time window. The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking intensity regions at larger epicentral distances. 
This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (I_MM) and by mapping the measured and predicted P-wave amplitudes on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Within times of the order of ten seconds from the earthquake origin, information about the area where moderate to strong ground shaking is expected can be sent to inner and outer sites, allowing the activation of emergency measures to protect people, secure industrial facilities and optimize site resilience after the disaster. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and for the spatial variability of strong ground motion related to crustal wave propagation and site amplification. In QuakeUp, the P-wave parameters are continuously measured on progressively expanded P-wave time windows, providing evolutionary and reliable estimates of the ground shaking distribution, especially in the case of very large events. Furthermore, to minimize the S-wave contamination of the P-wave signal portion, an efficient algorithm for the automatic detection of the S-wave arrival time, based on the real-time polarization analysis of the three-component seismogram, has been included. The final output of QuakeUp will be an automatic alert message transmitted to the sites to be secured during the earthquake emergency. The message contains all relevant information about the expected potential damage at the site and the time available for security actions (lead-time) after the warning. A global view of the system performance during and after the event (in play-back mode) is obtained through an end-user visual display, where the most relevant pieces of information are displayed and updated as soon as new data become available. 
The QuakeUp software platform is essentially aimed at improving reliability and accuracy of the parameter estimation, minimizing the uncertainties of the real-time estimates without losing the essential requirements of speed and robustness needed to activate rapid emergency actions.
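
    One of the measurements described above, the characteristic P-wave period τ_c, has a standard definition in the early-warning literature: τ_c = 2π / sqrt(∫u̇² dt / ∫u² dt) over the P-wave window. A minimal sketch (window length and sampling are illustrative):

```python
import math

def characteristic_period(displacement, dt):
    """tau_c = 2*pi / sqrt(int(v^2 dt) / int(u^2 dt)) over the P-wave window,
    with velocity obtained by central differences of the displacement."""
    u = displacement
    v = [(u[i + 1] - u[i - 1]) / (2.0 * dt) for i in range(1, len(u) - 1)]
    num = sum(x * x for x in v) * dt
    den = sum(x * x for x in u[1:-1]) * dt
    return 2.0 * math.pi / math.sqrt(num / den)

# For a monochromatic 1 Hz displacement, tau_c should recover ~1.0 s.
dt = 0.005
u = [math.sin(2.0 * math.pi * 1.0 * k * dt) for k in range(601)]  # 3 s window
tau_c = characteristic_period(u, dt)   # ~1.0 s
```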

  6. A comparison of kinematic algorithms to estimate gait events during overground running.

    PubMed

    Smith, Laura; Preece, Stephen; Mason, Duncan; Bramah, Christopher

    2015-01-01

    The gait cycle is frequently divided into two distinct phases, stance and swing, which can be accurately determined from ground reaction force data. In the absence of such data, kinematic algorithms can be used to estimate footstrike and toe-off. The performance of previously published algorithms is not consistent between studies. Furthermore, previous algorithms have not been tested at higher running speeds nor used to estimate ground contact times. Therefore, the purpose of this study was to develop a new, custom-designed, event detection algorithm and compare its performance with four previously tested algorithms at higher running speeds. Kinematic and force data were collected on twenty runners during overground running at 5.6 m/s. The five algorithms were then implemented, and estimated times for footstrike, toe-off and contact time were compared to ground reaction force data. There were large differences in the performance of each algorithm. The custom-designed algorithm provided the most accurate estimation of footstrike (True Error 1.2 ± 17.1 ms) and contact time (True Error 3.5 ± 18.2 ms). Compared to the other tested algorithms, the custom-designed algorithm provided an accurate estimation of footstrike and toe-off across different footstrike patterns. The custom-designed algorithm provides a simple but effective method to accurately estimate footstrike, toe-off and contact time from kinematic data. Copyright © 2014 Elsevier B.V. All rights reserved.
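
    A simple kinematic event rule of the kind this literature compares, not the paper's custom-designed algorithm, is to place footstrike at the local minimum of a foot marker's vertical trajectory, i.e. where its vertical velocity crosses from downward to upward. Sketch with synthetic marker data (sampling rate and stride period are illustrative):

```python
import math

def detect_footstrikes(z, dt):
    """Footstrike proxy: the sample where the marker's vertical velocity
    changes from negative to non-negative (local minimum of height).
    A simplified kinematic rule, not the paper's custom algorithm."""
    v = [(z[i + 1] - z[i - 1]) / (2.0 * dt) for i in range(1, len(z) - 1)]
    events = []
    for i in range(1, len(v)):
        if v[i - 1] < 0.0 <= v[i]:
            events.append((i + 1) * dt)   # +1 accounts for the diff stencil offset
    return events

dt = 1.0 / 200.0                              # 200 Hz motion capture (assumed)
t = [k * dt for k in range(400)]              # 2 s of data
z = [0.05 + 0.05 * math.cos(2.0 * math.pi * tk / 0.7) for tk in t]  # 0.7 s stride
strikes = detect_footstrikes(z, dt)           # minima near t = 0.35, 1.05, 1.75 s
```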

  7. Bayesian microsaccade detection

    PubMed Central

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
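
    The "default method" the abstract refers to belongs to the velocity-threshold family: flag samples whose smoothed eye speed exceeds a multiple of a robust estimate of the velocity spread. A 1-D sketch of that baseline (not BMD itself; the threshold multiplier and data are illustrative):

```python
import math
import statistics

def velocity_threshold_events(x, dt, lam=6.0):
    """Flag sample indices whose speed exceeds lam times a robust
    (median-based) estimate of the velocity spread. A sketch of the
    threshold-family detector that BMD is compared against."""
    v = [(x[i + 1] - x[i - 1]) / (2.0 * dt) for i in range(1, len(x) - 1)]
    med = statistics.median(v)
    sigma = math.sqrt(statistics.median([(vi - med) ** 2 for vi in v]))
    thr = lam * sigma
    # i + 1 maps a velocity index back to its position-sample index
    return [i + 1 for i, vi in enumerate(v) if abs(vi) > thr]

# Slow oscillatory drift plus one fast injected excursion around sample 100.
dt = 1.0 / 500.0
x = [1e-3 * math.sin(2.0 * math.pi * k / 50.0) for k in range(200)]
for k in range(100, 105):
    x[k] += 0.5 * (k - 99)        # high-velocity event
flagged = velocity_threshold_events(x, dt)   # indices clustered near 100
```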

  8. Orientation estimation algorithm applied to high-spin projectiles

    NASA Astrophysics Data System (ADS)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of a high-spin projectile's control system. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application-specific. This paper presents an orientation estimation algorithm specific to these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.

  9. Water-escape velocities in jumping blacktip sharks

    PubMed Central

    Brunnschweiler, Juerg M

    2005-01-01

    This paper describes the first determination of water-escape velocities in free-ranging sharks. Two approximations are used to estimate the final swimming speed at the moment of penetrating the water surface. Blacktip sharks were videotaped from below the surface and parameters were estimated by analysing the sequences frame by frame. Water-escape velocities averaged 6.3 m s⁻¹. These velocities for blacktip sharks seem accurate and are similar to estimates obtained for other shark species of similar size. PMID:16849197

  10. The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery

    2016-10-01

    This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The suggested algorithm is intended to estimate the orientation of a specific known 3D object based on its 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere, with the points distributed according to the geosphere principle. The gathered training image set is used for calculating descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
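
    The learning stage needs viewpoints spread evenly over a sphere. The paper uses the geosphere (subdivided icosahedron) principle; a Fibonacci lattice, sketched below, is a simpler construction with similar uniformity and is used here purely for illustration:

```python
import math

def fibonacci_sphere(n):
    """n roughly evenly spaced points on the unit sphere (Fibonacci lattice).
    Each point would define one virtual camera pose for rendering a
    training image of the 3D model."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle in radians
    points = []
    for k in range(n):
        z = 1.0 - 2.0 * (k + 0.5) / n           # even spacing in z
        r = math.sqrt(1.0 - z * z)
        theta = golden * k                       # spiral around the axis
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

views = fibonacci_sphere(162)   # 162 matches a twice-subdivided icosahedron
```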

  11. Evaluation of multiple tracer methods to estimate low groundwater flow velocities.

    PubMed

    Reimus, Paul W; Arnold, Bill W

    2017-04-01

    Four different tracer methods were used to estimate groundwater flow velocity at a multiple-well site in the saturated alluvium south of Yucca Mountain, Nevada: (1) two single-well tracer tests with different rest or "shut-in" periods, (2) a cross-hole tracer test with an extended flow interruption, (3) a comparison of two tracer decay curves in an injection borehole with and without pumping of a downgradient well, and (4) a natural-gradient tracer test. Such tracer methods are potentially very useful for estimating groundwater velocities when hydraulic gradients are flat (and hence uncertain) and also when water level and hydraulic conductivity data are sparse, both of which were the case at this test location. The purpose of the study was to evaluate the first three methods for their ability to provide reasonable estimates of relatively low groundwater flow velocities in such low-hydraulic-gradient environments. The natural-gradient method is generally considered to be the most robust and direct method, so it was used to provide a "ground truth" velocity estimate. However, this method usually requires several wells, so it is often not practical in systems with large depths to groundwater and correspondingly high well installation costs. The fact that a successful natural gradient test was conducted at the test location offered a unique opportunity to compare the flow velocity estimates obtained by the more easily deployed and lower risk methods with the ground-truth natural-gradient method. The groundwater flow velocity estimates from the four methods agreed very well with each other, suggesting that the first three methods all provided reasonably good estimates of groundwater flow velocity at the site. The advantages and disadvantages of the different methods, as well as some of the uncertainties associated with them are discussed. Published by Elsevier B.V.

  12. The use of the multiwavelet transform for the estimation of surface wave group and phase velocities and their associated uncertainties

    NASA Astrophysics Data System (ADS)

    Poppeliers, C.; Preston, L. A.

    2017-12-01

    Measurements of seismic surface wave dispersion can be used to infer the structure of the Earth's subsurface. Typically, to identify group and phase velocity, a series of narrow-band filters is applied to surface wave seismograms. Frequency-dependent arrival times of surface waves can then be identified from the resulting suite of narrow-band seismograms. The frequency-dependent velocity estimates are then inverted for subsurface velocity structure. However, this technique provides no way to estimate the uncertainty of the measured surface wave velocities, and consequently there is no estimate of uncertainty on, for example, tomographic results. For the work here, we explore using the multiwavelet transform (MWT) as an alternate method to estimate surface wave speeds. The MWT decomposes a signal similarly to the conventional filter bank technique, but with two primary advantages: 1) the time-frequency localization is optimized with regard to the time-frequency tradeoff, and 2) we can use the MWT to estimate the uncertainty of the resulting surface wave group and phase velocities. The uncertainties of the surface wave speed measurements can then be propagated into tomographic inversions to provide uncertainties of the resolved Earth structure. As proof of concept, we apply our technique to four seismic ambient noise correlograms that were collected from the University of Nevada Reno seismic network near the Nevada National Security Site. We invert the estimated group and phase velocities, as well as their uncertainties, for 1-D Earth structure for each station pair. These preliminary results generally agree with 1-D velocities obtained from inverting dispersion curves estimated with a conventional Gaussian filter bank.
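
    The conventional filter-bank baseline mentioned above can be sketched as: band-pass the trace with a Gaussian filter at each center frequency, form the envelope via the analytic signal, and take the envelope peak as the group arrival time (group velocity then follows as distance over arrival time). The naive O(n²) DFT below keeps the sketch dependency-free; an FFT would be used in practice, and the filter width α is illustrative.

```python
import cmath
import math

def group_arrival_time(signal, dt, f0, alpha=10.0):
    """Gaussian band-pass at f0, envelope from the analytic signal
    (positive frequencies only, doubled), arrival = envelope peak time."""
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    out = [0j] * n
    for k in range(1, n // 2):
        f = k / (n * dt)
        g = math.exp(-alpha * ((f - f0) / f0) ** 2)   # Gaussian filter
        out[k] = 2.0 * g * spec[k]                    # analytic-signal doubling
    env = [abs(sum(out[k] * cmath.exp(2j * math.pi * k * t / n)
                   for k in range(n)) / n) for t in range(n)]
    return env.index(max(env)) * dt

# Synthetic 1 Hz wave packet arriving at t = 10 s.
dt, n = 0.05, 400
sig = [math.exp(-((k * dt - 10.0) / 1.5) ** 2) * math.cos(2.0 * math.pi * k * dt)
       for k in range(n)]
t_arr = group_arrival_time(sig, dt, 1.0)   # ~10 s; U = interstation distance / t_arr
```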

  13. India plate angular velocity and contemporary deformation rates from continuous GPS measurements from 1996 to 2015.

    PubMed

    Jade, Sridevi; Shrungeshwara, T S; Kumar, Kireet; Choudhury, Pallabee; Dumka, Rakesh K; Bhu, Harsh

    2017-09-12

    We estimate a new angular velocity for the India plate and contemporary deformation rates in the plate interior and along its seismically active margins from Global Positioning System (GPS) measurements from 1996 to 2015 at 70 continuous and 3 episodic stations. A new India-ITRF2008 angular velocity is estimated from 30 GPS sites, which include stations from western and eastern regions of the plate interior that were unrepresented or only sparsely sampled in previous studies. Our newly estimated India-ITRF2008 Euler pole is located significantly closer to the plate, with ~3% higher angular velocity than all previous estimates, and thus predicts more rapid variations in rates and directions along the plate boundaries. The 30 India plate GPS site velocities are well fit by the new angular velocity, with north and east RMS misfits of only 0.8 and 0.9 mm/yr, respectively. India-fixed velocities suggest approximately 1-2 mm/yr of intra-plate deformation that might be concentrated along regional dislocations and faults in Peninsular India, Kachchh and the Indo-Gangetic plain. Relative to our newly defined India plate frame of reference, the newly estimated velocities for 43 other GPS sites along the plate margins give insights into active deformation along India's seismically active northern and eastern boundaries.
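
    Given an Euler pole and rotation rate, a rigid plate predicts each site's surface velocity as v = ω × r. The sketch below uses a purely hypothetical pole and site, not the paper's India-ITRF2008 estimate:

```python
import math

def plate_velocity_mm_yr(pole_lat, pole_lon, omega_deg_myr, lat, lon, R=6371e3):
    """East and north components (mm/yr) of the rigid-plate velocity
    v = omega x r for a site on a spherical Earth."""
    w = math.radians(omega_deg_myr) / 1e6            # rad/yr
    plat, plon = math.radians(pole_lat), math.radians(pole_lon)
    slat, slon = math.radians(lat), math.radians(lon)
    # angular velocity and site position in ECEF coordinates
    wx, wy, wz = (w * math.cos(plat) * math.cos(plon),
                  w * math.cos(plat) * math.sin(plon),
                  w * math.sin(plat))
    rx, ry, rz = (R * math.cos(slat) * math.cos(slon),
                  R * math.cos(slat) * math.sin(slon),
                  R * math.sin(slat))
    vx, vy, vz = wy * rz - wz * ry, wz * rx - wx * rz, wx * ry - wy * rx
    # rotate the ECEF velocity into the local east/north frame
    ve = -math.sin(slon) * vx + math.cos(slon) * vy
    vn = (-math.sin(slat) * math.cos(slon) * vx
          - math.sin(slat) * math.sin(slon) * vy + math.cos(slat) * vz)
    return ve * 1000.0, vn * 1000.0

# Hypothetical pole (50N, 8W, 0.5 deg/Myr) and a site in southern India.
ve, vn = plate_velocity_mm_yr(50.0, -8.0, 0.5, 13.0, 77.5)
```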

  14. Robust Control Algorithm for a Two Cart System and an Inverted Pendulum

    NASA Technical Reports Server (NTRS)

    Wilson, Chris L.; Capo-Lugo, Pedro

    2011-01-01

    The Rectilinear Control System can be used to simulate a launch vehicle during liftoff. Several control schemes have been developed that can control different dynamic models of the rectilinear plant. A robust control algorithm was developed that can control a pendulum so as to maintain an inverted position. A fluid slosh tank will be attached to the pendulum in order to test robustness in the presence of unknown slosh characteristics. The rectilinear plant consists of a DC motor and three carts mounted in series. Each cart's weight can be adjusted with brass masses, and the carts can be coupled with springs. The pendulum is mounted on the first cart, and an adjustable air damper can be attached to the third cart if desired. Each cart and the pendulum have a quadrature encoder to determine position. Full state feedback was implemented in order to develop the control algorithm, along with a state estimator to determine the velocity states of the system. A MATLAB program was used to convert the state-space matrices from continuous time to discrete time. This program also used a desired phase margin and damping ratio to determine the feedback gain matrix that would be used in the LabVIEW program. This experiment will allow engineers to gain a better understanding of liquid propellant slosh dynamics, therefore enabling them to develop more robust control algorithms for launch vehicle systems.

  15. Reality Check Algorithm for Complex Sources in Early Warning

    NASA Astrophysics Data System (ADS)

    Karakus, G.; Heaton, T. H.

    2013-12-01

    In almost all currently operating earthquake early warning (EEW) systems, presently available seismic data are used to predict future shaking. In most cases, location and magnitude are estimated. We are developing an algorithm to test the goodness of that prediction in real time. We monitor envelopes of acceleration, velocity, and displacement; if they deviate significantly from the envelopes predicted by Cua's envelope GMPEs, then we declare an overfit (perhaps a false alarm) or an underfit (possibly a larger event has just occurred). This algorithm is designed to provide a robust measure and to work as quickly as possible in real time. We monitor the logarithm of the ratio between the envelopes of the ongoing observed event and the envelopes of the channels of ground motion predicted by the Virtual Seismologist (VS) (Cua, G. and Heaton, T.). We then recursively filter this result with a simple running median (de-spiking operator) to minimize the effect of a single high value. Depending on the filtered value, we make a decision: if it is large enough (e.g., >1), we declare that a larger event is in progress; if it is small enough (e.g., <-1), we declare a false alarm. We design the algorithm to work over a wide range of amplitude scales; that is, it should work for both small and large events.
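
    The log-ratio and de-spiking steps can be sketched as below. The window length is illustrative, and the ±1 thresholds follow the e.g. values in the abstract:

```python
import math
import statistics

def despiked_log_ratio(observed_env, predicted_env, window=5):
    """log10(observed/predicted) envelope ratio, then a running median so
    a single spiky sample cannot trigger a decision by itself."""
    r = [math.log10(o / p) for o, p in zip(observed_env, predicted_env)]
    half = window // 2
    return [statistics.median(r[max(0, i - half):i + half + 1])
            for i in range(len(r))]

pred = [1.0] * 20                  # predicted envelope (flat, illustrative)
obs = [1.0] * 20
obs[7] = 50.0                      # a one-sample telemetry spike
smooth = despiked_log_ratio(obs, pred)
underfit = any(v > 1.0 for v in smooth)    # spike alone does not flag a larger event
overfit = any(v < -1.0 for v in smooth)
```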

  16. Joint inversion of phase velocity dispersion and H/V ratio curves from seismic noise recordings using a genetic algorithm, considering higher modes

    NASA Astrophysics Data System (ADS)

    Parolai, S.; Picozzi, M.; Richwalski, S. M.; Milkereit, C.

    2005-01-01

    Seismic noise contains information on the local S-wave velocity structure, which can be obtained from the phase velocity dispersion curve by means of array measurements. The H/V ratio from single stations also contains information on the average S-wave velocity and the total thickness of the sedimentary cover. A joint inversion of the two data sets therefore might allow constraining the final model well. We propose a scheme that does not require a starting model because it uses a genetic algorithm. Furthermore, we tested two cost functions suitable for our data set, using a-priori and data-driven weighting; the latter was more appropriate in our case. In addition, we consider the influence of higher modes on the data sets and use a suitable forward modeling procedure. Using real data, we show that the joint inversion indeed fits the observed data better than using the dispersion curve alone.

  17. Seismic Tomography of the Sacramento -- San Joaquin River Delta: Joint P-wave/Gravity and Ambient Noise Methods

    NASA Astrophysics Data System (ADS)

    Teel, Alexander C.

    The Sacramento -- San Joaquin River Delta (SSJRD) is an area that has been identified as having high seismic hazard but has resolution gaps in the seismic velocity models of the area due to a scarcity of local seismic stations and earthquakes. I present new three-dimensional (3D) P-wave velocity (Vp) and S-wave velocity (Vs) models for the SSJRD which fill in the sampling gaps of previous studies. I have created a new 3D seismic velocity model for the SSJRD, addressing an identified need for higher resolution velocity models in the region, using a new joint gravity/body-wave tomography algorithm. I am able to fit gravity and arrival-time residuals jointly using an empirical density-velocity relationship to take advantage of existing gravity data in the region to help fill in the resolution gaps of previous velocity models in the area. I find that the method enhances the ability to resolve the relief of basin structure relative to seismic-only tomography at this location. I find the depth to the basement to be the greatest in the northwest portion of the SSJRD and that there is a plateau in the basement structure beneath the southeast portion of the SSJRD. From my findings I infer that the SSJRD may be prone to focusing effects and basin amplification of ground motion. A 3D, Vs model for the SSJRD and surrounding area was created using ambient noise tomography. The empirical Green's functions are in good agreement with published cross-correlations and match earthquake waveforms sharing similar paths. The group velocity and shear velocity maps are in good agreement with published regional scale models. The new model maps velocity values on a local scale and successfully recovers the basin structure beneath the Delta. From this Vs model I find the maximum depth of the basin to reach approximately 15 km with the Great Valley Ophiolite body rising to a depth of 10 km east of the SSJRD. 
I consider the basement-depth estimates from the Vp model to be more robust than those from the Vs model.
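
    The joint gravity/body-wave tomography couples the two data types through an empirical density-velocity relationship. A widely used example of such a relation is Gardner's rule, sketched below; the dissertation's actual empirical relation may differ, so treat this only as an illustration of the coupling.

```python
def gardner_density(vp_m_s):
    """Gardner's empirical relation: rho [g/cc] = 0.31 * Vp^0.25, Vp in m/s.
    Relations of this form let a tomography update density (hence gravity
    residuals) from each P-wave velocity perturbation."""
    return 0.31 * vp_m_s ** 0.25

rho = gardner_density(3000.0)   # ~2.3 g/cc for a 3 km/s sediment
```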

  18. Estimation of velocity structure around a natural gas reservoir at Yufutsu, Japan, by microtremor survey

    NASA Astrophysics Data System (ADS)

    Shiraishi, H.; Asanuma, H.; Tezuka, K.

    2010-12-01

    Seismic reflection surveys have been commonly used for exploration and time-lapse monitoring of oil/gas resources. Seismic reflection images typically have reasonable reliability and resolution for commercial production. However, cost considerations sometimes preclude deployment of a widely distributed array or of repeated surveys in cases of time-lapse monitoring or exploration of small-scale reservoirs. Hence, technologies to estimate structures and physical properties around the reservoir at limited cost would be useful. The microtremor survey method (MSM) can realize long-term monitoring of a reservoir at low cost, because the technique is passive and the minimum number of monitoring stations is four. MSM has mainly been used for earthquake disaster prevention, because the S-wave velocity structure is directly estimated from the velocity dispersion of the Rayleigh wave. The authors experimentally investigated the feasibility of the MSM for exploration of an oil/gas reservoir. The field measurement was carried out around the natural gas reservoir at Yufutsu, Hokkaido, Japan. Four types of arrays with radii of 30 m, 100 m, 300 m and 600 m were deployed in each area. Dispersion curves of the Rayleigh wave velocity were estimated from the observed microtremors, and S-wave velocity structures were estimated by an inverse analysis of the dispersion curves with a genetic algorithm (GA). The estimated velocity structures showed good consistency with the one-dimensional velocity structure from previous reflection surveys down to 4-5 km. We also found from the field experiment that a 40-min record is sufficient to estimate the velocity structure even when the seismometers are deployed along roads with heavy traffic.

  19. Development of a Single Station 6C-Approach for Array Analysis and Microzonation: Using Vertical Rotation Rate to Estimate Love-Wave Disperion Curves and Direction Finding

    NASA Astrophysics Data System (ADS)

    Wassermann, J. M.; Wietek, A.; Hadziioannou, C.; Igel, H.

    2014-12-01

    Microzonation, i.e. the estimation of (shear) wave velocity profiles of the upper few hundred meters on dense 2D surface grids, is one of the key methods for understanding the variation in seismic hazard caused by ground shaking events. In this presentation we introduce a novel method for estimating the Love-wave phase velocity dispersion by using ambient noise recordings. We use the vertical component of rotational motions inherently present in ambient noise and its well-established relation to simultaneous recordings of transverse acceleration. In this relation the frequency-dependent phase velocity of a plane SH (or Love)-type wave acts as a proportionality factor between the anti-correlated amplitudes of the two measures. In a first step we used synthetic data sets of increasing complexity to evaluate the proposed technique and the developed algorithm, which extracts the direction and amplitude of the incoming ambient noise wavefield measured at a single site. Since reliable weak-motion rotational sensors are not yet readily available, we apply array-derived rotation measurements in order to test our method. We next use the technique to analyze different real data sets of ambient noise measurements as well as seismic recordings at active volcanoes, and compare these results with findings of the Spatial AutoCorrelation technique applied to the same data set. We demonstrate that the newly developed technique shows results comparable to more classical, strictly array-based methods. Furthermore, we show that as soon as portable weak-motion rotational sensors are available, a single 6C-station approach will be feasible, not only for microzonation but also for general array applications, with performance comparable to more classical techniques. An important advantage, especially in urban environments, is that this approach drastically reduces the number of seismic stations needed.
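
    For a plane SH/Love wave, the transverse acceleration and the vertical rotation rate are proportional, a_t(t) = -2c · Ω̇_z(t), so the phase velocity can be read off their amplitude ratio. A minimal sketch on noise-free synthetic traces (geometry and sign conventions simplified):

```python
import math

def love_phase_velocity(transverse_acc, rot_rate_z):
    """Phase velocity from the RMS amplitude ratio of transverse
    acceleration to vertical rotation rate: c = |a_t| / (2 |rot_z|)."""
    num = math.sqrt(sum(a * a for a in transverse_acc))
    den = math.sqrt(sum(w * w for w in rot_rate_z))
    return num / (2.0 * den)

# Synthetic plane wave with c = 400 m/s at 2 Hz.
c_true = 400.0
t = [k * 0.01 for k in range(500)]
acc = [math.sin(2.0 * math.pi * 2.0 * tk) for tk in t]
rot = [-a / (2.0 * c_true) for a in acc]     # anti-correlated, scaled by 1/(2c)
c_est = love_phase_velocity(acc, rot)        # ~400 m/s
```

    In practice the ratio is evaluated per frequency band to recover the dispersion curve c(f).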

  20. UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2017-01-01

    This paper documents a study that drove the development of a mathematical expression in the minimum operational performance standards (MOPS) of detect-and-avoid (DAA) systems for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance could be provided during recovery of well clear separation with a non-cooperative VFR aircraft in addition to horizontal maneuver guidance. Although suppressing vertical maneuver guidance in these situations increased the minimum horizontal separation from 500 to 800 feet, the maximum severity of loss of well clear increased in about 35 of the encounters compared to when a vertical maneuver was preferred and allowed. Additionally, analysis of individual cases led to the identification of a class of encounter where vertical rate error had a large effect on horizontal maneuvers due to the difficulty of making the correct left-right turn decision: crossing conflict with intruder changing altitude. These results supported allowing vertical maneuvers when UAS vertical performance exceeds the relative vertical position and velocity accuracy of the DAA tracker given the current velocity of the UAS and the relative vertical position and velocity estimated by the DAA tracker. Looking ahead, these results indicate a need to improve guidance algorithms by utilizing maneuver stability and near mid-air collision risk when determining maneuver guidance to regain well clear separation.

  1. Simulations of Dissipative Circular Restricted Three-body Problems Using the Velocity-scaling Correction Method

    NASA Astrophysics Data System (ADS)

    Wang, Shoucheng; Huang, Guoqing; Wu, Xin

    2018-02-01

    In this paper, we survey the effect of dissipative forces including radiation pressure, Poynting–Robertson drag, and solar wind drag on the motion of dust grains with negligible mass, which are subjected to the gravities of the Sun and Jupiter moving in circular orbits. The effect of the dissipative parameter on the locations of five Lagrangian equilibrium points is estimated analytically. The instability of the triangular equilibrium point L4 caused by the drag forces is also shown analytically. In this case, the Jacobi constant varies with time, whereas its integral invariant relation still provides a probability for the applicability of the conventional fourth-order Runge–Kutta algorithm combined with the velocity scaling manifold correction scheme. Consequently, the velocity-only correction method significantly suppresses the effects of artificial dissipation and a rapid increase in trajectory errors caused by the uncorrected one. The stability time of an orbit, regardless of whether it is chaotic or not in the conservative problem, is apparently longer in the corrected case than in the uncorrected case when the dissipative forces are included. Although the artificial dissipation is ruled out, the drag dissipation leads to an escape of grains. Numerical evidence also demonstrates that more orbits near the triangular equilibrium point L4 escape as the integration time increases.
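
    The velocity-scaling manifold correction mentioned above can be illustrated in the conservative case: after each integration step, only the velocity components are rescaled so the Jacobi constant returns to its reference value. A minimal sketch; the state values are illustrative and the mass ratio is the approximate Sun-Jupiter value, not parameters from the paper.

```python
import math

MU = 9.537e-4  # approximate Sun-Jupiter mass ratio (illustrative)

def omega(x, y):
    """Effective potential of the planar circular restricted three-body problem."""
    r1 = math.hypot(x + MU, y)
    r2 = math.hypot(x - 1.0 + MU, y)
    return 0.5 * (x * x + y * y) + (1.0 - MU) / r1 + MU / r2

def jacobi(state):
    x, y, vx, vy = state
    return 2.0 * omega(x, y) - (vx * vx + vy * vy)

def velocity_scaling(state, c0):
    """Rescale only the velocity so the Jacobi constant returns to c0."""
    x, y, vx, vy = state
    v2 = vx * vx + vy * vy
    s = math.sqrt(max(2.0 * omega(x, y) - c0, 0.0) / v2)
    return (x, y, s * vx, s * vy)

# A state whose velocity has drifted numerically relative to the reference:
c0 = jacobi((0.5, 0.1, 0.4, 0.3))
drifted = (0.5, 0.1, 0.4001, 0.3001)
corrected = velocity_scaling(drifted, c0)
print(abs(jacobi(corrected) - c0) < 1e-12)  # -> True
```

    In the dissipative problem the reference value itself evolves via the integral invariant relation, but the scaling step has this same form.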

  2. Online Wavelet Complementary Velocity Estimator.

    PubMed

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we propose a new online Wavelet Complementary Velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity resolutions, acquired from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, by feeding a fixed moving-horizon window into the wavelet filter. Because wavelet filters are used, the method can be implemented in parallel. In this way the velocity is estimated numerically without the high noise of differentiators or the drifting bias of integrators, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. The method also allows velocity sensors to be built with fewer mechanically moving parts, making it suitable for fast miniature structures. We compare this method with Kalman and Butterworth filters in terms of stability and delay, and benchmark them by long-time integration of the velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
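
    The complementary idea, trusting differentiated position at low frequencies and integrated acceleration at high frequencies, can be sketched with a simple first-order blend standing in for the paper's wavelet filter banks. The gain alpha and the constant-acceleration test signal are illustrative assumptions, not the WCE itself.

```python
def complementary_velocity(position, accel, dt, alpha=0.98):
    """Blend differentiated position (trusted at low frequency) with
    integrated acceleration (trusted at high frequency).

    A first-order complementary filter stands in for the wavelet filter
    banks of the WCE; alpha sets the crossover between the two sources."""
    v_est = 0.0
    out = []
    for k in range(1, len(position)):
        v_diff = (position[k] - position[k - 1]) / dt   # noisy differentiation
        v_int = v_est + accel[k] * dt                   # drift-prone integration
        v_est = alpha * v_int + (1.0 - alpha) * v_diff  # complementary blend
        out.append(v_est)
    return out

# Constant-acceleration motion: x = 0.5*t^2, a = 1 m/s^2, so v(t) = t.
dt = 0.01
pos = [0.5 * (k * dt) ** 2 for k in range(1001)]
acc = [1.0] * 1001
v = complementary_velocity(pos, acc, dt)
print(abs(v[-1] - 10.0) < 0.1)  # velocity at t = 10 s should be near 10 m/s -> True
```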

  3. Algorithm refinement for stochastic partial differential equations: II. Correlated systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.

    2005-08-10

    We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long-ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.

  4. A Strapdown Inertial Navigation System/Beidou/Doppler Velocity Log Integrated Navigation Algorithm Based on a Cubature Kalman Filter

    PubMed Central

    Gao, Wei; Zhang, Ya; Wang, Jianguo

    2014-01-01

    The integrated navigation system combining a strapdown inertial navigation system (SINS), a Beidou (BD) receiver and a Doppler velocity log (DVL) can be used in marine applications, because the redundant and complementary information from the different sensors can markedly improve system accuracy. However, multi-sensor asynchrony introduces errors into the system. Conventionally, this problem is handled by subdividing the sampling interval, which increases the computational complexity. In this paper, an integrated navigation algorithm based on a cubature Kalman filter (CKF) is proposed instead. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to describe the system more accurately. By taking multi-sensor asynchrony into account, a new sampling principle is proposed to make the best use of each sensor's information. Further, the CKF is introduced in the new algorithm to improve the filtering accuracy. The performance of the new algorithm has been examined through numerical simulations. The results show that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on the EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient. PMID:24434842
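
    The core of a CKF update, generating 2n equally weighted cubature points from the state mean and covariance and recombining them after propagation, can be sketched as follows. For simplicity the covariance is assumed diagonal so its matrix square root is elementwise; a full CKF uses a Cholesky factor, and the models here are not the paper's SINS/BD/DVL equations.

```python
import math

def cubature_points(mean, var_diag):
    """Generate the 2n cubature points of a CKF: the mean offset by
    +/- sqrt(n) times each column of the covariance square root
    (elementwise here because the covariance is assumed diagonal)."""
    n = len(mean)
    pts = []
    for i in range(n):
        step = math.sqrt(n * var_diag[i])
        for sign in (1.0, -1.0):
            p = list(mean)
            p[i] += sign * step
            pts.append(p)
    return pts

def cubature_mean(points):
    """Equal-weight (1/2n) recombination of propagated cubature points."""
    dim = len(points[0])
    return [sum(p[j] for p in points) / len(points) for j in range(dim)]

pts = cubature_points([1.0, 2.0], [0.04, 0.09])
print(len(pts))                                   # -> 4 (2n points for n = 2)
print([round(m, 9) for m in cubature_mean(pts)])  # -> [1.0, 2.0]
```

    In a full filter the points would be pushed through the nonlinear process and observation models before recombination.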

  5. An artificial neural network to discover hypervelocity stars: candidates in Gaia DR1/TGAS

    NASA Astrophysics Data System (ADS)

    Marchetti, T.; Rossi, E. M.; Kordopatis, G.; Brown, A. G. A.; Rimoldi, A.; Starkenburg, E.; Youakim, K.; Ashley, R.

    2017-09-01

    The paucity of hypervelocity stars (HVSs) known to date has severely hampered their potential to investigate the stellar population of the Galactic Centre and the Galactic potential. The first Gaia data release (DR1, 2016 September 14) gives an opportunity to increase the current sample. The challenge is the disparity between the expected number of HVSs and that of bound background stars. We have applied a novel data-mining algorithm based on machine learning techniques, an artificial neural network, to the Tycho-Gaia astrometric solution catalogue. With no pre-selection of data, we could immediately exclude ~99 per cent of the stars in the catalogue and find 80 candidates with more than 90 per cent predicted probability of being HVSs, based only on their position, proper motions and parallax. We have cross-checked our findings with other spectroscopic surveys, determining radial velocities for 30 and spectroscopic distances for five candidates. In addition, follow-up observations have been carried out at the Isaac Newton Telescope for 22 stars, for which we obtained radial velocities and distance estimates. We discover 14 stars with a total velocity in the Galactic rest frame >400 km s-1, and five of these have a probability of >50 per cent of being unbound from the Milky Way. Tracing back their orbits in different Galactic potential models, we find one possible unbound HVS with v ~ 520 km s-1, five bound HVSs and, notably, five runaway stars with median velocities between 400 and 780 km s-1. At the moment, uncertainties in the distance estimates and ages are too large to confirm the nature of our candidates by narrowing down their ejection location, and we await future Gaia releases to validate the quality of our sample. This test successfully demonstrates the feasibility of our new data-mining routine.

  6. The crustal thickness of Australia

    USGS Publications Warehouse

    Clitheroe, G.; Gudmundsson, O.; Kennett, B.L.N.

    2000-01-01

    We investigate the crustal structure of the Australian continent using the temporary broadband stations of the Skippy and Kimba projects and permanent broadband stations. We isolate near-receiver information, in the form of crustal P-to-S conversions, using the receiver function technique. Stacked receiver functions are inverted for S velocity structure using a Genetic Algorithm approach to Receiver Function Inversion (GARFI). From the resulting velocity models we are able to determine the Moho depth and to classify the width of the crust-mantle transition for 65 broadband stations. Using these results and 51 independent estimates of crustal thickness from refraction and reflection profiles, we present a new, improved, map of Moho depth for the Australian continent. The thinnest crust (25 km) occurs in the Archean Yilgarn Craton in Western Australia; the thickest crust (61 km) occurs in Proterozoic central Australia. The average crustal thickness is 38.8 km (standard deviation 6.2 km). Interpolation error estimates are made using kriging and fall into the range 2.5-7.0 km. We find generally good agreement between the depth to the seismologically defined Moho and xenolith-derived estimates of crustal thickness beneath northeastern Australia. However, beneath the Lachlan Fold Belt the estimates are not in agreement, and it is possible that the two techniques are mapping differing parts of a broad Moho transition zone. The Archean cratons of Western Australia appear to have remained largely stable since cratonization, reflected in only slight variation of Moho depth. The largely Proterozoic center of Australia shows relatively thicker crust overall as well as major Moho offsets. We see evidence of the margin of the contact between the Precambrian craton and the Tasman Orogen, referred to as the Tasman Line. Copyright 2000 by the American Geophysical Union.

  7. Deposition parameterizations for the Industrial Source Complex (ISC3) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wesely, Marvin L.; Doskey, Paul V.; Shannon, J. D.

    2002-06-01

    Improved algorithms have been developed to simulate the dry and wet deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex version 3 (ISC3) model system. The dry deposition velocities (concentrations divided by downward flux at a specified height) of the gaseous HAPs are modeled with algorithms adapted from existing dry deposition modules. The dry deposition velocities are described in a conventional resistance scheme, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. The role of soil moisture variations in affecting the uptake of gases through vegetative plant leaf stomata is assessed with the relative available soil moisture, which is estimated with a rudimentary budget of soil moisture content. Some of the procedures and equations are simplified to be commensurate with the type and extent of information on atmospheric and surface conditions available to the ISC3 model system user. For example, standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed, providing a means of evaluating the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves.
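
    The conventional resistance scheme referred to above treats deposition as resistances in series: aerodynamic (r_a), quasi-laminar boundary-layer (r_b) and bulk surface or canopy (r_c). A minimal sketch; the resistance values are illustrative, not taken from the report.

```python
def deposition_velocity(r_a, r_b, r_c):
    """Dry deposition velocity (m/s) from the series-resistance scheme:
    v_d = 1 / (r_a + r_b + r_c), resistances in s/m."""
    return 1.0 / (r_a + r_b + r_c)

# Illustrative daytime values over vegetation (s/m):
print(round(deposition_velocity(30.0, 10.0, 60.0), 4))  # -> 0.01 (m/s)
```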

  8. Impact source localisation in aerospace composite structures

    NASA Astrophysics Data System (ADS)

    De Simone, Mario Emanuele; Ciampa, Francesco; Boccardi, Salvatore; Meo, Michele

    2017-12-01

    The most commonly encountered type of damage in aircraft composite structures is caused by low-velocity impacts from foreign objects such as hail stones, tool drops and bird strikes. Often these events cause severe internal material damage that is difficult to detect and may lead to a significant reduction of the structure’s strength and fatigue life. For this reason there is an urgent need to develop structural health monitoring systems able to localise low-velocity impacts in both metallic and composite components as they occur. This article proposes a novel monitoring system for impact localisation in aluminium and composite structures, which is able to determine the impact location in real time without a priori knowledge of the mechanical properties of the material. The method relies on an optimal configuration of receiving sensors, which allows linearization of the well-known nonlinear systems of equations for the estimation of the impact location. The proposed algorithm is based on time-of-arrival identification of the elastic waves generated by the impact source using the Akaike Information Criterion. The approach was demonstrated successfully on both isotropic and orthotropic materials using a network of closely spaced surface-bonded piezoelectric transducers. The results obtained show the validity of the proposed algorithm, since the impact sources were detected with a high level of accuracy. The proposed impact detection system overcomes current limitations of other methods and can easily be retrofitted on existing aerospace structures, allowing timely detection of an impact event.
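
    The time-of-arrival step based on the Akaike Information Criterion can be sketched as a two-segment variance split: the pick is the sample index minimising AIC(k) = k*ln(var(x[:k])) + (N-k-1)*ln(var(x[k:])). The synthetic noise level and arrival below are illustrative.

```python
import math
import random
from statistics import pvariance

def aic_pick(x):
    """AIC onset picker: return the index that best splits the trace into
    a low-variance 'noise' segment and a high-variance 'signal' segment."""
    n = len(x)
    best_k, best_aic = None, float("inf")
    for k in range(2, n - 2):
        v1, v2 = pvariance(x[:k]), pvariance(x[k:])
        if v1 <= 0.0 or v2 <= 0.0:
            continue
        aic = k * math.log(v1) + (n - k - 1) * math.log(v2)
        if aic < best_aic:
            best_k, best_aic = k, aic
    return best_k

# Quiet noise followed by a strong oscillatory arrival at sample 200:
random.seed(1)
x = [0.01 * random.gauss(0.0, 1.0) for _ in range(200)]
x += [math.sin(0.3 * i) for i in range(200)]
print(aic_pick(x))  # picks an onset close to sample 200
```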

  9. A proposed method to estimate premorbid full scale intelligence quotient (FSIQ) for the Canadian Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) using demographic and combined estimation procedures.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H

    2007-11-01

    Establishing a comparison standard in neuropsychological assessment is crucial to determining change in function. There is no available method to estimate premorbid intellectual functioning for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV provides normative data for both American and Canadian children aged 6 to 16 years. This study developed regression algorithms as a proposed method to estimate full-scale intelligence quotient (FSIQ) for the Canadian WISC-IV. Participants were the Canadian WISC-IV standardization sample (n = 1,100). The sample was randomly divided into two groups (development and validation groups). The development group was used to generate regression algorithms; one algorithm included only demographics, and 11 combined demographic variables with WISC-IV subtest raw scores. The algorithms accounted for 18% to 70% of the variance in FSIQ (standard error of estimate, SEE = 8.6 to 14.2). Estimated FSIQ correlated significantly with actual FSIQ (r = .30 to .80), and the majority of individual FSIQ estimates were within +/-10 points of actual FSIQ. The demographic-only algorithm was less accurate than the algorithms combining demographic variables with subtest raw scores. The current algorithms yielded accurate estimates of current FSIQ for Canadian individuals aged 6 to 16 years. The potential application of the algorithms to estimate premorbid FSIQ is reviewed. While promising, clinical validation of the algorithms in a sample of children and/or adolescents with known neurological dysfunction is needed to establish them as a premorbid estimation procedure.

  10. Dense Velocity Field of Turkey

    NASA Astrophysics Data System (ADS)

    Ozener, H.; Aktug, B.; Dogru, A.; Tasci, L.

    2017-12-01

    While GNSS-based crustal deformation studies in Turkey date back to the early 1990s, a homogeneous velocity field utilizing all the available data is still missing. Regional studies employing different site distributions, observation plans, processing software and methodologies create not only reference frame variations but also heterogeneous stochastic models. While the reference frame effect between different velocity fields can easily be removed by estimating a set of rotations, homogenization of the stochastic models of the individual velocity fields requires a more detailed analysis. Using a rigorous Variance Component Estimation (VCE) methodology, we estimated the variance factors for each of the contributing velocity fields and combined them into a single homogeneous velocity field covering the whole of Turkey. Results show that variance factors between velocity fields, including survey-mode and continuous observations, can vary by a few orders of magnitude. In this study, we present the most complete velocity field in Turkey, rigorously combined from 20 individual velocity fields including the 146-station CORS network and 1072 stations in total. In addition, three GPS campaigns were performed along the North Anatolian Fault and in the Aegean Region to fill the gaps between existing velocity fields. The homogeneously combined new velocity field is nearly complete in terms of geographic coverage, and will serve as the basis for further analyses such as the estimation of deformation rates and the determination of slip rates across main fault zones.

  11. Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark

    2016-01-01

    This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing of August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is also applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
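
    The estimator structure, pressures fit against a surface pressure model by weighted least squares, can be illustrated with a deliberately simplified scalar version. The flight algorithm solves a nonlinear problem for several atmospheric states; the port pressure coefficients and weights below are made up for illustration.

```python
def wls_dynamic_pressure(pressures, cp, sigma):
    """Weighted least-squares estimate of dynamic pressure qbar from
    flush-port pressures modeled as p_i = Cp_i * qbar, with weights
    1/sigma_i^2 (closed form because the scalar model is linear)."""
    num = sum(p * c / (s * s) for p, c, s in zip(pressures, cp, sigma))
    den = sum(c * c / (s * s) for c, s in zip(cp, sigma))
    return num / den

cp = [1.0, 0.8, 0.5]          # assumed port pressure coefficients
q_true = 5000.0               # Pa
p = [c * q_true for c in cp]  # noise-free synthetic measurements
q_hat = wls_dynamic_pressure(p, cp, [10.0, 10.0, 20.0])
print(round(q_hat, 3))        # -> 5000.0
```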

  12. Imaging water velocity and volume fraction distributions in water continuous multiphase flows using inductive flow tomography and electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Meng, Yiqing; Lucas, Gary P.

    2017-05-01

    This paper presents the design and implementation of an inductive flow tomography (IFT) system, employing a multi-electrode electromagnetic flow meter (EMFM) and novel reconstruction techniques, for measuring the local water velocity distribution in water continuous single and multiphase flows. A series of experiments were carried out in vertical-upward and upward-inclined single phase water flows and ‘water continuous’ gas-water and oil-gas-water flows in which the velocity profiles ranged from axisymmetric (single phase and vertical-upward multiphase flows) to highly asymmetric (upward-inclined multiphase flows). Using potential difference measurements obtained from the electrode array of the EMFM, local axial velocity distributions of the continuous water phase were reconstructed using two different IFT reconstruction algorithms, denoted RT#1, which assumes that the overall water velocity profile comprises the sum of a series of polynomial velocity components, and RT#2, which is similar to RT#1 but assumes that the zeroth-order velocity component may be replaced by an axisymmetric ‘power law’ velocity distribution. During each experiment, measurement of the local water volume fraction distribution was also made using the well-established technique of electrical resistance tomography (ERT). By integrating the product of the local axial water velocity and the local water volume fraction over the cross section, an estimate of the water volumetric flow rate was made, which was compared with a reference measurement of the water volumetric flow rate. In vertical-upward flows RT#2 was found to give rise to water velocity profiles which are consistent with the previous literature, although the profiles obtained in the multiphase flows had relatively higher central velocity peaks than was observed for the single phase profiles. This observation was almost certainly a result of the transfer of axial momentum from the less dense dispersed phases to the water, which occurred preferentially at the pipe centre. For upward-inclined multiphase flows RT#1 was found to give rise to water velocity profiles which are more consistent with results in the previous literature than was the case for RT#2, which leads to the tentative conclusion that the upward-inclined multiphase flows investigated in the present study did not contain significant axisymmetric velocity components.
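
    The flow-rate estimate described above, integrating the product of local axial water velocity and local water volume fraction over the pipe cross section, can be sketched for the axisymmetric case. The power-law profile, pipe radius and uniform volume fraction below are illustrative assumptions.

```python
import math

def water_flow_rate(v_of_r, alpha_of_r, radius, n=1000):
    """Q = integral of v(r)*alpha(r) over the circular cross section,
    evaluated by midpoint integration over thin annular rings."""
    q, dr = 0.0, radius / n
    for i in range(n):
        r = (i + 0.5) * dr
        q += v_of_r(r) * alpha_of_r(r) * 2.0 * math.pi * r * dr
    return q

R = 0.04                                      # pipe radius (m)
v = lambda r: 1.2 * (1.0 - r / R) ** (1 / 7)  # 1/7th power-law water velocity (m/s)
alpha = lambda r: 0.8                         # uniform water volume fraction
q = water_flow_rate(v, alpha, R)
print(round(q, 5))  # -> 0.00394 (m^3/s)
```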

  13. Manifold absolute pressure estimation using neural network with hybrid training algorithm

    PubMed Central

    Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated from the measurement of the manifold absolute pressure (MAP) sensor located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results of 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value. PMID:29190779

  14. Robust rotational-velocity-Verlet integration methods.

    PubMed

    Rozmanov, Dmitri; Kusalik, Peter G

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in the velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions but is not quaternion specific and can easily be adapted to any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrate performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
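
    For reference, the velocity-Verlet scheme these rotational methods are cast in has the familiar kick-drift-kick translational form; a minimal sketch on a harmonic oscillator (not the rigid-body quaternion case treated in the paper):

```python
def velocity_verlet(x, v, force, dt, steps, m=1.0):
    """Standard translational velocity-Verlet (kick-drift-kick)."""
    a = force(x) / m
    for _ in range(steps):
        v += 0.5 * a * dt  # half kick
        x += v * dt        # drift
        a = force(x) / m
        v += 0.5 * a * dt  # half kick
    return x, v

# Harmonic oscillator with k = 1: total energy should stay near 0.5.
k = 1.0
x, v = velocity_verlet(1.0, 0.0, lambda q: -k * q, 0.01, 10000)
energy = 0.5 * v * v + 0.5 * k * x * x
print(abs(energy - 0.5) < 1e-4)  # -> True
```

    The rotational versions replace the drift with a quaternion update while keeping this symmetric splitting.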

  15. Robust rotational-velocity-Verlet integration methods

    NASA Astrophysics Data System (ADS)

    Rozmanov, Dmitri; Kusalik, Peter G.

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in the velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions but is not quaternion specific and can easily be adapted to any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrate performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.

  16. Viterbi sparse spike detection and a compositional origin to ultralow-velocity zones

    NASA Astrophysics Data System (ADS)

    Brown, Samuel Paul

    Accurate interpretation of seismic travel times and amplitudes at both the exploration and global scales is complicated by the band-limited nature of seismic data. We present a stochastic method, Viterbi sparse spike detection (VSSD), to reduce a seismic waveform to its most probable constituent spike train. Model waveforms are constructed from a set of candidate spike trains convolved with a source wavelet estimate. For each model waveform, a profile hidden Markov model (HMM) is constructed to represent the waveform as a stochastic generative model with a linear topology corresponding to a sequence of samples. The Viterbi algorithm is employed to simultaneously find the optimal nonlinear alignment between a model waveform and the seismic data, and to assign a score to each candidate spike train. The most probable travel times and amplitudes are inferred from the alignments of the highest-scoring models. Our analyses show that the method can resolve closely spaced arrivals below traditional resolution limits and that travel time estimates are robust in the presence of random noise and source wavelet errors. We applied the VSSD method to constrain the elastic properties of an ultralow-velocity zone (ULVZ) at the core-mantle boundary beneath the Coral Sea. We analyzed vertical-component short-period ScP waveforms for 16 earthquakes occurring in the Tonga-Fiji trench recorded at the Alice Springs Array (ASAR) in central Australia. These waveforms show strong pre- and post-cursory seismic arrivals consistent with ULVZ layering. We used the VSSD method to measure differential travel times and amplitudes of the post-cursor arrival ScSP and the precursor arrival SPcP relative to ScP. We compared our measurements to a database of approximately 340,000 synthetic seismograms, finding that these data are best fit by a ULVZ model with an S-wave velocity reduction of 24%, a P-wave velocity reduction of 23%, a thickness of 8.5 km, and a density increase of 6%. We simultaneously constrain the P- and S-wave velocity reductions to a 1:1 ratio inside this ULVZ. This 1:1 ratio is not consistent with a partial-melt origin for ULVZs. Rather, we demonstrate that a compositional origin is more likely.
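
    The dynamic program at the heart of the VSSD scoring step is the standard Viterbi recursion; a minimal two-state sketch with made-up 'quiet'/'arrival' probabilities (not the paper's profile HMM over waveform samples):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Textbook Viterbi decoder: most probable hidden-state path."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("quiet", "arrival")
start = {"quiet": 0.9, "arrival": 0.1}
trans = {"quiet": {"quiet": 0.8, "arrival": 0.2},
         "arrival": {"quiet": 0.1, "arrival": 0.9}}
emit = {"quiet": {"lo": 0.9, "hi": 0.1},
        "arrival": {"lo": 0.2, "hi": 0.8}}
decoded = viterbi(["lo", "lo", "hi", "hi"], states, start, trans, emit)
print(decoded)  # -> ['quiet', 'quiet', 'arrival', 'arrival']
```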

  17. Measuring global monopole velocities, one by one

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl

    We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.

  18. The impact of groundwater velocity fields on streamlines in an aquifer system with a discontinuous aquitard (Inner Mongolia, China)

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Zhao, Yingwang; Xu, Hua

    2018-04-01

    Many numerical methods that simulate groundwater flow, particularly the continuous Galerkin finite element method, do not produce velocity information directly. Many algorithms have been proposed to improve the accuracy of velocity fields computed from hydraulic potentials. The differences in the streamlines generated from velocity fields obtained using different algorithms are presented in this report. The superconvergence method employed by FEFLOW, a popular commercial code, and some dual-mesh methods proposed in recent years are selected for comparison. Applications that use streamlines to depict hydrogeologic conditions are examined, and errors in streamlines are shown to lead to notable errors in boundary conditions, the locations of material interfaces, fluxes and conductivities. Furthermore, the effects of the procedures used in these two types of methods, including velocity integration and local conservation, are analyzed. The method of interpolating velocities across edges using fluxes is shown to be able to eliminate errors associated with refraction points that are not located along material interfaces and streamline ends at no-flow boundaries. Local conservation is shown to be a crucial property of velocity fields and can result in more accurate streamline densities. A case study involving both three-dimensional and two-dimensional cross-sectional models of a coal mine in Inner Mongolia, China, is used to support the conclusions presented.
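
    Streamline generation from a computed velocity field can be sketched with a simple explicit-Euler particle tracer; production codes such as FEFLOW use cell-based, locally conservative schemes, and the uniform field below is purely illustrative.

```python
def trace_streamline(velocity, x0, y0, h=0.01, steps=500):
    """Trace a streamline through a steady 2-D velocity field by
    explicit Euler steps of size h along the local velocity."""
    path = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + h * vx, y + h * vy
        path.append((x, y))
    return path

# Uniform flow at 45 degrees: the streamline is a straight diagonal.
path = trace_streamline(lambda x, y: (1.0, 1.0), 0.0, 0.0)
x_end, y_end = path[-1]
print(round(x_end, 6), round(y_end, 6))  # -> 5.0 5.0
```

    Errors in the underlying velocity field (e.g. non-conservative cells) accumulate along exactly this kind of integration, which is why streamline densities are so sensitive to the reconstruction algorithm.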

  19. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
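
    A velocity-area measurement of the kind the IVE applies to is commonly computed with the midsection method, in which each vertical's velocity-depth product is weighted by the width extending halfway to its neighbours. A minimal sketch with made-up station data:

```python
def midsection_discharge(stations, depths, velocities):
    """Velocity-area discharge (m^3/s) by the midsection method:
    stations are lateral positions (m), with depth (m) and mean
    point velocity (m/s) at each vertical."""
    q = 0.0
    n = len(stations)
    for i in range(n):
        left = stations[0] if i == 0 else (stations[i - 1] + stations[i]) / 2.0
        right = stations[-1] if i == n - 1 else (stations[i] + stations[i + 1]) / 2.0
        q += velocities[i] * depths[i] * (right - left)
    return q

# Three verticals across a 4 m channel:
q = midsection_discharge([0.0, 2.0, 4.0], [0.5, 1.0, 0.5], [0.3, 0.6, 0.3])
print(round(q, 6))  # -> 1.5
```

    The IVE would then use the scatter among these per-vertical observations to estimate the measurement's uncertainty.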

  20. Magnetometer-only attitude and angular velocity filtering estimation for attitude changing spacecraft

    NASA Astrophysics Data System (ADS)

    Ma, Hongliang; Xu, Shijie

    2014-09-01

    This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft during attitude changes (including fast and large angular attitude maneuvers, rapid spinning, or an uncontrolled tumble). In this magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are introduced directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed using the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF are both improved; and (3) the IRTSF remains observable for any initial state estimation error vector.
