Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V
2014-09-01
Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.
CLFs-based optimization control for a class of constrained visual servoing systems.
Song, Xiulan; Miaomiao, Fu
2017-03-01
In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and of changes in target depth, visual servo control laws (i.e. translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal value of those adjustable parameters, which yields an optimized control law that satisfies the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that there is no requirement to compute online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
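The Fibonacci method mentioned above is a standard derivative-free line search over a bounded interval. A minimal sketch of how a scalar control gain could be tuned online against a constraint-penalized cost is given below; the cost function, interval and gains are purely illustrative assumptions, not the authors' formulation.

```python
def fibonacci_search(cost, a, b, n=20):
    """Minimize a unimodal scalar cost on [a, b] using n Fibonacci interval reductions."""
    F = [1, 1]
    while len(F) < n + 2:
        F.append(F[-1] + F[-2])
    x1 = a + F[n - 1] / F[n + 1] * (b - a)
    x2 = a + F[n] / F[n + 1] * (b - a)
    f1, f2 = cost(x1), cost(x2)
    for k in range(1, n):
        if f1 < f2:                                     # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - k - 1] / F[n - k + 1] * (b - a)
            f1 = cost(x1)
        else:                                           # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - k] / F[n - k + 1] * (b - a)
            f2 = cost(x2)
    return 0.5 * (a + b)

# Toy cost: residual feature error after one step with gain lam, plus a penalty
# when the commanded translation speed would exceed a bound (hypothetical numbers).
def cost(lam, err=0.2, depth=1.0, v_max=0.1):
    v = lam * err / depth
    return abs((1.0 - lam) * err) + 1e3 * max(0.0, abs(v) - v_max)

lam_opt = fibonacci_search(cost, 0.0, 2.0)              # ~0.5 for these numbers
```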
NASA Astrophysics Data System (ADS)
Pomares, Jorge; Felicetti, Leonard; Pérez, Javier; Emami, M. Reza
2018-02-01
An image-based servo controller for the guidance of a spacecraft during non-cooperative rendezvous is presented in this paper. The controller directly utilizes the visual features from image frames of a target spacecraft for computing both attitude and orbital maneuvers concurrently. The utilization of adaptive optics, such as zooming cameras, is also addressed through developing an invariant-image servo controller. The controller allows for performing rendezvous maneuvers independently from the adjustments of the camera focal length, improving the performance and versatility of maneuvers. The stability of the proposed control scheme is proven analytically in the invariant space, and its viability is explored through numerical simulations.
Research on flight stability performance of rotor aircraft based on visual servo control method
NASA Astrophysics Data System (ADS)
Yu, Yanan; Chen, Jing
2016-11-01
A control method based on visual servo feedback is proposed to improve the attitude of a quad-rotor aircraft and to enhance its flight stability. Ground target images are obtained by a visual platform fixed on the aircraft. The scale-invariant feature transform (SIFT) algorithm is used to extract image feature information. Based on this image feature analysis, fast motion estimation is performed and used as an input signal to a PID flight control system to realize real-time attitude adjustment during flight. Imaging tests and simulation results show that the proposed method performs well in terms of flight stability compensation and attitude adjustment. The response speed and control precision meet the requirements of actual use, and the method is able to reduce or even eliminate the influence of environmental disturbance. The proposed method therefore has research value for addressing the aircraft disturbance-rejection problem.
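As a rough sketch of the pipeline described (SIFT matching between consecutive ground images to estimate apparent motion, which then drives a PID loop), the fragment below uses OpenCV, assuming a build with SIFT available; the median-displacement motion estimate, the gains, and the mapping to attitude corrections are illustrative assumptions rather than the authors' design.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def image_drift(prev_gray, cur_gray):
    """Estimate the apparent image translation (pixels) between two frames from
    SIFT matches, using the median displacement as a robust motion estimate."""
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(cur_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # Lowe ratio test
    if not good:
        return np.zeros(2)
    disp = np.array([np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt)
                     for m in good])
    return np.median(disp, axis=0)

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, 0.0

    def step(self, error, dt):
        self.integral += error * dt
        deriv = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# One PID per image axis; the outputs would be blended into the roll/pitch commands
# of the flight controller (the blending is platform specific and omitted here).
pid_x, pid_y = PID(0.4, 0.02, 0.1), PID(0.4, 0.02, 0.1)
```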
Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Warren
2004-06-01
There is significant motivation to provide robotic systems with improved autonomy as a means to significantly accelerate deactivation and decommissioning (D&D) operations while also reducing the associated costs, removing human operators from hazardous environments, and reducing the required burden and skill of human operators. To achieve improved autonomy, this project focused on the basic science challenges leading to the development of visual servo controllers. The challenge in developing these controllers is that a camera provides 2-dimensional image information about the 3-dimensional Euclidean-space through a perspective (range dependent) projection that can be corrupted by uncertainty in the camera calibration matrix and by disturbances such as nonlinear radial distortion. Disturbances in this relationship (i.e., corruption in the sensor information) propagate erroneous information to the feedback controller of the robot, leading to potentially unpredictable task execution. This research project focused on the development of a visual servo control methodology that targets compensating for disturbances in the camera model (i.e., camera calibration and the recovery of range information) as a means to achieve predictable response by the robotic system operating in unstructured environments. The fundamental idea is to use nonlinear Lyapunov-based techniques along with photogrammetry methods to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current robotic applications. The outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature recognition and extraction to enable robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). The developed methodology has been reported in numerous peer-reviewed publications and the performance and enabling capabilities of the resulting visual servo control modules have been demonstrated on mobile robot and robot manipulator platforms.
Weighted feature selection criteria for visual servoing of a telerobot
NASA Technical Reports Server (NTRS)
Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.
1989-01-01
Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
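A minimal sketch of the weighted-criteria idea: score every candidate subset of image features by a weighted sum of a recognition term and a control term, and keep the best subset. Here the control term is taken as the inverse condition number of the stacked image Jacobian, and the recognition scores, weights and subset size are illustrative assumptions, not the criteria used in the paper.

```python
import itertools
import numpy as np

def select_features(recognition_scores, feature_jacobians, k=3, w_recog=0.5, w_ctrl=0.5):
    """Exhaustively score k-feature subsets with a weighted recognition + control criterion.

    recognition_scores : per-feature score in [0, 1] (e.g. uniqueness / detectability)
    feature_jacobians  : per-feature 2x6 blocks of the image Jacobian
    """
    best_subset, best_score = None, -np.inf
    for subset in itertools.combinations(range(len(recognition_scores)), k):
        recog = np.mean([recognition_scores[i] for i in subset])
        J = np.vstack([feature_jacobians[i] for i in subset])   # stacked image Jacobian
        ctrl = 1.0 / np.linalg.cond(J)       # 1.0 = perfectly conditioned, 0 = singular
        score = w_recog * recog + w_ctrl * ctrl
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```

With three point features the stacked Jacobian is 6x6, so the conditioning term directly reflects how well the chosen features constrain all six degrees of freedom.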
Building large mosaics of confocal endomicroscopic images using visual servoing.
Rosa, Benoît; Erden, Mustafa Suphi; Vercauteren, Tom; Herman, Benoît; Szewczyk, Jérôme; Morel, Guillaume
2013-04-01
Probe-based confocal laser endomicroscopy provides real-time microscopic images of tissues contacted by a small probe that can be inserted in vivo through a minimally invasive access. Mosaicking consists in sweeping the probe in contact with the tissue to be imaged while collecting the video stream, and processing the images to assemble them into a large mosaic. While most of the literature in this field has focused on image processing, little attention has been paid so far to the way the probe motion can be controlled. This is a crucial issue since the precision of the probe trajectory control drastically influences the quality of the final mosaic. Robotically controlled motion has the potential of providing enough precision to perform mosaicking. In this paper, we emphasize the difficulties of implementing such an approach. First, probe-tissue contacts generate deformations that prevent the image trajectory from being properly controlled. Second, in the context of the minimally invasive procedures targeted by our research, robotic devices are likely to exhibit limited quality of distal probe motion control at the microscopic scale. To cope with these problems, visual servoing from real-time endomicroscopic images is proposed in this paper. It is implemented on two different devices (a high-accuracy industrial robot and a prototype minimally invasive device). Experiments on different kinds of environments (printed paper and ex vivo tissues) show that the quality of the visually servoed probe motion is sufficient to build mosaics with minimal distortion in spite of disturbances.
Visual-servoing optical microscopy
Callahan, Daniel E.; Parvin, Bahram
2009-06-09
The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time: quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.
Visual-servoing optical microscopy
Callahan, Daniel E [Martinez, CA]; Parvin, Bahram [Mill Valley, CA]
2011-05-24
The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time; quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.
Visual-servoing optical microscopy
Callahan, Daniel E; Parvin, Bahram
2013-10-01
The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time; quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.
Optical Flow-Based State Estimation for Guided Projectiles
2015-06-01
A visual servo-based teleoperation robot system for closed diaphyseal fracture reduction.
Li, Changsheng; Wang, Tianmiao; Hu, Lei; Zhang, Lihai; Du, Hailong; Zhao, Lu; Wang, Lifeng; Tang, Peifu
2015-09-01
Common fracture treatments include open reduction and intramedullary nailing technology. However, these methods have disadvantages such as intraoperative X-ray radiation, delayed union or nonunion and postoperative rotation. Robots provide a novel solution to the aforementioned problems while posing new challenges. Against this scientific background, we develop a visual servo-based teleoperation robot system. In this article, we present a robot system, analyze the visual servo-based control system in detail and develop path planning for fracture reduction, inverse kinematics, and output forces of the reduction mechanism. A series of experimental tests is conducted on a bone model and an animal bone. The experimental results demonstrate the feasibility of the robot system. The robot system uses preoperative computed tomography data to realize high precision and perform minimally invasive teleoperation for fracture reduction via the visual servo-based control system while protecting surgeons from radiation. © IMechE 2015.
Visual servoing of a laser ablation based cochleostomy
NASA Astrophysics Data System (ADS)
Kahrs, Lüder A.; Raczkowsky, Jörg; Werner, Martin; Knapp, Felix B.; Mehrwald, Markus; Hering, Peter; Schipper, Jörg; Klenzner, Thomas; Wörn, Heinz
2008-03-01
The aim of this study is defined, visually based and camera-controlled bone removal by a navigated CO2 laser on the promontory of the inner ear. A precise and minimally traumatic opening procedure of the cochlea for the implantation of a cochlear implant electrode (a so-called cochleostomy) is intended. Harming the membrane linings of the inner ear can result in damage to remaining organ functions (e.g. complete deafness or vertigo). A precise tissue removal by a laser-based bone ablation system is investigated. Inside the borehole, the pulsed laser beam is guided automatically over the bone using a two-mirror galvanometric scanner. The ablation process is controlled by visual servoing. For the detection of the boundary layers of the inner ear, the ablation area is monitored by a color camera. The acquired pictures are analyzed by image processing, and the results of this analysis are used to control the process of laser ablation. This publication describes the complete system, including the image processing algorithms and the concept for the resulting distribution of single laser pulses. The system has been tested on human cochleae in ex vivo studies. Further developments could lead to safe intraoperative openings of the cochlea by a robot-based surgical laser instrument.
Probe Scanning Support System by a Parallel Mechanism for Robotic Echography
NASA Astrophysics Data System (ADS)
Aoki, Yusuke; Kaneko, Kenta; Oyamada, Masami; Takachi, Yuuki; Masuda, Kohji
We propose a probe scanning support system based on force/visual servoing control for robotic echography. First, we designed the mechanism and formulated its inverse kinematics. Next, we developed a method for scanning the ultrasound probe over the body surface, constructing a visual servo system based on the echogram acquired by the standalone medical robot, which moves the ultrasound probe over the patient's abdomen in three dimensions. The visual servo system detects local changes of brightness in the time-series echogram, while the position of the probe is stabilized by the robot's conventional force servo system, in order to compensate not only for periodic respiration motion but also for body motion. We then integrated the visual servo with the force servo as a hybrid control of both position and force. To confirm its applicability to an actual abdomen, we tested the total system by following the gallbladder as a moving target, keeping its position in the echogram while minimizing the variation of the reaction force on the abdomen. The results show that the system has the potential to be applied to automatic detection of human internal organs.
NASA Astrophysics Data System (ADS)
Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.
2017-08-01
This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulate above with constant angular velocity. The proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
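Because the LFV's reference motion repeats in the HFV's image frame, a repetitive (period-memory) term can augment a simple image-space feedback law. The sketch below is a generic sampled-data plug-in repetitive controller under the assumption of a known, fixed period; it deliberately omits the adaptive estimation of the camera's intrinsic and extrinsic parameters that is central to the paper, and the gains and error convention (desired minus measured, positive-gain plant) are illustrative.

```python
import numpy as np

class RepetitiveImageServo:
    """Proportional image-space regulator plus a one-period memory that learns the
    repeating component of the tracking error (plug-in repetitive control sketch)."""

    def __init__(self, period_samples, kp=0.8, kr=0.3, q=0.95):
        self.N = period_samples
        self.kp, self.kr, self.q = kp, kr, q
        self.u_mem = np.zeros((period_samples, 2))   # learned periodic correction
        self.e_mem = np.zeros((period_samples, 2))   # errors from one period ago
        self.k = 0

    def update(self, error_px):
        """error_px: 2-vector image-plane error (desired - measured, pixels);
        returns a velocity command for the LFV expressed in the image frame."""
        i = self.k % self.N
        # Refresh the periodic correction from last period's error at the same phase;
        # q < 1 slowly forgets old corrections so measurement noise does not accumulate.
        self.u_mem[i] = self.q * self.u_mem[i] + self.kr * self.e_mem[i]
        self.e_mem[i] = np.asarray(error_px, float)
        self.k += 1
        return self.kp * np.asarray(error_px, float) + self.u_mem[i]
```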
Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller.
Lopez-Franco, Carlos; Gomez-Avila, Javier; Alanis, Alma Y; Arana-Daniel, Nancy; Villaseñor, Carlos
2017-08-12
In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms and well-defined interfaces to allow the real-time implementation, as well as the design of different processing stages with their respective communication architecture. All of these issues and others make real-time implementation a difficult task. To show the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results.
Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller
Lopez-Franco, Carlos; Alanis, Alma Y.; Arana-Daniel, Nancy; Villaseñor, Carlos
2017-01-01
In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms and well-defined interfaces to allow the real-time implementation, as well as the design of different processing stages with their respective communication architecture. All of these issues and others make real-time implementation a difficult task. To show the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results. PMID:28805689
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist in using the data provided by a vision sensor in order to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots or aerial robots, but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning tasks, or mobile target tracking, can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected as well as possible from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, ...) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties as for stability, robustness with respect to noise or to calibration errors, robot 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field within the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
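For reference, the canonical image-based law underlying much of the work surveyed in such tutorials drives the feature error e = s - s* to zero with the camera velocity v = -lambda * pinv(L) * e, where L is the interaction matrix. A minimal point-feature version in the standard formulation is sketched below; the gain and the depth estimates are placeholders.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z
    (the classical point-feature result used in image-based visual servoing)."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classical IBVS law v = -lam * pinv(L) @ (s - s*): returns the 6-DOF camera
    velocity (vx, vy, vz, wx, wy, wz) from current/desired normalized features."""
    e = (np.asarray(points, float) - np.asarray(desired, float)).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    return -lam * np.linalg.pinv(L) @ e

# Example: four point features, current vs. desired normalized coordinates.
s  = [(0.11, 0.10), (-0.09, 0.10), (-0.10, -0.11), (0.10, -0.10)]
sd = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v_cam = ibvs_velocity(s, sd, depths=[1.0] * 4)
```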
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.
1997-01-01
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage, having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.
1997-09-23
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.
NASA Astrophysics Data System (ADS)
Hassanzadeh, Iraj; Janabi-Sharifi, Farrokh
2005-12-01
In this paper, a new open architecture for visual servo control tasks is illustrated. A Puma 560 robotic manipulator is used to prove the concept. This design enables hybrid force/visual servo control in an unstructured environment in different modes. It can also be controlled over the Internet in teleoperation mode using a haptic device. Our proposed structure includes two major parts, hardware and software. In terms of hardware, it consists of a master (host) computer, a slave (target) computer, a Puma 560 manipulator, a CCD camera, a force sensor and a haptic device. There are five DAQ cards interfacing the Puma 560 and the slave computer. An open architecture package is developed using Matlab (R), Simulink (R) and the xPC Target toolbox. This package has the Hardware-In-the-Loop (HIL) property, i.e., it enables one to readily implement different configurations of force, visual or hybrid control in real time. The implementation includes the following stages. First of all, retrofitting of the Puma was carried out. Then a modular joint controller for the Puma 560 was realized using Simulink (R). A force sensor driver and force control implementation were written using S-function blocks of Simulink (R). Visual images were captured through the Image Acquisition Toolbox of Matlab (R) and processed using the Image Processing Toolbox. A haptic device interface was also written in Simulink (R). Thus, this setup can be readily reconfigured to accommodate any other robotic manipulator and/or other sensors without the trouble of the external issues relevant to the control, interface and software, while providing flexibility in component modification.
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interaction. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using a finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
Klinger, Daniel R; Reinard, Kevin A; Ajayi, Olaide O; Delashaw, Johnny B
2018-01-01
The binocular operating microscope has been the visualization instrument of choice for microsurgical clipping of intracranial aneurysms for many decades. We discuss recent technological advances that have provided novel visualization tools, which may prove to be superior to the binocular operating microscope in many regards. We present an operative video and our operative experience with the BrightMatter™ Servo System (Synaptive Medical, Toronto, Ontario, Canada) during the microsurgical clipping of an anterior communicating artery aneurysm. To the best of our knowledge, the use of this device for the microsurgical clipping of an intracranial aneurysm has never been described in the literature. The BrightMatter™ Servo System (Synaptive Medical) is a maneuverable surgical exoscope that is positioned with a directional aiming device and a surgeon-controlled foot pedal, avoiding many of the ergonomic constraints of the binocular operating microscope. While utilizing this device comes with a steep learning curve typical of any new technology, the BrightMatter™ Servo System (Synaptive Medical) has several advantages over the conventional surgical microscope, which include a relatively unobstructed surgical field, provision of high-definition images, and visualization of difficult angles/trajectories. This device can easily be utilized as a visualization tool for a variety of cranial and spinal procedures in lieu of the binocular operating microscope. We anticipate that this technology will soon become an integral part of the neurosurgeon's armamentarium. Copyright © 2017 by the Congress of Neurological Surgeons
Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.
Chen, Jian; Jia, Bingxi; Zhang, Kaixiang
2017-11-01
In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. Trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of trifocal tensor. In the previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can be easily violated for perspective cameras with limited field of view. In this paper, key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (installing position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations are made based on the virtual experimentation platform (V-REP) to evaluate the effectiveness of the proposed approach.
Reliable vision-guided grasping
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other eye-in-hand visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
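The information-reduction step (region-of-interest windows combined with feature motion prediction) can be sketched generically as below; the constant-velocity predictor, window size and smoothing factor are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class WindowedFeatureTracker:
    """Predicts the next image location of a feature with a constant-velocity model
    and returns a small region of interest to search, so only a fraction of each
    image needs to be processed; the detector run inside the window is task specific."""

    def __init__(self, init_xy, window=32):
        self.p = np.asarray(init_xy, float)    # last measured position (pixels)
        self.v = np.zeros(2)                   # estimated image velocity (pixels/frame)
        self.window = window

    def predict_roi(self):
        cx, cy = self.p + self.v               # predicted position in the next frame
        half = self.window // 2
        return int(cx) - half, int(cy) - half, self.window, self.window   # x, y, w, h

    def update(self, measured_xy, alpha=0.5):
        measured = np.asarray(measured_xy, float)
        self.v = (1.0 - alpha) * self.v + alpha * (measured - self.p)     # smooth velocity
        self.p = measured
```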
Visual control of robots using range images.
Pomares, Jorge; Gil, Pablo; Torres, Fernando
2010-01-01
In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a way of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time to be used by the range camera in order to precisely determine the depth information.
Enhancement of tracking performance in electro-optical system based on servo control algorithm
NASA Astrophysics Data System (ADS)
Choi, WooJin; Kim, SungSu; Jung, DaeYoon; Seo, HyoungKyu
2017-10-01
Modern electro-optical surveillance and reconnaissance systems require tracking capability to obtain exact images of a target or to accurately direct the line of sight to a target that is moving or still. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function to minimize overshoot in the tracking motion and avoid missing the target. The scheme is to limit the acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques by creating a system model of a DIRCM, simulating the same environment, and validating the performance on the actual equipment.
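A minimal sketch of the command-shaping idea is given below: the raw line-of-sight rate command is clamped to a velocity limit and its per-step change to an acceleration limit. How the limits are scheduled from the target's state in the image (e.g. tightened as the target nears the image centre) is application specific, and the numbers here are illustrative.

```python
import numpy as np

def shape_rate_command(raw_rate, prev_rate, dt, v_max, a_max):
    """Clamp a gimbal rate command to |rate| <= v_max and |d(rate)/dt| <= a_max,
    which suppresses overshoot when closing on the target."""
    rate = np.clip(raw_rate, -v_max, v_max)                       # velocity limit
    step = a_max * dt
    return np.clip(rate, prev_rate - step, prev_rate + step)      # acceleration limit

# Example: 100 Hz loop, limits chosen (hypothetically) from the target's pixel error.
cmd = shape_rate_command(raw_rate=0.8, prev_rate=0.2, dt=0.01, v_max=0.5, a_max=2.0)
# -> 0.22 rad/s: the velocity limit caps the request at 0.5, the accel limit at 0.2 + 0.02
```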
NASA Astrophysics Data System (ADS)
Chuthai, T.; Cole, M. O. T.; Wongratanaphisan, T.; Puangmali, P.
2018-01-01
This paper describes a high-precision motion control implementation for a flexure-jointed micromanipulator. A desktop experimental motion platform has been created based on a 3RUU parallel kinematic mechanism, driven by rotary voice coil actuators. The three arms supporting the platform have rigid links with compact flexure joints as integrated parts and are made by single-process 3D printing. The mechanism overall size is approximately 250x250x100 mm. The workspace is relatively large for a flexure-jointed mechanism, being approximately 20x20x6 mm. A servo-control implementation based on pseudo-rigid-body models (PRBM) of kinematic behavior combined with nonlinear-PID control has been developed. This is shown to achieve fast response with good noise-rejection and platform stability. However, large errors in absolute positioning occur due to deficiencies in the PRBM kinematics, which cannot accurately capture flexure compliance behavior. To overcome this problem, visual servoing is employed, where a digital microscopy system is used to directly measure the platform position by image processing. By adopting nonlinear PID feedback of measured angles for the actuated joints as inner control loops, combined with auxiliary feedback of vision-based measurements, the absolute positioning error can be eliminated. With controller gain tuning, fast dynamic response and low residual vibration of the end platform can be achieved with absolute positioning accuracy within ±1 micron.
Aviation Wide-Angle Visual System (AWAVS). Trainer Design Report. Subsystem Design Report
1977-05-01
[Excerpt from the report's lists of figures and tables: frequency-gain plot for the FLOLS Meatball servo; FLOLS Zoom servo and Zoom Iris servo block diagrams; FLOLS servo input torques, servo components, Meatball servo performance, and inherent zeros and poles for the FLOLS Meatball servo. A text fragment notes that the ratio of relative powers must equal the ratio of 500 ft to the simulated range, and that the FLOLS is on whenever the pilot is within the meatball field.]
Nakano, Shintaro; Kasai, Takatoshi; Tanno, Jun; Sugi, Keiki; Sekine, Yasumasa; Muramatsu, Toshihiro; Senbonmatsu, Takaaki; Nishimura, Shigeyuki
2015-08-01
Adaptive servo-ventilation has a potential sympathoinhibitory effect in acute cardiogenic pulmonary oedema (ACPO). To evaluate the acute effects of adaptive servo-ventilation in patients with ACPO. Fifty-eight consecutive patients with ACPO were divided into those who underwent adaptive servo-ventilation and those who received oxygen therapy alone as part of their immediate care. Visual analogue scale, vital signs, blood gas data and plasma catecholamine concentrations at baseline and 1 h during emergency care, and subsequent clinical events (death within 30 days, intubation within seven days or between seven and 30 days, and length of hospital stay) were assessed. Pre-matched and post-propensity score (PS)-matched datasets were analysed. During the first hour of adaptive servo-ventilation, plasma catecholamine concentrations fell significantly (baseline versus 1 h: epinephrine p = 0.003, norepinephrine p < 0.001, dopamine p < 0.001), with falls in blood pressure, heart rate, respiratory rate and pCO2, and rise in HCO3 and pH. In the PS-matched model, visual analogue scale (p = 0.036), systolic blood pressure (from 153.8 ± 30.7 to 133.1 ± 16.3 mmHg; p = 0.025) and plasma dopamine concentration (p = 0.034) fell significantly in the adaptive servo-ventilation group compared with the oxygen therapy alone group. The clinical outcomes between the groups were comparable. In patients with ACPO, emergency care using adaptive servo-ventilation attenuated plasma catecholamine concentrations and led to the improvement of dyspnoea, vital signs and acid-base balance, without adversely influencing clinical outcomes. Using adaptive servo-ventilation, rather than standard oxygen alone, may relieve dyspnoea and improve haemodynamic status, possibly by modulating sympathetic nerve activity. © The European Society of Cardiology 2014.
Chanel, Laure-Anais; Nageotte, Florent; Vappou, Jonathan; Luo, Jianwen; Cuvillon, Loic; de Mathelin, Michel
2015-01-01
High Intensity Focused Ultrasound (HIFU) therapy is a very promising method for the ablation of solid tumors. However, intra-abdominal organ motion, principally due to breathing, is a substantial limitation that results in incorrect tumor targeting. The objective of this work is to develop an all-in-one robotized HIFU system that can compensate for motion in real time during HIFU treatment. To this end, an ultrasound visual servoing scheme working at 20 Hz was designed. It relies on motion estimation using a fast ultrasonic speckle-tracking algorithm and on the use of an interleaved imaging/HIFU sonication sequence to avoid ultrasonic wave interference. The robotized HIFU system was tested on a sample of chicken breast undergoing a vertical sinusoidal motion at 0.25 Hz. Sonications with and without motion compensation were performed in order to assess the effect of motion compensation on thermal lesions induced by HIFU. Motion was reduced by more than 80% thanks to this ultrasonic visual servoing system.
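A much-simplified stand-in for the speckle-tracking step is sketched below: it estimates the axial shift between two echo lines by normalized cross-correlation over a bounded lag range. The real system uses a fast 2-D speckle-tracking algorithm running at 20 Hz inside the servo loop; the single-line assumption and the lag bound here are illustrative.

```python
import numpy as np

def axial_shift(ref_line, cur_line, max_lag=40):
    """Return the integer lag (in samples) maximizing the normalized cross-correlation,
    with the convention cur_line[i] ~ ref_line[i + lag]."""
    ref = (ref_line - ref_line.mean()) / (ref_line.std() + 1e-12)
    cur = (cur_line - cur_line.mean()) / (cur_line.std() + 1e-12)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = ref[lag:], cur[:len(cur) - lag]
        else:
            a, b = ref[:lag], cur[-lag:]
        score = float(np.dot(a, b)) / len(a)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Scaled by the axial sample spacing, such an estimated shift would be fed back as the position error driving the robot's motion-compensation loop.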
Vision-guided gripping of a cylinder
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1991-01-01
The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.
NASA Astrophysics Data System (ADS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-05-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
Design of intelligent vehicle control system based on single chip microcomputer
NASA Astrophysics Data System (ADS)
Zhang, Congwei
2018-06-01
The smart car microprocessor uses the KL25ZV128VLK4 from the Freescale family of single-chip microcomputers. The image sampling sensor is the OV7725 CMOS digital camera. The acquired track data are processed by the corresponding algorithm to obtain track sideline information. Pulse-width modulation (PWM) is used to control the motor and the steering servo, and motor speed control and servo steering control are realized based on a digital incremental PID algorithm. In the project design, the IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, the camera image processing module, the hardware power distribution module, and the motor drive and servo control modules, completing the design of the intelligent car control system.
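The digital incremental (velocity-form) PID referred to above updates the actuator command by an increment each sample, which is convenient for motor-speed and steering loops. A minimal sketch, with illustrative gains, is:

```python
class IncrementalPID:
    """Velocity-form PID: u[k] = u[k-1] + Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2e[k-1]+e[k-2])."""

    def __init__(self, kp, ki, kd, u0=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # e[k-1]
        self.e2 = 0.0   # e[k-2]
        self.u = u0

    def update(self, e):
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.u += du
        self.e2, self.e1 = self.e1, e
        return self.u

# Hypothetical use: one controller for wheel speed (from encoder error) and one for
# steering (from the lateral offset of the detected track sideline).
speed_pid = IncrementalPID(kp=2.0, ki=0.5, kd=0.1)
steer_pid = IncrementalPID(kp=1.2, ki=0.0, kd=0.3)
```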
Novel ultrasonic real-time scanner featuring servo controlled transducers displaying a sector image.
Matzuk, T; Skolnick, M L
1978-07-01
This paper describes a new real-time servo-controlled sector scanner that produces high-resolution images and has functionally programmable features similar to phased-array systems, but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. The unique feature is the transducer head, which contains a single moving part, the transducer, enclosed within a lightweight, hand-held, and vibration-free case. The frame rate, sector width, and stop-action angle are all operator-programmable. The frame rate can be varied from 12 to 30 frames/s and the sector width from 0 degrees to 60 degrees. Conversion from sector to time-motion (T/M) mode is instant, and two options are available: a freeze-position high-density T/M and a low-density T/M obtainable simultaneously during sector visualization. Unusual electronic features are automatic gain control, electronic recording of images on video tape in rf format, and the ability to post-process images during video playback to extract the T/M display and to change time gain control (tgc) and image size.
Inspection of Pole-Like Structures Using a Visual-Inertial Aided VTOL Platform with Shared Autonomy
Sa, Inkyu; Hrabar, Stefan; Corke, Peter
2015-01-01
This paper presents an algorithm and a system for vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task that is time-consuming, dangerous and expensive. Recently, micro VTOL platforms (i.e., quad-, hexa- and octa-rotors) have been rapidly gaining interest in research, military and even public domains. The unmanned, low-cost and VTOL properties of these platforms make them ideal for situations where inspection would otherwise be time-consuming and/or hazardous to humans. There are, however, challenges involved with developing such an inspection system, for example flying in close proximity to a target while maintaining a fixed stand-off distance from it, being immune to wind gusts and exchanging useful information with the remote user. To overcome these challenges, we require accurate and high-update rate state estimation and high performance controllers to be implemented onboard the vehicle. Ease of control and a live video feed are required for the human operator. We demonstrate a VTOL platform that can operate at close quarters, whilst maintaining a safe stand-off distance and rejecting environmental disturbances. Two approaches are presented: Position-Based Visual Servoing (PBVS) using an Extended Kalman Filter (EKF) and estimator-free Image-Based Visual Servoing (IBVS). Both use monocular visual, inertial, and sonar data, allowing the approaches to be applied for indoor or GPS-impaired environments. We extensively compare the performances of PBVS and IBVS in terms of accuracy, robustness and computational costs. Results from simulations and indoor/outdoor (day and night) flight experiments demonstrate the system is able to successfully inspect and circumnavigate a vertical pole. PMID:26340631
Implementation and Validation of Bioplausible Visual Servoing Control
2013-03-01
[Report excerpts: the controllers achieve pose stabilization in the context of one-dimensional (1-D) attitude stabilization, with results benchmarked against an ideal … (text truncated). Figure captions describe scenes representing low- and high-contrast environments used in testing the TurtleBot on the two algorithms; the graph on the left corresponds to the high-contrast simulation environment and the image on the right to the low-contrast one.]
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
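As a minimal illustration of Kalman-based multi-camera fusion (far simpler than the paper's full pose-estimation filter), a linear constant-velocity filter can fuse position measurements from several cameras by applying one measurement update per camera each cycle; the state model and noise levels below are illustrative assumptions.

```python
import numpy as np

class FusionKF:
    """Linear Kalman filter with a constant-velocity model; each camera contributes a
    position measurement with its own variance, fused by sequential updates."""

    def __init__(self, dt, q=1e-3, dim=3):
        self.dim = dim
        self.x = np.zeros(2 * dim)                     # [position, velocity]
        self.P = np.eye(2 * dim)
        self.F = np.eye(2 * dim)
        self.F[:dim, dim:] = dt * np.eye(dim)          # constant-velocity transition
        self.Q = q * np.eye(2 * dim)
        self.H = np.hstack([np.eye(dim), np.zeros((dim, dim))])   # cameras measure position

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, r):
        """Fuse one camera's position measurement z (length dim) with variance r."""
        R = r * np.eye(self.dim)
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(2 * self.dim) - K @ self.H) @ self.P

# Per cycle: kf.predict(); kf.update(z_cam1, r=1e-4); kf.update(z_cam2, r=4e-4)
# A camera flagged as occluded or defective is simply skipped (or given a much
# larger r), which is one way such fusion schemes gain robustness to sensor failures.
kf = FusionKF(dt=0.02)
```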
Automated, on-board terrain analysis for precision landings
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Advances in space robotics technology hinge to a large extent upon the development and deployment of sophisticated new vision-based methods for automated in-space mission operations and scientific survey. To this end, we have developed a new concept for automated terrain analysis that is based upon a generic image enhancement platform: multi-scale retinex (MSR) and visual servo (VS) processing. This pre-conditioning with the MSR and the VS produces a "canonical" visual representation that is largely independent of lighting variations and exposure errors. Enhanced imagery is then processed with a biologically inspired two-channel edge detection process, followed by a smoothness-based criterion for image segmentation. Landing sites can be automatically determined by examining the results of the smoothness-based segmentation, which shows those areas in the image that surpass a minimum degree of smoothness. Though the MSR has proven to be a very strong enhancement engine, the other elements of the approach (the VS, terrain map generation, and smoothness-based segmentation) are in early stages of development. Experimental results on data from the Mars Global Surveyor show that the imagery can be processed to automatically obtain smooth landing sites. In this paper, we describe the method used to obtain these landing sites, and also examine the smoothness criteria in terms of the imager and scene characteristics. Several examples of applying this method to simulated and real imagery are shown.
Active Guidance of a Handheld Micromanipulator using Visual Servoing.
Becker, Brian C; Voros, Sandrine; Maclachlan, Robert A; Hager, Gregory D; Riviere, Cameron N
2009-05-12
In microsurgery, a surgeon often deals with anatomical structures of sizes that are close to the limit of the human hand accuracy. Robotic assistants can help to push beyond the current state of practice by integrating imaging and robot-assisted tools. This paper demonstrates control of a handheld tremor reduction micromanipulator with visual servo techniques, aiding the operator by providing three behaviors: snap-to, motion-scaling, and standoff-regulation. A stereo camera setup viewing the workspace under high magnification tracks the tip of the micromanipulator and the desired target object being manipulated. Individual behaviors activate in task-specific situations when the micromanipulator tip is in the vicinity of the target. We show that the snap-to behavior can reach and maintain a position at a target with an accuracy of 17.5 ± 0.4μm Root Mean Squared Error (RMSE) distance between the tip and target. Scaling the operator's motions and preventing unwanted contact with non-target objects also provides a larger margin of safety.
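A highly simplified sketch of how the three behaviors could be arbitrated around a tracked target is given below; the distance threshold, scaling factor, standoff distance, gains and the blending rule are illustrative assumptions, not the actual control law of the handheld instrument.

```python
import numpy as np

def assistive_tip_command(tip_pos, target_pos, operator_motion,
                          snap_radius=500e-6, motion_scale=0.3,
                          standoff=100e-6, servo_gain=5.0):
    """Blend the operator's sensed hand motion with assistive tip behaviors.

    Far from the target the hand motion passes through unchanged; inside the
    snap radius the motion is scaled down and a servo term pulls the tip toward
    a point held at a standoff distance from the target (all units in metres)."""
    to_target = np.asarray(target_pos, float) - np.asarray(tip_pos, float)
    dist = np.linalg.norm(to_target)
    if dist >= snap_radius:
        return np.asarray(operator_motion, float)        # far away: pass motion through
    direction = to_target / max(dist, 1e-12)
    goal_offset = to_target - standoff * direction       # stop 'standoff' short of the target
    snap_velocity = servo_gain * goal_offset             # snap-to; pushes back out if too close
    return motion_scale * np.asarray(operator_motion, float) + snap_velocity
```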
SAVA 3: A testbed for integration and control of visual processes
NASA Technical Reports Server (NTRS)
Crowley, James L.; Christensen, Henrik
1994-01-01
The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) be continuously operating, (2) integrate software contributions from geographically dispersed laboratories, (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects, (4) be capable of supporting diverse experiments in gaze control, visual servoing, navigation, and object surveillance, and (5) be dynamically reconfigurable.
An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.
Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin
2015-08-01
This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult and manual injections usually result in poor repeatability. To improve the injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in vein detection noise rejection, robustness in needle tracking, and visual servoing integration with the mechatronics system.
Opfermann, Justin D.; Leonard, Simon; Decker, Ryan S.; Uebele, Nicholas A.; Bayne, Christopher E.; Joshi, Arjun S.; Krieger, Axel
2017-01-01
This paper specifies a surgical robot performing semi-autonomous electrosurgery for tumor resection and evaluates its accuracy using a visual servoing paradigm. We describe the design and integration of a novel, multi-degree of freedom electrosurgical tool for the smart tissue autonomous robot (STAR). Standardized line tests are executed to determine ideal cut parameters in three different types of porcine tissue. STAR is then programmed with the ideal cut setting for porcine tissue and compared against expert surgeons using open and laparoscopic techniques in a line cutting task. We conclude with a proof of concept demonstration using STAR to semi-autonomously resect pseudo-tumors in porcine tissue using visual servoing. When tasked to excise tumors with a consistent 4mm margin, STAR can semi-autonomously dissect tissue with an average margin of 3.67 mm and a standard deviation of 0.89mm. PMID:29503760
Visual Servoing for Optimization of Anticancer Drug Uptake in Human Breast Cancer Cells
2000-09-01
successfully obtained new DOE Medical Applications Program funding for this research (included in Appendix G: Automated Imaging System for Guiding Antisense ...Guiding Antisense Compounds to Specific mRNA targets in Living Cells) that will support this integration and development work with Dr. Parvin and Deep...a DNA and RNA binding fluorescence probe with very different emission wavelengths, depending on whether it is bound to DNA or RNA). Cells were then
A new neural net approach to robot 3D perception and visuo-motor coordination
NASA Technical Reports Server (NTRS)
Lee, Sukhan
1992-01-01
A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.
Bio-inspired optical rotation sensor
NASA Astrophysics Data System (ADS)
O'Carroll, David C.; Shoemaker, Patrick A.; Brinkworth, Russell S. A.
2007-01-01
Traditional approaches to calculating self-motion from visual information in artificial devices have generally relied on object identification and/or correlation of image sections between successive frames. Such calculations are computationally expensive, and real-time digital implementation requires powerful processors. In contrast, flies arrive at essentially the same outcome, the estimation of self-motion, in a much smaller package using vastly less power. Despite the potential advantages and a few notable successes, few neuromorphic analog VLSI devices based on biological vision have been employed in practical applications to date. This paper describes a hardware implementation in analog VLSI (aVLSI) of our recently developed adaptive model for motion detection. The chip integrates motion over a linear array of local motion processors to give a single voltage output. Although the device lacks on-chip photodetectors, it includes bias circuits to use currents from external photodiodes, and we have integrated it with a ring-array of 40 photodiodes to form a visual rotation sensor. The ring configuration reduces pattern noise and, combined with the pixel-wise adaptive characteristic of the underlying circuitry, permits a robust output that is proportional to image rotational velocity over a large range of speeds and is largely independent of either mean luminance or the spatial structure of the image viewed. In principle, such devices could be used as an element of a velocity-based servo to replace or augment inertial guidance systems in applications such as mUAVs.
A Visual Servoing-Based Method for ProCam Systems Calibration
Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie
2013-01-01
Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121
Parallel computation of level set method for 500 Hz visual servo control
NASA Astrophysics Data System (ADS)
Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi
2008-11-01
We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). The system keeps a single microorganism in the middle of the visual field under a microscope by visually servoing an automated stage. We propose a new energy function for the level set method, which constrains the amount of light intensity inside the detected object contour in order to control the number of detected objects. The algorithm is implemented on the CPV system, and the computational time for each frame is approximately 2 ms. A tracking experiment of about 25 s is demonstrated. We also show that a single paramecium can be kept in track even if other paramecia appear in the visual field and contact the tracked paramecium.
NASA Technical Reports Server (NTRS)
Key, David L.; Heffley, Robert K.
2002-01-01
The purpose of the study was to develop generic design principles for obtaining attitude command response in moderate to aggressive maneuvers without increasing SCAS series servo authority from the existing +/- 10%. In particular, to develop a scheme that would work on the UH-60 helicopter so that it can be considered for incorporation in future upgrades. The basic math model was a UH-60A version of GENHEL. The simulation facility was the NASA-Ames Vertical Motion Simulator (VMS). Evaluation tasks were Hover, Acceleration-Deceleration, and Sidestep, as defined in ADS-33D-PRF for Degraded Visual Environment (DVE). The DVE was adjusted to provide a Usable Cue Environment (UCE) equal to two. The basic concept investigated was the extent to which the limited attitude command authority achievable by the series servo could be supplemented by a 10%/sec trim servo. The architecture used provided angular rate feedback to only the series servo, shared the attitude feedback between the series and trim servos, and when the series servo approached saturation the attitude feedback was slowly phased out. Results show that modest use of the trim servo does improve pilot ratings, especially in and around hover. This improvement can be achieved with little degradation in response predictability during moderately aggressive maneuvers.
Efficient visual grasping alignment for cylinders
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
Combined Feature Based and Shape Based Visual Tracker for Robot Navigation
NASA Technical Reports Server (NTRS)
Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.
2005-01-01
We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
Feature Visibility Limits in the Non-Linear Enhancement of Turbid Images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.
2003-01-01
The advancement of non-linear processing methods for generic automatic clarification of turbid imagery has led us from extensions of entirely passive multiscale Retinex processing to a new framework of active measurement and control of the enhancement process called the Visual Servo. In the process of testing this new non-linear computational scheme, we have identified that feature visibility limits in the post-enhancement image now simplify to a single signal-to-noise figure of merit: a feature is visible if the feature-background signal difference is greater than the RMS noise level. In other words, a signal-to-noise limit of approximately unity constitutes a lower limit on feature visibility.
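The stated visibility limit reduces to a one-line check. The sketch below is a hedged illustration: the unity threshold and the signal-difference criterion follow the abstract, while the local-statistics estimates and sample values are assumptions.

```python
import numpy as np

def feature_visible(feature_pixels, background_pixels, noise_rms):
    """Feature is visible if |mean(feature) - mean(background)| exceeds the RMS noise."""
    signal_difference = abs(np.mean(feature_pixels) - np.mean(background_pixels))
    return signal_difference > noise_rms

# Toy example with an assumed noise level of 2.0 grey levels.
print(feature_visible([120, 122, 119], [115, 116, 114], noise_rms=2.0))
```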
Ding, Huiyang; Shi, Chaoyang; Ma, Li; Yang, Zhan; Wang, Mingyu; Wang, Yaqiong; Chen, Tao; Sun, Lining; Toshio, Fukuda
2018-01-01
The maneuvering and electrical characterization of nanotubes inside a scanning electron microscope (SEM) has historically been time-consuming and laborious for operators. Before the development of automated nanomanipulation techniques, pick-and-place and characterization of nanoobjects were incomplete and largely performed manually. In this paper, a dual-probe nanomanipulation system with vision-based feedback was demonstrated to automatically perform 3D nanomanipulation tasks and to investigate the electrical characterization of nanotubes. The XY-positions of Atomic Force Microscope (AFM) cantilevers and individual carbon nanotubes (CNTs) were precisely recognized via a series of image processing operations. A coarse-to-fine positioning strategy in the Z-direction was applied through the combination of the sharpness-based depth estimation method and the contact-detection method. The use of nanorobotic magnification-regulated speed aided in improving working efficiency and reliability. Additionally, we proposed automated alignment of the manipulator axes by visually tracking the movement trajectory of the end effector. The experimental results indicate the system's capability for automated measurement of the electrical characteristics of CNTs. Furthermore, the automated nanomanipulation system has the potential to be extended to other nanomanipulation tasks. PMID:29642495
Comparative evaluation of monocular augmented-reality display for surgical microscopes.
Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N
2012-01-01
Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.
Sutton, G. G.; Sykes, K.
1967-01-01
1. When a subject attempts to exert a steady pressure on a joystick he makes small unavoidable errors which, irrespective of their origin or frequency, may be called tremor. 2. Frequency analysis shows that low frequencies always contribute much more to the total error than high frequencies. If the subject is not allowed to check his performance visually, but has to rely on sensations of pressure in the finger tips, etc., the error power spectrum plotted on logarithmic co-ordinates approximates to a straight line falling at 6 dB/octave from 0·4 to 9 c/s. In other words the amplitude of the tremor component at each frequency is inversely proportional to frequency. 3. When the subject is given a visual indication of his errors on an oscilloscope the shape of the tremor spectrum alters. The most striking change is the appearance of a tremor peak at about 9 c/s, but there is also a significant increase of error in the range 1-4 c/s. The extent of these changes varies from subject to subject. 4. If the 9 c/s peak represents oscillation of a muscle length-servo it would appear that greater use is made of this servo when positional information is available from the eyes than when proprioceptive impulses from the limbs have to be relied on. PMID:6048997
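A hedged sketch of the kind of frequency analysis described: estimating the error power spectrum of a tremor-like signal with Welch's method and inspecting its slope. The synthetic 1/f-type signal, the sampling rate, and the filter pole are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                     # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)

# Synthesise a roughly 1/f-amplitude ("6 dB/octave" power slope) tremor-like
# error signal by low-pass filtering white noise with a leaky integrator.
white = rng.standard_normal(t.size)
tremor = np.zeros_like(white)
for i in range(1, white.size):
    tremor[i] = 0.98 * tremor[i - 1] + white[i]

f, pxx = welch(tremor, fs=fs, nperseg=1024)
band = (f >= 0.4) & (f <= 9.0)
slope = np.polyfit(np.log10(f[band]), 10 * np.log10(pxx[band]), 1)[0]
print(f"spectral slope in 0.4-9 Hz band: {slope:.1f} dB/decade")  # about -20 dB/decade, i.e. -6 dB/octave
```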
NASA Astrophysics Data System (ADS)
Na, M.; Lee, S.; Kim, G.; Kim, H. S.; Rho, J.; Ok, J. G.
2017-12-01
Detecting and mapping the spatial distribution of radioactive materials is of great importance for environmental and security issues. We design and present a novel hemispherical rotational modulation collimator (H-RMC) system which can visualize the location of the radiation source by collecting signals from incident rays that go through collimator masks. The H-RMC system comprises a servo motor-controlled rotating module and a hollow heavy-metallic hemisphere with slits/slats equally spaced with the same angle subtended from the main axis. In addition, we also designed an auxiliary instrument to test the imaging performance of the H-RMC system, comprising a high-precision x- and y-axis staging station on which one can mount radiation sources of various shapes. We fabricated the H-RMC system which can be operated in a fully-automated fashion through the computer-based controller, and verify the accuracy and reproducibility of the system by measuring the rotational and linear positions with respect to the programmed values. Our H-RMC system may provide a pivotal tool for spatial radiation imaging with high reliability and accuracy.
A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots
Pan, Shaowu; Shi, Liwei; Guo, Shuxiang
2015-01-01
A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
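A minimal sketch of the second-order (constant-acceleration) motion model used to predict the target state and propose candidate patches, reduced to one coordinate for brevity; the noise covariances, frame interval, and toy measurements are assumptions, and the real tracker works on 2D image coordinates.

```python
import numpy as np

dt = 1 / 30.0                                  # assumed frame interval
F = np.array([[1, dt, 0.5 * dt ** 2],          # position, velocity, acceleration
              [0, 1,  dt],
              [0, 0,  1.0]])
H = np.array([[1.0, 0.0, 0.0]])                # only position is measured
Q = 1e-2 * np.eye(3)                           # assumed process noise
R = np.array([[4.0]])                          # assumed measurement noise (pixels^2)

x = np.zeros((3, 1))                           # initial state
P = np.eye(3)

def kf_step(x, P, z):
    # Predict with the constant-acceleration model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured target position (e.g., from the CT tracker).
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P

for z in [10.0, 11.2, 12.9, 15.1]:             # toy position measurements
    x, P = kf_step(x, P, np.array([[z]]))
print(x.ravel())                               # estimated position, velocity, acceleration
```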
Photoelectric radar servo control system based on ARM+FPGA
NASA Astrophysics Data System (ADS)
Wu, Kaixuan; Zhang, Yue; Li, Yeqiu; Dai, Qin; Yao, Jun
2016-01-01
To meet the requirements for a smaller, faster, and more responsive photoelectric radar servo control system, we propose a servo controller built around an ARM + FPGA architecture. The parallel processing capability of the FPGA is used for encoder feedback processing, PWM carrier modulation, A/B quadrature decoding, and related tasks, while the ARM embedded system provides a high-speed implementation of the PID algorithm. In experiments, the closed-loop response rate of the system reaches 2000 cycles per second, and on a high-precision turntable shaft the PID algorithm achieves servo position control with an accuracy of ±1 encoder count. First, the hardware of the embedded servo control system is studied in depth and the ARM and FPGA chips are selected as the main devices according to the required performance: the ARM chip used is the Samsung S3C2440 of ARM7 architecture, and the FPGA chip is Xilinx's XC3S400. The ARM and FPGA communicate over an SPI bus, which saves many pins and eases later system upgrades. The system obtains speed data from the photoelectric encoder through the FPGA, which transmits the data to the ARM; the ARM converts the speed data into the corresponding position and velocity values in a timely manner and generates the PWM waveform that controls motor rotation by comparing the position data with the velocity data set in advance. The schematics and PCB of the photoelectric radar servo control system were then designed according to the system requirements. Secondly, a PID algorithm is used to control the servo system: the speed data obtained from the photoelectric encoder are converted into position and speed values via a high-speed digital PID algorithm and coordinate models. Finally, a large number of experiments verify the reliability of the embedded servo control system's functions, the stability of the software, and the stability of the hardware circuit. The system also achieves a satisfactory user experience, supporting multi-mode motion, real-time motion status monitoring, online parameter changes, and other convenient features.
Adaptive Control Responses to Behavioral Perturbation Based Upon the Insect
2006-11-01
the legs. Visual Sensors Antennal Mechanosensors Antennal Chemosensors Descending Interneurons Controlling Yaw...animals, the antennae were moved back and forth several times with servo motors to identify units that respond to antennal movement in either direction or...role of antennal postures and movements in plume tracking behavior. To date, results have shown that male moths tracking plumes in different wind
Homography-based visual servo regulation of mobile robots.
Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash
2005-10-01
A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando
2009-01-01
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146
Scene Context Dependency of Pattern Constancy of Time Series Imagery
NASA Technical Reports Server (NTRS)
Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur
2008-01-01
A fundamental element of future generic pattern recognition technology is the ability to extract similar patterns for the same scene despite wide ranging extraneous variables, including lighting, turbidity, sensor exposure variations, and signal noise. In the process of demonstrating pattern constancy of this kind for retinex/visual servo (RVS) image enhancement processing, we found that the pattern constancy performance depended somewhat on scene content. Most notably, the scene topography and, in particular, the scale and extent of the topography in an image, affects the pattern constancy the most. This paper will explore these effects in more depth and present experimental data from several time series tests. These results further quantify the impact of topography on pattern constancy. Despite this residual inconstancy, the results of overall pattern constancy testing support the idea that RVS image processing can be a universal front-end for generic visual pattern recognition. While the effects on pattern constancy were significant, the RVS processing still does achieve a high degree of pattern constancy over a wide spectrum of scene content diversity, and wide ranging extraneous variations in lighting, turbidity, and sensor exposure.
NASA Astrophysics Data System (ADS)
Tyliszczak, T.; Hitchcock, P.; Kilcoyne, A. L. D.; Ade, H.; Hitchcock, A. P.; Fakra, S.; Steele, W. F.; Warwick, T.
2002-03-01
Two new scanning x-ray transmission microscopes are being built at beamline 5.3.2 and beamline 7.0 of the Advanced Light Source that have novel aspects in their control and acquisition systems. Both microscopes use multiaxis laser interferometry to improve the precision of pixel location during imaging and energy scans as well as to remove image distortions. Beam line 5.3.2 is a new beam line where the new microscope will be dedicated to studies of polymers in the 250-600 eV energy range. Since this is a bending magnet beam line with lower x-ray brightness than undulator beam lines, special attention is given to the design not only to minimize distortions and vibrations but also to optimize the controls and acquisition to improve data collection efficiency. 5.3.2 microscope control and acquisition is based on a PC computer running WINDOWS 2000. All mechanical stages are moved by stepper motors with rack mounted controllers. A dedicated counter board is used for counting and timing and a multi-input/output board is used for analog acquisition and control of the focusing mirror. A three axis differential laser interferometer is being used to improve stability and precision by careful tracking of the relative positions of the sample and zone plate. Each axis measures the relative distance between a mirror placed on the sample stage and a mirror attached to the zone plate holder. Agilent Technologies HP 10889A servo-axis interferometer boards are used. While they were designed to control servo motors, our tests show that they can be used to directly control the piezo stage. The use of the interferometer servo-axis boards provides excellent point stability for spectral measurements. The interferometric feedback also provides active vibration isolation which reduces deleterious impact of mechanical vibrations up to 20-30 Hz. It also can improve the speed and precision of image scans. Custom C++ software has been written to provide user friendly control of the microscope and integration with visual light microscopy indexing of the samples. The beam line 7.0 microscope upgrade is a new design which will replace the existing microscope. The design is similar to that of beam line 5.3.2, including interferometric position encoding. However the acquisition and control is based on VXI systems, a Sun computer, and LABVIEW™ software. The main objective of the BL 7.0 microscope upgrade is to achieve precise image scans at very high speed (pixel dwells as short as 10 μs) to take full advantage of the high brightness of the 7.0 undulator beamline. Results of tests and a discussion of the benefits of our scanning microscope designs will be presented.
NASA Astrophysics Data System (ADS)
Yu, Shi Jing; Fajeau, Emma; Liu, Lin Qiao; Jones, David J.; Madison, Kirk W.
2018-02-01
In this work, we address the advantages, limitations, and technical subtleties of employing field programmable gate array (FPGA)-based digital servos for high-bandwidth feedback control of lasers in atomic, molecular, and optical physics experiments. Specifically, we provide the results of benchmark performance tests in experimental setups including noise, bandwidth, and dynamic range for two digital servos built with low and mid-range priced FPGA development platforms. The digital servo results are compared to results obtained from a commercially available state-of-the-art analog servo using the same plant for control (intensity stabilization). The digital servos have feedback bandwidths of 2.5 MHz, limited by the total signal latency, and we demonstrate improvements beyond the transfer function offered by the analog servo including a three-pole filter and a two-pole filter with phase compensation to suppress resonances. We also discuss limitations of our FPGA-servo implementation and general considerations when designing and using digital servos.
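A hedged sketch of the kind of IIR loop filter such digital servos implement: a single biquad section, which can realise a two-pole low-pass with a zero for phase compensation. The coefficients below are stable placeholders for illustration, not the paper's filter design.

```python
class Biquad:
    """Direct-form-I biquad: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""

    def __init__(self, b0, b1, b2, a1, a2):
        self.b = (b0, b1, b2)
        self.a = (a1, a2)
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0

    def step(self, x):
        b0, b1, b2 = self.b
        a1, a2 = self.a
        y = b0 * x + b1 * self.x1 + b2 * self.x2 - a1 * self.y1 - a2 * self.y2
        self.x1, self.x2 = x, self.x1
        self.y1, self.y2 = y, self.y1
        return y

# Placeholder coefficients: a double pole at z = 0.8 plus a compensating zero.
loop_filter = Biquad(b0=0.2, b1=-0.18, b2=0.0, a1=-1.6, a2=0.64)
error_samples = [1.0, 0.8, 0.5, 0.2, 0.0]
control = [loop_filter.step(e) for e in error_samples]
print(control)
```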
NASA Astrophysics Data System (ADS)
Brown, T.; Borevitz, J. O.; Zimmermann, C.
2010-12-01
We have developed a camera system that can record hourly, gigapixel (multi-billion pixel) scale images of an ecosystem in a 360x90 degree panorama. The “Gigavision” camera system is solar-powered and can wirelessly stream data to a server. Quantitative data collection from multiyear timelapse gigapixel images is facilitated through an innovative web-based toolkit for recording time-series data on developmental stages (phenology) from any plant in the camera’s field of view. Gigapixel images enable time-series recording of entire landscapes with a resolution sufficient to record phenology from a majority of individuals in entire populations of plants. When coupled with next generation sequencing, quantitative population genomics can be performed in a landscape context linking ecology and evolution in situ and in real time. The Gigavision camera system achieves gigapixel image resolution by recording rows and columns of overlapping megapixel images. These images are stitched together into a single gigapixel resolution image using commercially available panorama software. Hardware consists of a 5-18 megapixel resolution DSLR or Network IP camera mounted on a pair of heavy-duty servo motors that provide pan-tilt capabilities. The servos and camera are controlled with a low-power Windows PC. Servo movement, power switching, and system status monitoring are enabled with Phidgets-brand sensor boards. System temperature, humidity, power usage, and battery voltage are all monitored at 5 minute intervals. All sensor data is uploaded via cellular or 802.11 wireless to an interactive online interface for easy remote monitoring of system status. Systems with direct internet connections upload the full sized images directly to our automated stitching server where they are stitched and available online for viewing within an hour of capture. Systems with cellular wireless upload an 80 megapixel “thumbnail” of each larger panorama and full-sized images are manually retrieved at bi-weekly intervals. Our longer-term goal is to make gigapixel time-lapse datasets available online in an interactive interface that layers plant-level phenology data with gigapixel resolution images, genomic sequence data from individual plants, and weather and other abiotic sensor data. Co-visualization of all of these data types provides researchers with a powerful new tool for examining complex ecological interactions across scales from the individual to the ecosystem. We will present detailed phenostage data from more than 100 plants of multiple species from our Gigavision timelapse camera at our “Big Blowout East” field site in the Indiana Dunes State Park, IN. This camera has been recording three to four 700 million pixel images a day since February 28, 2010. The camera field of view covers an area of about 7 hectares resulting in an average image resolution of about 1 pixel per centimeter over the entire site. We will also discuss some of the many technological challenges with developing and maintaining these types of hardware systems, collecting quantitative data from gigapixel resolution time-lapse data and effectively managing terabyte-sized datasets of millions of images.
Noncontact optical motion sensing for real-time analysis
NASA Astrophysics Data System (ADS)
Fetzer, Bradley R.; Imai, Hiromichi
1990-08-01
The adaptation of an image dissector tube (IDT) within the OPTFOLLOW system provides high resolution displacement measurement of a light discontinuity. Due to the high speed response of the IDT and the advanced servo loop circuitry, the system is capable of real time analysis of the object under test. The image of the discontinuity may be contoured by direct or reflected light and ranges spectrally within the field of visible light. The image is monitored to 500 kHz through a lens configuration which transposes the optical image upon the photocathode of the IDT. The photoelectric effect accelerates the resultant electrons through a photomultiplier and an enhanced current is emitted from the anode. A servo loop controls the electron beam, continually centering it within the IDT using magnetic focusing of deflection coils. The output analog voltage from the servo amplifier is thereby proportional to the displacement of the target. The system is controlled by a microprocessor with a 32kbyte memory and provides a digital display as well as instructional readout on a color monitor allowing for offset image tracking and automatic system calibration.
Phase-Division-Based Dynamic Optimization of Linkages for Drawing Servo Presses
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Gang; Wang, Li-Ping; Cao, Yan-Ke
2017-11-01
Existing linkage-optimization methods are designed for mechanical presses; few can be directly used for servo presses, so development of the servo press is limited. Based on the complementarity of linkage optimization and motion planning, a phase-division-based linkage-optimization model for a drawing servo press is established. Considering the motion-planning principles of a drawing servo press, and taking account of work rating and efficiency, the constraints of the optimization model are constructed. Linkage is optimized in two modes: use of either constant eccentric speed or constant slide speed in the work segments. The performances of optimized linkages are compared with those of a mature linkage SL4-2000A, which is optimized by a traditional method. The results show that the work rating of a drawing servo press equipped with linkages optimized by this new method improved and the root-mean-square torque of the servo motors is reduced by more than 10%. This research provides a promising method for designing energy-saving drawing servo presses with high work ratings.
Multisensory visual servoing by a neural network.
Wei, G Q; Hirzinger, G
1999-01-01
Conventional computer vision methods for determining a robot's end-effector motion based on sensory data need sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). This involves many computations and even some difficulties, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem without any calibration. Two kinds of sensory data, namely, camera images and laser range data, are used as the input to a multilayer feedforward network to associate the direct transformation from the sensory data to the required motions. This provides a practical sensor fusion method. Using a recursive motion strategy and in terms of a network correction, we relax the requirement for the exactness of the learned transformation. Another important feature of our work is that the goal position can be changed without having to do network retraining. Experimental results show the effectiveness of our method.
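A minimal sketch of the recursive motion strategy described: apply only part of the predicted motion, re-observe, and repeat, so the learned sensory-to-motion mapping need not be exact. To keep the sketch self-contained, the trained network is replaced here by a deliberately inexact linear map; the goal, the step fraction, and the perturbation are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the learned sensory-to-motion mapping: a deliberately inexact
# transformation (identity plus a random perturbation) from pose error to motion.
A = np.eye(3) + 0.2 * rng.normal(size=(3, 3))

def predict_motion(error_features):
    return A @ error_features

goal = np.array([0.3, -0.2, 0.1])
pose = np.zeros(3)
for _ in range(30):
    # Recursive motion strategy: apply a fraction of the predicted motion, then
    # re-observe the remaining error, so mapping errors are corrected over time.
    pose = pose + 0.3 * predict_motion(goal - pose)
print(np.round(pose, 4))   # approaches the goal despite the inexact mapping
```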
Skolnick, M L; Matzuk, T
1978-08-01
This paper describes a new real-time servo-controlled sector scanner that produces high-resolution images similar to phased-array systems, but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. Its unique feature is the transducer head, which contains a single moving part: the transducer. Frame rates vary from 0 to 30 frames per second and the sector angle from 0 to 60 degrees. Abdominal applications include differentiation of vascular structures, detection of small masses, imaging of diagonally oriented organs, survey scanning, and demonstration of regions difficult to image with contact scanners. Cardiac uses are also described.
Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.
López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth
2010-08-01
In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.
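A hedged sketch of the front end such homography-based schemes share: estimating the homography between current and goal images from matched points (OpenCV is an assumed toolchain here) and forming a simple error directly from individual homography entries. The toy proportional law and synthetic points below are illustrative assumptions; the paper's rotation, straight-line, and spiral control laws and their switching conditions are not reproduced.

```python
import numpy as np
import cv2

def homography_error(pts_current, pts_goal):
    """Estimate H mapping current-image points to goal-image points and
    return a simple error built from individual homography entries."""
    H, _ = cv2.findHomography(pts_current, pts_goal, cv2.RANSAC)
    H = H / H[2, 2]                      # normalise so h33 = 1
    # Illustrative error: at the goal pose H -> identity, so drive the
    # translation-like entries h13, h23 and (h11 - h22) to zero.
    return np.array([H[0, 2], H[1, 2], H[0, 0] - H[1, 1]])

def control_from_entries(e, gains=(0.5, 0.5, 1.0)):
    """Toy proportional law on homography entries (v, w for a differential drive)."""
    k1, k2, k3 = gains
    v = -k1 * e[1]                       # forward velocity from the "vertical" entry
    w = -k2 * e[0] - k3 * e[2]           # angular velocity from the remaining entries
    return v, w

# Synthetic matched points (assumed pixel coordinates) just to exercise the code.
pts_goal = np.float32([[100, 100], [200, 100], [200, 200], [100, 200], [150, 160]])
pts_current = pts_goal + np.float32([5, -3])   # goal points shifted in the current image
print(control_from_entries(homography_error(pts_current, pts_goal)))
```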
Advanced telepresence surgery system development.
Jensen, J F; Hill, J W
1996-01-01
SRI International is currently developing a prototype remote telepresence surgery system, for the Advanced Research Projects Agency (ARPA), that will bring life-saving surgical care to wounded soldiers in the zone of combat. Remote surgery also has potentially important applications in civilian medicine. In addition, telepresence will find wide medical use in local surgery, in endoscopic, laparoscopic, and microsurgery applications. Key elements of the telepresence technology now being developed for ARPA, including the telepresence surgeon's workstation (TSW) and associated servo control systems, will have direct application to these areas of minimally invasive surgery. The TSW technology will also find use in surgical training, where it will provide an immersive visual and haptic interface for interaction with computer-based anatomical models. In this paper, we discuss our ongoing development of the MEDFAST telesurgery system, focusing on the TSW man-machine interface and its associated servo control electronics.
MRI-Compatible Pneumatic Robot for Transperineal Prostate Needle Placement
Fischer, Gregory S.; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; DiMaio, Simon P.; Tempany, Clare M.; Hata, Nobuhiko; Fichtinger, Gabor
2010-01-01
Magnetic resonance imaging (MRI) can provide high-quality 3-D visualization of prostate and surrounding tissue, thus granting potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. However, the benefits cannot be readily harnessed for interventional procedures due to difficulties that surround the use of high-field (1.5T or greater) MRI. The inability to use conventional mechatronics and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intraprostatic needle placement inside closed high-field MRI scanners. MRI compatibility of the robot has been evaluated under 3T MRI using standard prostate imaging sequences and average SNR loss is limited to 5%. Needle alignment accuracy of the robot under servo pneumatic control is better than 0.94 mm rms per axis. The complete system workflow has been evaluated in phantom studies with accurate visualization and targeting of five out of five 1 cm targets. The paper explains the robot mechanism and controller design, the system integration, and presents results of preliminary evaluation of the system. PMID:21057608
Theoretical and Experimental Study of Light Shift in a CPT-Based RB Vapor Cell Frequency Standard
2001-01-01
Questions and Answers ROBERT LUTWAK (Datum): When you servo the microwave power to eliminate the light shift, what do you servo to? To what are you...leveling that signal? MIAO ZHU: Do you mean what I servo to or where did I do the servo? LUTWAK: What is the error signal that determines the TR
A servo controlled gradient loading triaxial model test system for deep-buried cavern.
Chen, Xu-guang; Zhang, Qiang-yong; Li, Shu-cai
2015-10-01
A servo controlled gradient loading model test system is developed to simulate the gradient geostress in deep-buried cavern. This system consists of the gradient loading apparatus, the digital servo control device, and the measurement system. Among them, the gradient loading apparatus is the main component which is used for exerting load onto the model. This loading apparatus is placed inside the counterforce wall/beam and is divided to several different loading zones, with each loading zone independently controlled. This design enables the gradient loading. Hence, the "real" geostress field surrounding the deep-buried cavern can be simulated. The loading or unloading process can be controlled by the human-computer interaction machines, i.e., the digital servo control system. It realizes the automation and visualization of model loading/unloading. In addition, this digital servo could control and regulate hydraulic loading instantaneously, which stabilizes the geostress onto the model over a long term. During the loading procedure, the collision between two adjacent loading platens is also eliminated by developing a guide frame. This collision phenomenon is induced by the volume shrinkage of the model when compressed in true 3D state. In addition, several accurate measurements, including optical and grating-based methods, are adopted to monitor the small deformation of the model. Hence, the distortion of the model could be accurately measured. In order to validate the performance of this innovative model test system, a 3D geomechanical test was conducted on a simulated deep-buried underground reservoir. The result shows that the radial convergence increases rapidly with the release of the stress in the reservoir. Moreover, the deformation increases with the increase of the gas production rate. This observation is consistent with field observations in petroleum engineering. The system is therefore capable of testing deep-buried engineering structures.
NASA Astrophysics Data System (ADS)
Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.
2017-06-01
This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The elements affecting the 3D reconstruction are discussed, and the overall result of the analysis is presented for the imaging platform prototype.
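A minimal sketch of the silhouette-based (visual hull) idea: a voxel is kept only if it projects inside the object silhouette in every view. The orthographic projections, synthetic sphere silhouettes, and grid size are illustrative assumptions; a real pipeline would use the calibrated camera poses from the servo-driven platform, and the visual hull is always an overestimate of the true shape.

```python
import numpy as np

N = 64                                    # voxel grid resolution (assumed)
axis = np.linspace(-1, 1, N)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
occupied = np.ones((N, N, N), dtype=bool)

def sphere_silhouette(u, v, radius=0.6):
    """Synthetic silhouette: the orthographic image of a sphere is a disc."""
    return u ** 2 + v ** 2 <= radius ** 2

# Three orthographic "views" along the coordinate axes stand in for the
# sequential camera angles; each view carves away voxels outside its silhouette.
views = [(X, Y), (Y, Z), (X, Z)]
for u, v in views:
    occupied &= sphere_silhouette(u, v)

volume = occupied.sum() * (2.0 / N) ** 3
print(f"carved volume ~ {volume:.3f} (true sphere volume ~ {4/3*np.pi*0.6**3:.3f})")
```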
Integrated Cuing Requirements (ICR) Study: Demonstration Data Base and Users Guide.
1983-07-01
viewed with a servo-mounted television camera and used to provide a visual scene for an observer in an ATD. Modulation: Mathematically, the absolute... The impact of stationary scene details was also tested in this study; see Figure 33.5-1. (See the discussion of the impact of perceived distance on perceived size in Section 31.) Figure 33.4-1: Perceived Distance and Velocity of Self
PointCom: semi-autonomous UGV control with intuitive interface
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham
2008-04-01
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.
The research on visual industrial robot which adopts fuzzy PID control algorithm
NASA Astrophysics Data System (ADS)
Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye
2017-03-01
The control system of a six-degree-of-freedom visual industrial robot, based on multi-axis motion control cards and a PC, was researched. To handle the time-varying, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, which achieved a better control effect. In the vision system, a CCD camera acquires image signals and sends them to a video processing card; after processing, the PC controls the motion of the six joints through the motion control cards. Experiments show that the manipulator can work with a machine tool and the vision system to grasp, process, and verify workpieces, which is relevant to the manufacture of industrial robots.
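A hedged sketch of one common way an adaptive fuzzy PID of this general kind is realised: membership functions over the error magnitude fuzzify it into small/medium/large, and the defuzzified result scales the PID gains. The membership breakpoints, rule outputs, and gains are illustrative assumptions, not the authors' design.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def left_shoulder(x, a, b):   # 1 below a, ramping to 0 at b
    return max(0.0, min(1.0, (b - x) / (b - a)))

def right_shoulder(x, a, b):  # 0 below a, ramping to 1 at b
    return max(0.0, min(1.0, (x - a) / (b - a)))

def fuzzy_gain_scale(abs_error):
    """Fuzzify |error| and return a gain multiplier by weighted-average defuzzification."""
    memberships = {
        "small":  left_shoulder(abs_error, 0.2, 0.6),
        "medium": tri(abs_error, 0.2, 0.6, 1.0),
        "large":  right_shoulder(abs_error, 0.6, 1.0),
    }
    scale_for = {"small": 0.6, "medium": 1.0, "large": 1.5}   # assumed rule outputs
    num = sum(m * scale_for[name] for name, m in memberships.items())
    den = sum(memberships.values()) or 1.0
    return num / den

class FuzzyPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        s = fuzzy_gain_scale(abs(error))          # adapt the gains to the current error
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return s * (self.kp * error + self.ki * self.integral + self.kd * derivative)

pid = FuzzyPID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
print(pid.step(0.8), pid.step(0.4))
```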
Model-based nonlinear control of hydraulic servo systems: Challenges, developments and perspectives
NASA Astrophysics Data System (ADS)
Yao, Jianyong
2018-06-01
Hydraulic servo systems play a significant role in industry and usually act as a core element in control and power transmission. Although linear theory-based control methods are well established, advanced controller design for hydraulic servo systems to achieve high performance remains an unending pursuit along with the development of modern industry. Essential nonlinearity is a unique feature and makes model-based nonlinear control attractive, since it benefits from prior knowledge of the servo-valve-controlled hydraulic system. This paper presents a discussion of the challenges, the latest developments, and brief perspectives in model-based nonlinear control of hydraulic servo systems. Modelling uncertainty, including parametric uncertainty and time-varying disturbances, is a major challenge; specific requirements also raise ad hoc difficulties such as nonlinear friction during low-velocity tracking, severe disturbances, and periodic disturbances. To handle these challenges, nonlinear solutions including parameter adaptation, nonlinear robust control, state and disturbance observation, and backstepping design are proposed and integrated; theoretical analysis and numerous applications reveal their capability to solve the pertinent problems. Finally, some perspectives and associated research topics (measurement noise, constraints, inner valve dynamics, input nonlinearity, etc.) in nonlinear hydraulic servo control are briefly explored and discussed.
Fuzzy model-based servo and model following control for nonlinear systems.
Ohtake, Hiroshi; Tanaka, Kazuo; Wang, Hua O
2009-12-01
This correspondence presents servo and nonlinear model following controls for a class of nonlinear systems using the Takagi-Sugeno fuzzy model-based control approach. First, the construction method of the augmented fuzzy system for continuous-time nonlinear systems is proposed by differentiating the original nonlinear system. Second, the dynamic fuzzy servo controller and the dynamic fuzzy model following controller, which can make outputs of the nonlinear system converge to target points and to outputs of the reference system, respectively, are introduced. Finally, the servo and model following controller design conditions are given in terms of linear matrix inequalities. Design examples illustrate the utility of this approach.
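A hedged sketch of the Takagi-Sugeno idea underlying such designs: local linear models are blended by membership functions, and a parallel-distributed-compensation (PDC) law blends local state-feedback gains with the same memberships. The two local models, memberships, and gains below are toy assumptions and the sketch only shows regulation toward the origin; the paper's augmented-system servo/model-following design and its LMI conditions are not reproduced.

```python
import numpy as np

# Two local linear models x' = A_i x + B_i u blended by membership functions h_i(x1).
A = [np.array([[0.0, 1.0], [-1.0, -0.5]]),
     np.array([[0.0, 1.0], [-2.0, -0.3]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.5]])]
# Toy local state-feedback gains (assumed; the paper derives gains from LMI conditions).
K = [np.array([[2.0, 1.0]]), np.array([[3.0, 1.2]])]

def memberships(x1):
    """Rule weights from the premise variable x1, normalised to sum to one."""
    h1 = float(np.clip((1.0 - x1) / 2.0, 0.0, 1.0))
    return [h1, 1.0 - h1]

def step(x, dt=0.01):
    """One Euler step of the T-S plant under the blended (PDC) control law."""
    h = memberships(x[0, 0])
    u = sum(hi * (-Ki @ x) for hi, Ki in zip(h, K))
    xdot = sum(hi * (Ai @ x + Bi @ u) for hi, Ai, Bi in zip(h, A, B))
    return x + dt * xdot

x = np.array([[1.5], [0.0]])
for _ in range(2000):
    x = step(x)
print(np.round(x.ravel(), 4))   # state regulated toward the origin
```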
NASA Astrophysics Data System (ADS)
Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu
2017-02-01
In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To save hardware resources as far as possible, the mobile robot's wrist camera, rotated to face the stairs, is used as the only sensor. An ensemble heading-deviation detector is proposed to help the mobile robot correct its heading direction. To improve generalization ability, a multi-scale Gabor filter is used to pre-process the input image. The final deviation result is obtained by applying a majority-vote strategy to the outputs of all the classifiers. The experimental results show that the detector enables the mobile robot to adaptively correct its heading direction while climbing stairs.
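A hedged sketch of the two ingredients named in the abstract: a multi-scale Gabor filter bank applied to the input image (OpenCV is an assumed toolchain) and a majority vote over an ensemble of decisions. The kernel parameters, the random test image, and the stand-in classifier outputs are illustrative assumptions.

```python
import numpy as np
import cv2
from collections import Counter

def gabor_bank(image, scales=(7, 15, 31), orientations=4):
    """Filter the image with Gabor kernels at several scales and orientations."""
    responses = []
    for ksize in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 6.0,
                                        theta=theta, lambd=ksize / 2.0,
                                        gamma=0.5, psi=0.0)
            responses.append(cv2.filter2D(image, cv2.CV_32F, kernel))
    return responses

def majority_vote(labels):
    """Return the most common label among the ensemble's decisions."""
    return Counter(labels).most_common(1)[0][0]

# Toy grey-level image and stand-in classifier decisions about the heading deviation.
image = (np.random.default_rng(0).random((120, 160)) * 255).astype(np.uint8)
features = gabor_bank(image)
fake_decisions = ["left", "straight", "straight", "right", "straight"]
print(len(features), majority_vote(fake_decisions))
```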
NASA Astrophysics Data System (ADS)
Sun, Hong; Wu, Qian-zhong
2013-09-01
In order to improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed to address the tracking error and random drift of the gyroscope sensor. According to the principles of time-series analysis of random sequences, an AR model of the gyro random error is established and combined with a Kalman filter, so that the gyro output signals are filtered repeatedly. An ARM is used as the microcontroller, and the servo motor is controlled by a fuzzy PID full closed-loop algorithm, with lead-compensation and feed-forward links added to reduce the response lag to angle inputs: the feed-forward term makes the output follow the input closely, and the lead-compensation link shortens the response to input signals and thus reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) are used to observe the servo motor state in real time: the video module gathers video signals and sends them wirelessly to the host computer, which displays the motor running state in a Visual Basic 6.0 window. The main error sources are analysed in detail; a quantitative analysis of the errors contributed by the bandwidth and the gyro sensor makes the proportion of each error in the total error more intuitive and consequently helps to decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
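As a minimal sketch of the drift-filtering idea above, the gyro random error can be modelled as a first-order AR process b_k = a*b_{k-1} + w_k and estimated with a scalar Kalman filter; the coefficients below are illustrative, not those identified in the paper:

import numpy as np

def kalman_ar1_drift(gyro_meas, a=0.98, q=1e-6, r=1e-3):
    # State: drift b_k = a*b_{k-1} + w_k  (AR(1) model, var(w) = q)
    # Measurement: z_k = b_k + v_k        (var(v) = r)
    b_hat, p = 0.0, 1.0
    drift = []
    for z in gyro_meas:
        b_hat = a * b_hat                # predict
        p = a * p * a + q
        k = p / (p + r)                  # update
        b_hat += k * (z - b_hat)
        p = (1.0 - k) * p
        drift.append(b_hat)
    return np.array(drift)

# Hypothetical use: subtract the estimated drift from the raw rate before the
# fuzzy PID servo loop, e.g. rate_corrected = raw_rate - kalman_ar1_drift(raw_rate).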
Servo Platform Circuit Design of Pendulous Gyroscope Based on DSP
NASA Astrophysics Data System (ADS)
Tan, Lilong; Wang, Pengcheng; Zhong, Qiyuan; Zhang, Cui; Liu, Yunfei
2018-03-01
This work addresses the problem that, when the initial installation deviation of a certain type of pendulous gyroscope exceeds 40 degrees, the servo platform cannot keep up with the gyroscope speed in the coarse north-seeking phase. Taking the digital signal processor TMS320F28027 as the core and using an incremental digital PID algorithm, the circuit of the servo platform is designed. First, the hardware circuit is divided into three parts: the DSP minimum system, the motor driving circuit and the signal processing circuit. Then the mathematical model of the incremental digital PID algorithm is established and, based on this model, the PID control program is written in CCS3.3. Finally, a servo motor tracking control experiment is carried out, which shows that the design significantly improves the tracking ability of the servo platform and has good engineering practicality.
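For reference, the incremental (velocity-form) digital PID law used above computes a control increment from the last three errors rather than an absolute output; a minimal sketch with illustrative gains:

class IncrementalPID:
    # du_k = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2})
    def __init__(self, kp, ki, kd, u0=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_km1 = 0.0
        self.e_km2 = 0.0
        self.u = u0

    def step(self, error):
        du = (self.kp * (error - self.e_km1)
              + self.ki * error
              + self.kd * (error - 2.0 * self.e_km1 + self.e_km2))
        self.e_km2, self.e_km1 = self.e_km1, error
        self.u += du                     # only the increment is accumulated
        return self.u

# Hypothetical use in the platform loop: u = pid.step(gyro_angle - platform_angle)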
Optics derotator servo control system for SONG Telescope
NASA Astrophysics Data System (ADS)
Xu, Jin; Ren, Changzhi; Ye, Yu
2012-09-01
The Stellar Oscillations Network Group (SONG) is an initiative which aims at designing and building a ground-based network of 1 m telescopes dedicated to the study of phenomena occurring in the time domain. The Chinese standard node of SONG is an F/37 alt-az telescope with a 1 m diameter. The optics derotator control system of the SONG telescope adopts the development model of "Industrial Computer + UMAC Motion Controller + Servo Motor". The industrial computer is the core processing part of the motion control, the motion control card (UMAC) is in charge of the details of the motion control, and the servo amplifier accepts control commands from the UMAC and drives the servo motor. Position feedback comes from the encoder, forming a closed-loop control system. This paper describes in detail the hardware and software design of the optics derotator servo control system. In terms of hardware design, the principle, structure and control algorithm of the derotator servo system are analysed and explored. In terms of software design, the paper proposes a system software architecture based on object-oriented programming.
Servo-controlled intravital microscope system
NASA Technical Reports Server (NTRS)
Mansour, M. N.; Wayland, H. J.; Chapman, C. P. (Inventor)
1975-01-01
A microscope system is described for viewing an area of living body tissue that is rapidly moving, by maintaining the same area in the field of view and in focus. A focus-sensing portion of the system includes two video cameras at which the viewed image is projected, one camera being slightly in front of the image plane and the other slightly behind it. A focus-sensing circuit for each camera differentiates certain high-frequency components of the video signal, detects them and passes them through a low-pass filter, to provide dc focus signals whose magnitudes represent the degree of focus. An error signal equal to the difference between the focus signals drives a servo that moves the microscope objective so that an in-focus view is delivered to an image viewing/recording camera.
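A sketch of the focus-error principle described above: a high-frequency energy measure is computed for the two offset cameras and their difference serves as the servo error (the gradient-based focus measure below is an assumption standing in for the analog differentiate/detect/low-pass circuit):

import numpy as np

def focus_measure(frame):
    # High-frequency energy of a grayscale frame as a crude focus score.
    gx = np.diff(frame.astype(float), axis=1)
    gy = np.diff(frame.astype(float), axis=0)
    return (gx ** 2).mean() + (gy ** 2).mean()

def focus_error(frame_front, frame_behind):
    # Signed error between the two offset cameras; near zero when in focus.
    return focus_measure(frame_front) - focus_measure(frame_behind)

# In the described system this error (after low-pass filtering) drives the servo
# that moves the objective until the image plane sits midway between the cameras.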
The International Solid Earth Research Virtual Observatory
NASA Astrophysics Data System (ADS)
Fox, G.; Pierce, M.; Rundle, J.; Donnellan, A.; Parker, J.; Granat, R.; Lyzenga, G.; McLeod, D.; Grant, L.
2004-12-01
We describe the architecture and initial implementation of the International Solid Earth Research Virtual Observatory (iSERVO). This has been prototyped within the USA as SERVOGrid, and expansion is planned to Australia, China, Japan and other countries. We base our design on a globally scalable distributed "cyber-infrastructure" or Grid built around a Web Services-based approach consistent with the extended Web Service Interoperability approach. The Solid Earth Science Working Group of NASA has identified several challenges for Earth Science research. In order to investigate these, we need to couple numerical simulation codes and data mining tools to observational data sets. These observational data are now available on-line in internet-accessible forms, and the quantity of data is expected to grow explosively over the next decade. We architect iSERVO as a loosely federated Grid of Grids, with each country involved supporting a national Solid Earth Research Grid. The national Grid operations, possibly with dedicated control centers, are linked together to support iSERVO, where an international Grid control center may eventually be necessary. We address the difficult multi-administrative-domain security and ownership issues by exposing capabilities as services for which the risk of abuse is minimized. We support large-scale simulations within a single domain using service-hosted tools (mesh generation, data repository and sensor access, GIS, visualization). Simulations typically involve sequential or parallel machines in a single domain supported by cross-continent services. We use Web Services to implement a Service Oriented Architecture (SOA), using WSDL for service description and SOAP for message formats. These are augmented by UDDI, WS-Security, WS-Notification/Eventing and WS-ReliableMessaging in the WS-I+ approach. Support for the latter two capabilities will be available over the next 6 months from the NaradaBrokering messaging system. We augment these specifications with the powerful portlet architecture using WSRP and JSR168, supported by portal containers such as uPortal, WebSphere, and Apache JetSpeed2. The latter portal aggregates component user interfaces for each iSERVO service, allowing flexible customization of the user interface. We exploit the portlets produced by the NSF NMI (Middleware Initiative) OGCE activity. iSERVO also uses specifications from the Open Geographical Information Systems (GIS) Consortium (OGC), which defines a number of standards for modeling earth surface feature data and services for interacting with these data. The data models are expressed in the XML-based Geography Markup Language (GML), and the OGC service framework is being adapted to use the Web Service model. The SERVO prototype includes a GIS Grid that currently includes the core WMS and WFS (Map and Feature) services. We will follow best practice in the Grid and Web Service field and will adapt our technology as appropriate. For example, we expect to support services built on WS-RF when it is finalized and to make use of the database interfaces OGSA-DAI and their WS-I+ versions. Finally, we review advances in Web Service scripting (such as HPSearch) and workflow systems (such as GCF) and their applications to iSERVO.
Research on phase locked loop in optical memory servo system
NASA Astrophysics Data System (ADS)
Qin, Liqin; Ma, Jianshe; Zhang, Jianyong; Pan, Longfa; Deng, Ming
2005-09-01
A phase-locked loop (PLL) is a closed-loop automatic control system that can track the phase of an input signal and is widely applied in every area of electronic technology. This paper studies the phase-locked loop in the optical storage servo area. It introduces the configuration of the digital phase-locked loop and of the phase-locked servo system, describes the control theory, and analyses the system's stability. A phase-locked-loop experimental system for the optical disk spindle servo, based on a dedicated chip, was constructed. With a DC motor as the main controlled object, the system adopts phase-locked servo techniques and a digital signal processor (DSP) to achieve constant linear velocity (CLV) control of the optical spindle motor. The paper analyses the factors that affect the stability of the phase-locked loop in the spindle servo system and discusses their effect on the optical disk readout signal and jitter.
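A minimal digital PLL sketch of the kind discussed above, with a phase detector, a PI loop filter and a numerically controlled oscillator; the gains are illustrative and not those of the DSP implementation described:

import numpy as np

def digital_pll(phase_in, kp=0.2, ki=0.01):
    # Track the phase of an input phase sequence (radians, one value per sample).
    phase_nco, freq = 0.0, 0.0
    tracked = []
    for ph in phase_in:
        err = np.angle(np.exp(1j * (ph - phase_nco)))   # wrapped phase detector
        freq += ki * err                                 # PI loop filter
        phase_nco += freq + kp * err                     # NCO update
        tracked.append(phase_nco)
    return np.array(tracked)

# In a CLV spindle servo the input phase would come from the readout clock and
# the loop-filtered frequency term would set the spindle speed command.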
Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela
2016-02-24
The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.
Permanent magnet synchronous motor servo system control based on μC/OS
NASA Astrophysics Data System (ADS)
Shi, Chongyang; Chen, Kele; Chen, Xinglong
2015-10-01
When an Opto-Electronic Tracking system operates in complex environments, every subsystem must operate efficiently and stably. As an important part of such a system, the performance of the PMSM (Permanent Magnet Synchronous Motor) servo system greatly affects the tracking accuracy and speed [1][2]. This paper applies the embedded real-time operating system μC/OS to the control of the PMSM servo system, implements the SVPWM (Space Vector Pulse Width Modulation) algorithm in the servo system, and optimizes its stability. Considering the characteristics of the Opto-Electronic Tracking system, μC/OS is extended with software redundancy processes, remote debugging and upgrading. As a result, the Opto-Electronic Tracking system performs efficiently and stably.
Cine-servo lens technology for 4K broadcast and cinematography
NASA Astrophysics Data System (ADS)
Nurishi, Ryuji; Wakazono, Tsuyoshi; Usui, Fumiaki
2015-09-01
Central to the rapid evolution of 4K image capture technology in the past few years, deployment of large-format cameras with Super35mm Single Sensors is increasing in TV production for diverse shows such as dramas, documentaries, wildlife, and sports. While large format image capture has been the standard in the cinema world for quite some time, the recent experiences within the broadcast industry have revealed a variety of requirement differences for large format lenses compared to those of the cinema industry. A typical requirement for a broadcast lens is a considerably higher zoom ratio in order to avoid changing lenses in the middle of a live event, which is mostly not the case for traditional cinema productions. Another example is the need for compact size, light weight, and servo operability for a single camera operator shooting in a shoulder-mount ENG style. On the other hand, there are new requirements that are common to both worlds, such as smooth and seamless change in angle of view throughout the long zoom range, which potentially offers new image expression that never existed in the past. This paper will discuss the requirements from the two industries of cinema and broadcast, while at the same time introducing the new technologies and new optical design concepts applied to our latest "CINE-SERVO" lens series which presently consists of two models, CN7x17KAS-S and CN20x50IAS-H. It will further explain how Canon has realized 4K optical performance and fast servo control while simultaneously achieving compact size, light weight and high zoom ratio, by referring to patent-pending technologies such as the optical power layout, lens construction, and glass material combinations.
GMRT servo system : overview of the upgrades
NASA Astrophysics Data System (ADS)
Bagde, Shailendra
The servo system of the GMRT, designed in the early 1990s by BARC and subsequently commissioned in the antennas by 1996, is a classical nested loop control system. Some of its major subsystems are undergoing significant upgrades to increase reliability, reduce maintenance and overcome obsolescence of components. These include the solid-state interlock system, a PC104 based servo control computer, and advanced BLDC drives and motors.
Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor
Wei, Ching-Chuan; Song, Yu-Chang; Chang, Chia-Chi; Lin, Chuan-Bi
2016-01-01
Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel perpendicularly tracks the sun, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is logical to assume that the points of the brightest region in the sky image represent the location of the sun. Then, the center of the brightest region is assumed to be the solar center and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information on the sun center is sent to the embedded processor to control two servo motors that are capable of moving both horizontally and vertically to track the sun. In comparison with the existing sun tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day and building shelter. The practical sun tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real time. PMID:27898002
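A sketch of the brightest-region approach described above: threshold near the frame maximum, take the centroid as the solar centre, and convert the pixel offset from the image centre into pan/tilt servo increments (the threshold margin and mapping gains are placeholders):

import numpy as np

def sun_center(gray, margin=10):
    # Centroid (row, col) of the pixels within `margin` of the frame maximum.
    mask = gray >= float(gray.max()) - margin
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def servo_increments(gray, k_pan=0.05, k_tilt=0.05):
    # Map the pixel error from the image centre to pan/tilt steps (degrees).
    cy, cx = sun_center(gray)
    h, w = gray.shape
    return k_pan * (cx - w / 2.0), k_tilt * (cy - h / 2.0)

# The two servo motors are stepped by these increments so that the panel normal
# keeps pointing at the estimated solar centre.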
Panoramic optical-servoing for industrial inspection and repair
NASA Astrophysics Data System (ADS)
Sallinger, Christian; O'Leary, Paul; Retschnig, Alexander; Kammerhofer, Martin
2004-05-01
Recently, specialized robots were introduced to perform the tasks of inspection and repair in large cylindrical structures such as ladles, melting furnaces and converters. This paper reports on the image processing system and optical servoing for one such robot. A panoramic image of the vessel's inner surface is produced by performing a coordinated robot motion and image acquisition. The level of projective distortion is minimized by acquiring a high density of images. Normalized phase correlation, calculated via the 2D Fourier transform, is used to compute the shift between the single images. The narrow strips from the dense image map are then stitched together to build the panorama. The mapping between the panoramic image and the positioning of the robot is established during the stitching of the images, which enables optical feedback. The robot's operator can locate a defect on the surface by selecting the corresponding area of the image, and calculation of the forward and inverse kinematics enables the robot to move automatically to the location on the surface requiring repair. Experimental results using a standard 6R industrial robot have shown the full functionality of the system concept. Finally, test measurements were carried out successfully in a ladle at a temperature of 1100 °C.
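The shift-estimation step described above, normalized phase correlation via the 2-D FFT, can be sketched compactly as follows (integer shifts only; windowing and sub-pixel refinement are omitted):

import numpy as np

def phase_correlation_shift(img_a, img_b, eps=1e-9):
    # Estimate the integer (dy, dx) translation between two equally sized images.
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + eps        # keep phase only (normalization)
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > img_a.shape[0] // 2:                    # map wrap-around peaks to negative shifts
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

# Successive image strips are then placed according to these shifts to build the
# panoramic map of the vessel's inner surface.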
Visual Servoing via Navigation Functions
2002-02-06
kernel was adequate). The PC is equipped with a Data Translation DT3155 frame grabber connected to a standard 30 Hz NTSC video camera. Using MATLAB's C...
Servo action in the human thumb.
Marsden, C D; Merton, P A; Morton, H B
1976-01-01
1. The servo-like properties of muscle in healthy human subjects have been studied by interfering unexpectedly with flexion movements of the top joint of the thumb. This movement is carried out by the flexor pollicis longus muscle only. 2. The movements were standardized in rate by giving the subject a tracking task. They started off against a constant torque load offered by an electric motor. 3. In some movements the load remained constant, but in others, in mid-course, perturbations were introduced at random. Either the movement was halted, or released and allowed to accelerate by reducing the load, or reversed by suddenly increasing the current in the motor, so stretching the muscle. 4. Usually eight or sixteen responses to each kind of perturbation and a similar number of controls against a constant load were averaged. 5. Muscle activity was recorded as the electromyogram from surface electrodes over the belly of the long flexor in the lower forearm. Action potentials were usually full-wave rectified and integrated. 6. About 50 msec after a perturbation the muscle's activity alters in such a sense as to tend to compensate for the perturbation, i.e. it increases after a halt or a stretch and decreases after a release. The latency is similar in each case. 7. These responses are interpreted as manifestations of automatic servo action based on the stretch reflex. They are considered to be too early to be voluntary. 8. This interpretation was supported by measuring voluntary reaction times to perturbations under tracking conditions. They were found to be 90 msec or longer. 9. When the initial load was increased by a factor of 10, the servo responses were all scaled up likewise. Thus to a first approximation the gain of the servo is proportional to initial load. 10. It follows that in relaxed muscle the gain should be zero. This was confirmed by showing that stretching a relaxed muscle gives no reflex, or only a small one. 11. Gain appears to be determined by the level of muscle activation as determined by the effort made by the subject, rather than by the actual pressure exerted by the thumb. 12. Thus in fatigued muscle gain is boosted as the muscle has to be activated more strongly to keep up the same force output. The net effect is to compensate for fatigue and maintain the performance of the servo. 13. The Discussion centres on the implications of gain control in the servo. For a start, if the gain of the stretch reflex arc is zero in relaxed muscle, contractions cannot be initiated via the stretch reflex by simply causing the spindles to contract, as proposed on the original 'follow-up' servo theory. Images Fig. 1 PMID:133238
Parallel robot for micro assembly with integrated innovative optical 3D-sensor
NASA Astrophysics Data System (ADS)
Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer
2002-10-01
Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D-vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High-accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase the accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using a single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of the workpiece and the gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.
Autonomous Rock Tracking and Acquisition from a Mars Rover
NASA Technical Reports Server (NTRS)
Maimone, Mark W.; Nesnas, Issa A.; Das, Hari
1999-01-01
Future Mars exploration missions will perform two types of experiments: science instrument placement for close-up measurement, and sample acquisition for return to Earth. In this paper we describe algorithms we developed for these tasks, and demonstrate them in field experiments using a self-contained Mars rover prototype, the Rocky 7 rover. Our algorithms perform visual servoing on an elevation map instead of image features, because the latter are subject to abrupt scale changes during the approach. This allows us to compensate for the poor odometry that results from motion on loose terrain. We demonstrate the successful grasp of a 5 cm long rock over 1 m away using 103-degree field-of-view stereo cameras, and placement of a flexible mast on a rock outcropping over 5 m away using 43-degree FOV stereo cameras.
Study of Servo Press with a Flywheel
NASA Astrophysics Data System (ADS)
Tso, Pei-Lum; Li, Cheng-Ho
The servo press with a flywheel is able to provide flexible motions with energy-saving merit, but its true potential has not been thoroughly studied and verified. This paper focuses on such a "hybrid-driven" servo press and investigates its stamping capacity and the energy distribution between the flywheel and the servomotor. The capacity is derived based on the principle of energy conservation, and an evaluation method using a capacity-percentage plane is proposed. A case study illustrates that the stamping capacity depends strongly on the programmed punch motion, so a capacity prediction is always necessary when applying this kind of servo press. The energy distribution is validated by blanking experiments, and the results indicate that the servomotor needs to provide only about 15% of the flywheel torque, corresponding to 12% of the total stamping energy. This confirms that servomotor power is significantly saved in comparison with conventional servo presses.
Research Based on AMESim of Electro-hydraulic Servo Loading System
NASA Astrophysics Data System (ADS)
Li, Jinlong; Hu, Zhiyong
2017-09-01
The electro-hydraulic servo loading system is a subject studied by many scholars in the field of simulation and control at home and abroad. It is a loading device that simulates the aerodynamic moments and other forces acting on an object during motion, so that the stresses on the object under dynamic loads can be analysed under laboratory conditions. In this paper an electro-hydraulic servo loading system is designed in AMESim, PID control is used to configure the parameters of the control system, loading processes under different conditions are completed, the design parameters are optimized, and the dynamic performance of the loading system is improved.
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation like path planning, localization, obstacle avoidance, and map update by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot, in the proposed system performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust to recover from ‘driver-lost’ scenario which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service-time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, when hitchhiking should be allowed and when not, through experimental results. PMID:28809803
Adaptive Servo-Ventilation for Central Sleep Apnea in Systolic Heart Failure.
Cowie, Martin R; Woehrle, Holger; Wegscheider, Karl; Angermann, Christiane; d'Ortho, Marie-Pia; Erdmann, Erland; Levy, Patrick; Simonds, Anita K; Somers, Virend K; Zannad, Faiez; Teschler, Helmut
2015-09-17
Central sleep apnea is associated with poor prognosis and death in patients with heart failure. Adaptive servo-ventilation is a therapy that uses a noninvasive ventilator to treat central sleep apnea by delivering servo-controlled inspiratory pressure support on top of expiratory positive airway pressure. We investigated the effects of adaptive servo-ventilation in patients who had heart failure with reduced ejection fraction and predominantly central sleep apnea. We randomly assigned 1325 patients with a left ventricular ejection fraction of 45% or less, an apnea-hypopnea index (AHI) of 15 or more events (occurrences of apnea or hypopnea) per hour, and a predominance of central events to receive guideline-based medical treatment with adaptive servo-ventilation or guideline-based medical treatment alone (control). The primary end point in the time-to-event analysis was the first event of death from any cause, lifesaving cardiovascular intervention (cardiac transplantation, implantation of a ventricular assist device, resuscitation after sudden cardiac arrest, or appropriate lifesaving shock), or unplanned hospitalization for worsening heart failure. In the adaptive servo-ventilation group, the mean AHI at 12 months was 6.6 events per hour. The incidence of the primary end point did not differ significantly between the adaptive servo-ventilation group and the control group (54.1% and 50.8%, respectively; hazard ratio, 1.13; 95% confidence interval [CI], 0.97 to 1.31; P=0.10). All-cause mortality and cardiovascular mortality were significantly higher in the adaptive servo-ventilation group than in the control group (hazard ratio for death from any cause, 1.28; 95% CI, 1.06 to 1.55; P=0.01; and hazard ratio for cardiovascular death, 1.34; 95% CI, 1.09 to 1.65; P=0.006). Adaptive servo-ventilation had no significant effect on the primary end point in patients who had heart failure with reduced ejection fraction and predominantly central sleep apnea, but all-cause and cardiovascular mortality were both increased with this therapy. (Funded by ResMed and others; SERVE-HF ClinicalTrials.gov number, NCT00733343.).
Adaptive fuzzy PID control of hydraulic servo control system for large axial flow compressor
NASA Astrophysics Data System (ADS)
Wang, Yannian; Wu, Peizhi; Liu, Chengtao
2017-09-01
To improve the stability of a large axial-flow compressor, an efficient intelligent hydraulic servo control system is designed and implemented. An adaptive fuzzy PID control algorithm is used to control the position of the hydraulic servo cylinder steadily, which overcomes the drawback that fixed PID parameters must be re-tuned for different applications. The simulation and test results show that the system has better dynamic properties and stable steady-state performance.
NASA Astrophysics Data System (ADS)
Tan, Baolin; Mapps, Desmond J.; Pan, Genhua; Robinson, Paul
1996-03-01
A disk with a data, servo and isolation layer has been fabricated, with the data layer magnetized along the circumferential direction. The servo layer was recorded with a servo pattern magnetized along the radial direction. A continuous servo signal is obtained and the servo does not occupy any data area. In this new method, the servo and data bits can share the media surface area on the disk without interference. Track following on 0.7 μm tracks has been demonstrated using the new servo method on longitudinal rigid disks.
Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging
Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.
2013-01-01
A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148
A computer-based servo system for controlling isotonic contractions of muscle.
Smith, J P; Barsotti, R J
1993-11-01
We have developed a computer-based servo system for controlling isotonic releases in muscle. This system is a composite of commercially available devices: an IBM personal computer, an analog-to-digital (A/D) board, an Akers AE801 force transducer, and a Cambridge Technology motor. The servo loop controlling the force clamp is generated by computer via the A/D board, using a program written in QuickBASIC 4.5. Results are shown that illustrate the ability of the system to clamp the force generated by either skinned cardiac trabeculae or single rabbit psoas fibers down to the resolution of the force transducer within 4 ms. This rate is independent of the level of activation of the tissue and the size of the load imposed during the release. The key to the effectiveness of the system consists of two algorithms that are described in detail. The first is used to calculate the error signal to hold force to the desired level. The second algorithm is used to calculate the appropriate gain of the servo for a particular fiber and the size of the desired load to be imposed. The results show that the described computer-based method for controlling isotonic releases in muscle represents a good compromise between simplicity and performance and is an alternative to the custom-built digital/analog servo devices currently being used in studies of muscle mechanics.
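A rough sketch of the two algorithms outlined above, under stated assumptions: a proportional error term drives the motor so that the measured force settles on the commanded load, and the loop gain is scaled to the particular fibre and to the size of the imposed load (the scaling rule and units are placeholders, not those of the original QuickBASIC program):

def clamp_gain(fiber_stiffness, target_load, base_gain=1.0):
    # Scale the servo gain for a particular fibre and desired load (arbitrary units).
    return base_gain * target_load / max(fiber_stiffness, 1e-9)

def force_clamp_step(force_measured, target_load, motor_pos, gain):
    # One servo iteration: move the motor to drive the force error toward zero.
    error = force_measured - target_load
    return motor_pos - gain * error      # release length if force is too high

# Called once per A/D sample; with an appropriate gain the force settles on the
# commanded load within a few milliseconds, as reported above.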
Application of IFT and SPSA to servo system control.
Rădac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M; Preitl, Stefan
2011-12-01
This paper treats the application of two data-based model-free gradient-based stochastic optimization techniques, i.e., iterative feedback tuning (IFT) and simultaneous perturbation stochastic approximation (SPSA), to servo system control. The representative case of controlled processes modeled by second-order systems with an integral component is discussed. New IFT and SPSA algorithms are suggested to tune the parameters of the state feedback controllers with an integrator in the linear-quadratic-Gaussian (LQG) problem formulation. An implementation case study concerning the LQG-based design of an angular position controller for a direct current servo system laboratory equipment is included to highlight the pros and cons of IFT and SPSA from an application's point of view. The comparison of IFT and SPSA algorithms is focused on an insight into their implementation.
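For reference, the SPSA gradient estimate at the heart of the second technique perturbs all controller parameters simultaneously with a random ±1 vector and needs only two closed-loop experiments per iteration; a minimal sketch where J stands for whatever performance index is measured on the servo system:

import numpy as np

def spsa_step(theta, J, a=0.1, c=0.05, rng=np.random.default_rng()):
    # One SPSA iteration on the parameter vector theta using two evaluations of J.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # simultaneous perturbation
    j_plus = J(theta + c * delta)
    j_minus = J(theta - c * delta)
    g_hat = (j_plus - j_minus) / (2.0 * c * delta)      # gradient approximation
    return theta - a * g_hat

# Hypothetical use: run the two perturbed experiments on the DC servo rig at each
# iteration and update the state-feedback gains with the returned theta.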
Robust Hinfinity position control synthesis of an electro-hydraulic servo system.
Milić, Vladimir; Situm, Zeljko; Essert, Mario
2010-10-01
This paper focuses on the use of the techniques based on linear matrix inequalities for robust H(infinity) position control synthesis of an electro-hydraulic servo system. A nonlinear dynamic model of the hydraulic cylindrical actuator with a proportional valve has been developed. For the purpose of the feedback control an uncertain linearized mathematical model of the system has been derived. The structured (parametric) perturbations in the electro-hydraulic coefficients are taken into account. H(infinity) controller extended with an integral action is proposed. To estimate internal states of the electro-hydraulic servo system an observer is designed. Developed control algorithms have been tested experimentally in the laboratory model of an electro-hydraulic servo system. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Data-Driven Based Asynchronous Motor Control for Printing Servo Systems
NASA Astrophysics Data System (ADS)
Bian, Min; Guo, Qingyun
Modern digital printing equipment aims at environmentally friendly production with high dynamic performance, high control precision, and low vibration and abrasion, so a high-performance motion control system for printing servo systems is required. A control system for the asynchronous motor based on data acquisition is proposed, and an iterative learning control (ILC) algorithm is studied. PID control is widely used in motion control; however, it is sensitive to disturbances and to variations in the model parameters. ILC uses historical error data and the present control signals to approximate the control signal directly, so that the desired trajectory can be fully tracked without knowledge of the system model and structure. A motor control algorithm combining ILC and PID is constructed and simulation results are given. The results show that this data-driven control method deals effectively with bounded disturbances in the motion control of printing servo systems.
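The ILC idea sketched above can be written, in its simplest P-type form, as adding a scaled copy of the previous repetition's tracking error to the previous control signal; a minimal sketch with an illustrative learning gain:

import numpy as np

def ilc_update(u_prev, e_prev, learn_gain=0.5):
    # P-type iterative learning control: u_{k+1}(t) = u_k(t) + L * e_k(t+1),
    # where u_prev and e_prev are recorded over one repetition of the motion profile.
    u_next = u_prev.copy()
    u_next[:-1] += learn_gain * e_prev[1:]   # use the one-step-ahead error
    return u_next

# Over successive repetitions of the same printing trajectory the stored control
# signal absorbs the repeatable part of the error, while the PID loop handles
# non-repeating disturbances.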
Lin, Hao-Ting
2017-06-04
This project aims to develop a novel large-stroke asymmetric pneumatic servo system with hardware-in-the-loop for path tracking control under variable loads, based on the MATLAB Simulink real-time system. High-pressure compressed air provided by an air compressor drives the large-stroke asymmetric rod-less pneumatic actuator through a pneumatic proportional servo valve; the actuator moves because of the pressure difference between its two chambers. The highly nonlinear mathematical models of the large-stroke asymmetric pneumatic system were analysed and developed. A functional-approximation-based sliding mode controller (FASC) is developed to handle the uncertain time-varying nonlinear system. The MATLAB Simulink real-time system serves as the main control unit of the hardware-in-the-loop setup, providing driver blocks for analog and digital I/O, a linear encoder, a CPU and the large-stroke asymmetric rod-less pneumatic system. The position signals of the cylinder are measured by the position sensor and used as the feedback signals of the pneumatic servo system for real-time positioning and path tracking control. Finally, real-time control of the large-stroke asymmetric pneumatic servo system, including the measuring system, the actuator, the data acquisition system and the control software, is implemented, improving the positioning precision and trajectory tracking performance of the system. Experimental results show that fifth-order paths of various strokes and a sine-wave path are successfully tracked on the test rig, and results under variable loads at different angles are also obtained experimentally.
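A minimal sliding-mode position controller in the spirit of the FASC controller named above (the functional-approximation part is omitted, a boundary-layer saturation replaces the discontinuous sign term, and the double-integrator model with input gain b_hat is an assumption):

import numpy as np

def smc_control(pos, vel, pos_ref, vel_ref, acc_ref,
                lam=10.0, k=50.0, phi=0.01, b_hat=1.0):
    # Sliding-mode control for a double-integrator-like pneumatic axis model.
    e, e_dot = pos - pos_ref, vel - vel_ref
    s = e_dot + lam * e                      # sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)        # boundary layer instead of sign(s)
    u = (acc_ref - lam * e_dot - k * sat) / b_hat   # feed-forward plus switching term
    return u

# In the cited work the uncertain valve/cylinder dynamics is approximated on-line
# by the functional-approximation technique instead of the fixed b_hat used here.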
2010-11-01
connected. On this same disk, a servo motor is connected to a lightweight leg. An Arduino ... [figure: body weight markers, leg, disk and servo motor, shown in front and top views] ... this control enables more dynamic and fast walking; the control is based on precise joint-angle control. The main consequence of such a control is that ... based climbing strategies. Specifically, the four-limbed free-climbing LEMUR robot goes up climbing walls by choosing a sequence of handholds
Nonlinear friction model for servo press simulation
NASA Astrophysics Data System (ADS)
Ma, Ninshu; Sugitomo, Nobuhiko; Kyuno, Takunori; Tamura, Shintaro; Naka, Tetsuo
2013-12-01
The friction coefficient was measured under an idealized condition for a pulse servo motion. The measured friction coefficient and its variation with sliding distance and with the pulse motion showed that the friction resistance can be reduced by the re-lubrication that occurs during the unloading phase of the pulse servo motion. Based on the measured friction coefficient and its changes with sliding distance and oil re-lubrication, a nonlinear friction model was developed. Using the newly developed nonlinear friction model, a deep-drawing simulation was performed and the formability was evaluated. The results were compared with experimental ones and the effectiveness of the model was verified.
Improvement of a Pneumatic Control Valve with Self-Holding Function
NASA Astrophysics Data System (ADS)
Dohta, Shujiro; Akagi, Tetsuya; Kobayashi, Wataru; Shimooka, So; Masago, Yusuke
2017-10-01
The purpose of this study is to develop a small-sized, lightweight and low-cost control valve with low energy consumption and to apply it to an assistive system. We have developed several control valves: a tiny on/off valve using a vibration motor, and an on/off valve with a self-holding function. We have also proposed and tested a digital servo valve with a self-holding function using permanent magnets and a small-sized servo motor. In this paper, in order to improve the valve, an analytical model of the digital servo valve is proposed, and the results simulated with the analytical model and identified parameters are compared with experimental results. The improved digital servo valve was then designed based on the calculated results and tested. As a result, we realized a digital servo valve that can control the flow rate more precisely while maintaining the volume and weight of the previous valve. As an application of the improved valve, a position control system for a rubber artificial muscle was built and position control was performed successfully.
3D Point Cloud Model Colorization by Dense Registration of Digital Images
NASA Astrophysics Data System (ADS)
Crombez, N.; Caron, G.; Mouaddib, E.
2015-02-01
Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the colours of the scanned objects. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from arbitrary viewpoints on point clouds, which is a crucial step for good colorization by colour projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are estimated automatically; because the intrinsic parameters are estimated as well, no information is needed about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colours to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
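Once the pose and intrinsics are estimated, the colorization step amounts to projecting each 3-D point into the registered image and sampling the colour there; a minimal pinhole-projection sketch, with the pose (R, t) and intrinsics K assumed given and occlusion handling omitted:

import numpy as np

def colorize_points(points_w, colors_out, image, K, R, t):
    # points_w: (N, 3) world coordinates; K: (3, 3) intrinsics;
    # R, t: world-to-camera rotation and translation; image: (H, W, 3).
    pc = R @ points_w.T + t.reshape(3, 1)            # camera-frame coordinates
    in_front = pc[2] > 0
    uv = (K @ pc)[:, in_front]
    uv = (uv[:2] / uv[2]).round().astype(int)        # pixel coordinates
    h, w = image.shape[:2]
    ok = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    idx = np.flatnonzero(in_front)[ok]
    colors_out[idx] = image[uv[1, ok], uv[0, ok]]    # assign sampled colours
    return colors_out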
A Matlab/Simulink-Based Interactive Module for Servo Systems Learning
ERIC Educational Resources Information Center
Aliane, N.
2010-01-01
This paper presents an interactive module for learning both the fundamental and practical issues of servo systems. This module, developed using Simulink in conjunction with the Matlab graphical user interface (Matlab-GUI) tool, is used to supplement conventional lectures in control engineering and robotics subjects. First, the paper introduces the…
Compact, Lightweight Servo-Controllable Brakes
NASA Technical Reports Server (NTRS)
Lovchik, Christopher S.; Townsend, William; Guertin, Jeffrey; Matsuoka, Yoky
2010-01-01
Compact, lightweight servo-controllable brakes capable of high torques are being developed for incorporation into robot joints. A brake of this type is based partly on the capstan effect of tension elements. In a brake of the type under development, a controllable intermediate state of torque is reached through on/off switching at a high frequency.
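The capstan effect the brake relies on gives an exponential relation between the control tension and the holding tension, T_hold = T_in * exp(mu * theta); a tiny numerical illustration with an assumed friction coefficient and wrap angle:

import math

def capstan_holding_tension(t_in, mu=0.2, wrap_turns=3.0):
    # Capstan equation for a tension element wrapped around a drum.
    theta = 2.0 * math.pi * wrap_turns      # total wrap angle in radians
    return t_in * math.exp(mu * theta)

# Example: with mu = 0.2 and three full wraps, a 1 N control tension can hold
# roughly exp(0.2 * 6 * pi) ≈ 43 N, which is why modest on/off switching of the
# tension element can modulate a large joint torque.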
NASA Astrophysics Data System (ADS)
Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry
2006-12-01
We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.
Chiang, Mao-Hsiung
2010-01-01
This study aims to develop an X-Y dual-axis intelligent servo pneumatic-piezoelectric hybrid actuator for position control with high response, large stroke (250 mm, 200 mm) and nanometer accuracy (20 nm). In each axis, the rodless pneumatic actuator performs coarse-stroke positioning and the piezoelectric actuator compensates in the fine stroke. Thus, the overall control system of each axis becomes a dual-input single-output (DISO) system. Although the rodless pneumatic actuator has a relatively large friction force, its mechanism is well suited to multi-axis development, so the X-Y dual-axis positioning system is developed based on the servo pneumatic-piezoelectric hybrid actuator. In addition, decoupling self-organizing fuzzy sliding mode control is developed as the intelligent control strategy. Finally, the proposed intelligent X-Y dual-axis servo pneumatic-piezoelectric hybrid actuators are implemented and verified experimentally.
Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo
NASA Astrophysics Data System (ADS)
Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao
2017-10-01
Recently, freeform surfaces have been widely used in optical systems because they offer advantages in optical imaging and provide additional degrees of freedom to improve optical performance. Freeform optical fabrication integrates freeform optical design, precision freeform manufacturing, freeform metrology and a freeform compensation method to correct the form deviation of the surface that arises in the production process of a freeform lens, providing more flexibility and better performance. This paper focuses on the fabrication and correction of freeform surfaces. In this study, multi-axis ultra-precision manufacturing is used to improve the quality of the freeform optics; the machine is equipped with a positioning C-axis and has the CXZ machining function, also called the slow tool servo (STS) function. A compensation method based on Zernike polynomials is successfully verified and corrects the form deviation of the freeform surface. Finally, the freeform surfaces are measured experimentally with an ultrahigh-accuracy 3D profilometer (UA3P), and the form error is fitted with Zernike polynomials for compensation to improve the form accuracy of the freeform surface.
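A sketch of the compensation idea described above: fit the measured form deviation with a few low-order Zernike terms by least squares, then subtract the fitted surface from the nominal freeform in the next slow-tool-servo pass (only the first six un-normalized terms are written out; the actual term count and normalization used in the paper are not specified here):

import numpy as np

def zernike_basis(rho, theta):
    # First few Zernike polynomials on the unit disk:
    # piston, x/y tilt, defocus and the two primary astigmatism terms.
    return np.column_stack([
        np.ones_like(rho),
        rho * np.cos(theta),
        rho * np.sin(theta),
        2.0 * rho**2 - 1.0,
        rho**2 * np.cos(2.0 * theta),
        rho**2 * np.sin(2.0 * theta),
    ])

def fit_form_error(rho, theta, dz):
    # Least-squares Zernike coefficients of the measured deviation dz.
    B = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(B, dz, rcond=None)
    return coeffs, B @ coeffs               # coefficients and fitted surface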
NASA Astrophysics Data System (ADS)
Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.
2018-05-01
Hydraulic actuators play a key role in experimental structural dynamics. In a previous study, a physics-based model for a servo-hydraulic actuator coupled with a nonlinear physical system was developed. Later, this dynamical model was transformed into controllable canonical form for position tracking control purposes. For this study, a nonlinear device is designed and fabricated to exhibit various nonlinear force-displacement profiles depending on the initial condition and the type of materials used as replaceable coupons. Using this nonlinear system, the controllable canonical dynamical model is experimentally validated for a servo-hydraulic actuator coupled with a nonlinear physical system.
Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru
2011-06-01
Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy for kidney tumours is currently very difficult, due to the unclearly observed tumour area and renal motion induced by human respiration. In this research, we propose new methods by which to track the indistinct tumour area and to compensate the respiratory tumour motion for US-guided HIFU treatment. For tracking indistinct tumour areas, we detect the US speckle change created by HIFU irradiation. In other words, HIFU thermal ablation can coagulate tissue in the tumour area and an intraoperatively created coagulated lesion (CL) is used as a spatial landmark for US visual tracking. Specifically, the condensation algorithm was applied to robust and real-time CL speckle pattern tracking in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method by using a newly devised phantom model that enables both visual tracking and a thermal response by HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled by the captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, the RMS error of which was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. By using the CL tracking method, target motion compensation can be realized in the US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.
Indirect iterative learning control for a discrete visual servo without a camera-robot model.
Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan
2007-08-01
This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
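The indirect-learning idea (update an estimate of the unknown image Jacobian across repeated trials and invert that estimate to command the robot) can be sketched as follows. This toy version uses a single constant 2x2 Jacobian and a simple gradient update in place of the paper's per-sample-point neural networks and weight-modification scheme; all values are illustrative.

```python
import numpy as np

J_true = np.array([[120.0, -15.0], [10.0, 95.0]])  # unknown camera-robot Jacobian (px per unit velocity)
J_hat = np.eye(2) * 100.0                          # rough initial estimate
dt, gain, lr = 0.02, 2.0, 1e-3

# Demonstrated image-plane trajectory to imitate (in pixels).
t = np.arange(0.0, 2.0, dt)
ref = np.stack([50 * np.cos(np.pi * t), 50 * np.sin(np.pi * t)], axis=1)

for trial in range(20):                            # repetitive tracking trials
    s = ref[0].copy()                              # feature starts on the reference
    max_err = 0.0
    for k in range(len(t) - 1):
        e = ref[k] - s                             # image-plane tracking error
        s_dot_des = (ref[k + 1] - ref[k]) / dt + gain * e
        v = np.linalg.solve(J_hat, s_dot_des)      # robot velocity from the estimated Jacobian
        s_dot = J_true @ v                         # true (unknown) image motion
        J_hat += lr * np.outer(s_dot - J_hat @ v, v)  # learning update of the estimate
        # (the paper guards invertibility with a weight-modification rule, omitted here)
        s = s + s_dot * dt
        max_err = max(max_err, np.linalg.norm(e))
    print(f"trial {trial:2d}: max tracking error {max_err:.2f} px")
```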
Design and Development of a High Speed Sorting System Based on Machine Vision Guiding
NASA Astrophysics Data System (ADS)
Zhang, Wenchang; Mei, Jiangping; Ding, Yabin
In this paper, a vision-based control strategy for performing high speed pick-and-place tasks on an automated production line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper that grasps disordered objects from one moving conveyor and places them on another in order. A CCD camera captures one image every time the conveyor moves a distance ds, and the objects' positions and shapes are obtained after image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to carry out the high speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-based control strategy.
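A minimal sketch of the conveyor-synchronized tracking idea is shown below: the object position detected at the camera snapshot is shifted by the conveyor travel measured by the encoder, so the robot can aim at where the object will be. The function name, the counts-to-millimetres factor and the conveyor direction are assumptions.

```python
def predict_pick_position(obj_xy_at_snapshot, encoder_at_snapshot,
                          encoder_now, mm_per_count, conveyor_dir=(1.0, 0.0)):
    """Shift the vision-detected object position by the conveyor travel since the snapshot."""
    travel_mm = (encoder_now - encoder_at_snapshot) * mm_per_count
    return (obj_xy_at_snapshot[0] + conveyor_dir[0] * travel_mm,
            obj_xy_at_snapshot[1] + conveyor_dir[1] * travel_mm)

# Example: object seen at (120.0, 45.5) mm when the conveyor encoder read 10,000 counts.
print(predict_pick_position((120.0, 45.5), 10_000, 12_500, 0.05))  # -> (245.0, 45.5)
```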
Influence of Forming Conditions on Springback in V-bending Process Using Servo Press
NASA Astrophysics Data System (ADS)
Abe, Shinya; Takahashi, Susumu
To improve fuel efficiency, aluminum alloys and high tensile steel sheets are increasingly being applied to automotive body parts. However, it is difficult to obtain accurate dimensions of the formed parts, so technologies for reducing springback in press-formed parts are strongly demanded. The die holding time at the bottom dead center of a servo press slide is said to affect springback. To clarify the forming mechanism of this phenomenon, a V-bending test with a servo press was performed, with aluminum alloy sheets used as specimens. The location of the press slide was measured by linear scales. It was found that the slide movement specified in the slide motion program differs from the actual movement of the slide, so it is important to confirm that the slide actually reaches the position specified in the program. In addition, a springback angle measurement system is proposed that uses a laser displacement measurement apparatus. Because it avoids human error, the proposed measurement system is more accurate than the image processing method.
Transistor-based interface circuitry
Taubman, Matthew S [Richland, WA]
2007-02-13
Among the embodiments of the present invention is an apparatus that includes a transistor, a servo device, and a current source. The servo device is operable to provide a common base mode of operation of the transistor by maintaining an approximately constant voltage level at the transistor base. The current source is operable to provide a bias current to the transistor. A first device provides an input signal to an electrical node positioned between the emitter of the transistor and the current source. A second device receives an output signal from the collector of the transistor.
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Randle, R. J.; Williams, B. A.
1977-01-01
Possible 24-h variations in accommodation responses were investigated. A recently developed servo-controlled optometer and focus stimulator were used to obtain monocular accommodation response data on four college-age subjects. No 24-h rhythm in accommodation was shown. Heart rate and blink rate also were measured and periodicity analysis showed a mean 24-h rhythm for both; however, blink rate periodograms were significant for only two of the four subjects. Thus, with the qualifications that college students were tested instead of pilots and that they performed monocular laboratory tasks instead of binocular flight-deck tasks, it is concluded that 24-h rhythms in accommodation responses need not be considered in setting visual standards for flight-deck tasks.
The analysis of image motion by the rabbit retina
Oyster, C. W.
1968-01-01
1. Micro-electrode recordings were made from rabbit retinal ganglion cells or their axons. Of particular interest were direction-selective units; the common on—off type represented 20·6% of the total sample (762 units), and the on-type comprised 5% of the total. 2. From the large sample of direction-selective units, it was found that on—off units were maximally sensitive to only four directions of movement; these directions, in the visual field, were, roughly, anterior, superior, posterior and inferior. The on-type units were maximally sensitive to only three directions: anterior, superior and inferior. 3. The direction-selective unit's responses vary with stimulus velocity; both unit types are more sensitive to velocity change than to absolute speed. On—off units respond to movement at speeds from 6′/sec to 10°/sec; the on-type units responded as slowly as 30″/sec up to about 2°/sec. On-type units are clearly slow-movement detectors. 4. The distribution of direction-selective units depends on the retinal locality. On—off units are more common outside the `visual streak' (area centralis) than within it, while the reverse is true for the on-type units. 5. A stimulus configuration was found which would elicit responses from on-type units when the stimulus was moved in the null direction. This `paradoxical response' was shown to be associated with the silent receptive field surround. 6. The four preferred directions of the on—off units were shown to correspond to the directions of retinal image motion produced by contractions of the four rectus eye muscles. This fact, combined with data on velocity sensitivity and retinal distribution of on—off units, suggests that the on—off units are involved in control of reflex eye movements. 7. The on—off direction-selective units may provide error signals to a visual servo system which minimizes retinal image motion. This hypothesis agrees with the known characteristics of the rabbit's visual following reflexes, specifically, the slow phase of optokinetic nystagmus. PMID:5710424
Small Autonomous Aircraft Servo Health Monitoring
NASA Technical Reports Server (NTRS)
Quintero, Steven
2008-01-01
Small air vehicles offer challenging power, weight, and volume constraints when considering implementation of system health monitoring technologies. In order to develop a testbed for monitoring the health and integrity of control surface servos and linkages, the Autonomous Aircraft Servo Health Monitoring system has been designed for small Uninhabited Aerial Vehicle (UAV) platforms to detect problematic behavior from servos and the aircraft structures they control. This system will serve to verify the structural integrity of an aircraft's servos and linkages and thereby, through early detection of a problematic situation, minimize the chances of an aircraft accident. Embry-Riddle Aeronautical University's rotary-winged UAV has an Airborne Power Management unit that is responsible for regulating, distributing, and monitoring the power supplied to the UAV's avionics. The current sensing technology utilized by the Airborne Power Management system is also the basis for the Servo Health system. The Servo Health system measures the current draw of the servos while the servos are in motion in order to quantify servo health. During a preflight check, deviations from a known baseline behavior can be logged and their causes found upon closer inspection of the aircraft. The erratic behavior may include binding as a result of dirt buildup or backlash caused by looseness in the mechanical linkages. Moreover, the Servo Health system will allow elusive problems to be identified and preventative measures taken to avoid unnecessary hazardous conditions in small autonomous aircraft.
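The preflight comparison against a baseline current profile could look something like the sketch below; the traces, thresholds and function names are illustrative assumptions, not the system's actual health metric.

```python
import numpy as np

def servo_health_check(current_trace, baseline_trace, rms_tolerance=0.15, peak_tolerance=0.30):
    """Flag a servo whose current draw deviates from a known-good baseline sweep.

    Both traces are assumed to be sampled over the same preflight sweep;
    the thresholds are illustrative, not flight-qualified values.
    """
    current = np.asarray(current_trace, dtype=float)
    baseline = np.asarray(baseline_trace, dtype=float)
    rms_dev = np.sqrt(np.mean((current - baseline) ** 2)) / (np.sqrt(np.mean(baseline ** 2)) + 1e-9)
    peak_dev = np.max(np.abs(current - baseline)) / (np.max(np.abs(baseline)) + 1e-9)
    healthy = rms_dev < rms_tolerance and peak_dev < peak_tolerance
    return healthy, rms_dev, peak_dev

# Example: binding (extra drag) late in the sweep raises the current above the baseline.
baseline = np.sin(np.linspace(0, np.pi, 200)) * 0.4 + 0.1
suspect = baseline + np.where(np.arange(200) > 120, 0.2, 0.0)
print(servo_health_check(suspect, baseline))   # flagged as unhealthy
```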
Programmable Digital Controller
NASA Technical Reports Server (NTRS)
Wassick, Gregory J.
2012-01-01
An existing three-channel analog servo loop controller for piezoelectric-transducer-based (PZT-based) etalon control applications has been redesigned as a digital servo loop controller. This change offers several improvements over the previous analog controller, including software control over proportional-integral-derivative (PID) parameters, inclusion of other data of interest such as temperature and pressure in the control laws, improved ability to compensate for PZT hysteresis and mechanical mount fluctuations, the ability to provide pre-programmed scanning and stepping routines, an improved user interface, expanded data acquisition, and reduced size, weight, and power.
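A minimal discrete PID loop of the kind such a controller runs is sketched below; the gains, sample time, output limits and the feedforward hook for auxiliary data such as temperature or pressure are assumptions, not the actual instrument code.

```python
class DigitalPID:
    """Generic discrete PID loop with software-settable gains and a feedforward hook.

    Auxiliary data such as temperature or pressure could enter through the
    feedforward term; gains, sample time and output limits are placeholders.
    """

    def __init__(self, kp, ki, kd, dt, out_min=-10.0, out_max=10.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, feedforward=0.0):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = feedforward + self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))   # clamp to the PZT drive range

# Example: one 1 kHz update of a PZT channel toward a cavity-length setpoint.
pid = DigitalPID(kp=0.8, ki=120.0, kd=0.0005, dt=1e-3)
print(pid.update(setpoint=1.25, measurement=1.10))
```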
A web-based solution for 3D medical image visualization
NASA Astrophysics Data System (ADS)
Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo
2015-03-01
In this presentation, we present a web-based 3D medical image visualization solution which enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to an HTML5-capable web browser on the client side. Compared to traditional local visualization solutions, our solution doesn't require users to install extra software or download the whole volume dataset from the PACS server. With this web-based solution, it is feasible for users to access the 3D medical image visualization service wherever the internet is available.
Image communication scheme based on dynamic visual cryptography and computer generated holography
NASA Astrophysics Data System (ADS)
Palevicius, Paulius; Ragulskis, Minvydas
2015-01-01
Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by a naked eye if only the amplitude of harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré and principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed by using Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.
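Phase data for the hologram are computed with the Gerchberg-Saxton algorithm; a generic sketch of that phase-retrieval loop is shown below, using unit source amplitude and FFT propagation. The encryption-specific steps (stochastic moiré grating, time averaging) are not reproduced, and the parameter values are assumptions.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Estimate the hologram-plane phase whose far field matches a target amplitude.

    Generic Gerchberg-Saxton loop with unit source amplitude and FFT propagation.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        far_field = np.fft.fft2(np.exp(1j * phase))                        # hologram plane -> image plane
        constrained = target_amplitude * np.exp(1j * np.angle(far_field))  # impose the target amplitude
        back = np.fft.ifft2(constrained)                                   # propagate back
        phase = np.angle(back)                                             # keep only the phase
    return phase

# Example target: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
hologram_phase = gerchberg_saxton(target)
```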
Chen, Wentao; Zhang, Weidong
2009-10-01
In an optical disk drive servo system, to attenuate the external periodic disturbances induced by inevitable disk eccentricity, repetitive control has been used successfully. The performance of a repetitive controller greatly depends on the bandwidth of the low-pass filter included in the repetitive controller. However, owing to the plant uncertainty and system stability, it is difficult to maximize the bandwidth of the low-pass filter. In this paper, we propose an optimality based repetitive controller design method for the track-following servo system with norm-bounded uncertainties. By embedding a lead compensator in the repetitive controller, both the system gain at periodic signal's harmonics and the bandwidth of the low-pass filter are greatly increased. The optimal values of the repetitive controller's parameters are obtained by solving two optimization problems. Simulation and experimental results are provided to illustrate the effectiveness of the proposed method.
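The plug-in repetitive control structure the abstract builds on can be sketched as a period-long delay line with a low-pass Q filter and a small sample lead standing in for the embedded lead compensator. The gains, the Q pole and the lead below are illustrative choices, not the optimized values obtained from the paper's two optimization problems.

```python
import numpy as np

class RepetitiveController:
    """Plug-in repetitive controller sketch: u[k] = Q * u[k-N] + g * e[k-N+lead].

    N is the number of samples per disturbance period (one disk revolution),
    Q is a first-order low-pass, and the sample lead stands in for a lead
    compensator. All gains are illustrative.
    """

    def __init__(self, period_samples, q_pole=0.9, learning_gain=0.4, lead_samples=2):
        self.N = period_samples
        self.q_pole, self.gain, self.lead = q_pole, learning_gain, lead_samples
        self.u_hist = np.zeros(period_samples)   # repetitive actions, one period deep
        self.e_hist = np.zeros(period_samples)   # tracking errors, one period deep
        self.q_state = 0.0
        self.k = 0

    def update(self, error):
        idx = self.k % self.N
        u_prev = self.u_hist[idx]                           # u[k-N]
        e_adv = self.e_hist[(idx + self.lead) % self.N]     # e[k-N+lead]
        self.q_state = self.q_pole * self.q_state + (1.0 - self.q_pole) * u_prev  # Q(z) filtering
        u = self.q_state + self.gain * e_adv
        self.u_hist[idx] = u                                # becomes u[k-N] one period later
        self.e_hist[idx] = error
        self.k += 1
        return u
```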
Synchronous Control Method and Realization of Automated Pharmacy Elevator
NASA Astrophysics Data System (ADS)
Liu, Xiang-Quan
Firstly, the control method for the elevator's synchronous motion is presented, and a synchronous control structure for the dual servo motors based on PMAC is established. Secondly, the synchronous control program of the elevator is implemented using the PMAC linear interpolation motion model and a position error compensation method. Finally, the PID parameters of the servo motors are tuned. Experiments prove that the control method has high stability and reliability.
NASA Astrophysics Data System (ADS)
Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo
2018-01-01
This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as the ones between the desired camera frame and the reference camera frame, can be calculated in real time by using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks, and the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.
Resolved motion rate and resolved acceleration servo-control of wheeled mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, P.F.; Neuman, C.P.; Carnegie-Mellon Univ., Pittsburgh, PA
1989-01-01
Accurate motion control of wheeled mobile robots (WMRs) is required for their application to autonomous, semi-autonomous and teleoperated tasks. The similarities between WMRs and stationary manipulators suggest that current, successful, model-based manipulator control algorithms may be applied to WMRs. Special characteristics of WMRs including higher-pairs, closed-chains, friction and unactuated and unsensed joints require innovative modeling methodologies. The WMR modeling challenge has been recently overcome, thus enabling the application of manipulator control algorithms to WMRs. This realization lays the foundation for significant technology transfer from manipulator control to WMR control. We apply two Cartesian-space manipulator control algorithms: resolved motion rate (kinematics-based) and resolved acceleration (dynamics-based) control to WMR servo-control. We evaluate simulation studies of two exemplary WMRs: Uranus (a three degree-of-freedom WMR constructed at Carnegie Mellon University), and Bicsun-Bicas (a two degree-of-freedom WMR being constructed at Sandia National Laboratories) under the control of these algorithms. Although resolved motion rate servo-control is adequate for the control of Uranus, resolved acceleration servo-control is required for the control of the mechanically simpler Bicsun-Bicas because it exhibits more dynamic coupling and nonlinearities. Successful accurate motion control of these WMRs in simulation is driving current experimental research studies. 18 refs., 7 figs., 5 tabs.
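The kinematics-based (resolved motion rate) law amounts to mapping a Cartesian pose error through the pseudo-inverse of the robot Jacobian to obtain wheel rates; a generic sketch is below. The Jacobian, gain and error values are placeholders, not the Uranus or Bicsun-Bicas models.

```python
import numpy as np

def resolved_rate_step(jacobian, pose_error, kp=1.0):
    """One resolved motion rate step: wheel/joint rates from a Cartesian pose error,
    q_dot = pinv(J) * (Kp * e)."""
    return np.linalg.pinv(jacobian) @ (kp * pose_error)

# Hypothetical 3-DOF omnidirectional WMR: J maps wheel rates to body velocity (vx, vy, wz).
J = np.array([[0.05,  0.05,  0.05],
              [0.05, -0.05,  0.00],
              [0.10,  0.10, -0.20]])
pose_error = np.array([0.10, -0.05, 0.02])   # desired minus actual (m, m, rad)
wheel_rates = resolved_rate_step(J, pose_error, kp=2.0)
print(wheel_rates)
```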
Visual information mining in remote sensing image archives
NASA Astrophysics Data System (ADS)
Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.
2002-01-01
The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of the image details. The proposed methods are integrated in the Image Information Mining (I2M) system. The images and image structure in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational data base. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the data base. Thus new tools have been designed to visualize, in iconic representation, the relationships created during a query or information mining operation: the visualization of the query results positioned on the geographical map, quick-looks gallery, visualization of the measure of goodness of the query, visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
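Once the laser spot has been isolated in both views, range follows from the standard stereo relation Z = f·B/d. The sketch below shows that relation plus a simple brightest-changed-pixel spot finder; the before/after subtraction only mirrors the idea described above, not the exact pixel-elimination procedure, and the focal length, baseline and image format are assumptions.

```python
import numpy as np

def stereo_range(disparity_px, focal_length_px, baseline_m):
    """Classic parallel-camera stereo range estimate: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_length_px * baseline_m / disparity_px

def laser_spot_disparity(left_before, left_after, right_before, right_after):
    """Horizontal disparity (in columns) of the laser spot, located in each view as
    the brightest changed pixel between the pre- and post-illumination frames.
    Images are assumed to be rectified, float-valued 2-D arrays."""
    left_spot = np.unravel_index(np.argmax(np.abs(left_after - left_before)), left_after.shape)
    right_spot = np.unravel_index(np.argmax(np.abs(right_after - right_before)), right_after.shape)
    return float(left_spot[1] - right_spot[1])

# Example: a 20-pixel disparity with an 800-pixel focal length and a 12 cm baseline.
print(stereo_range(20.0, 800.0, 0.12))   # -> 4.8 m
```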
NASA Astrophysics Data System (ADS)
Ma, Chen-xi; Ding, Guo-qing
2017-10-01
Simple harmonic waves and synthesized simple harmonic waves are widely used in instrument testing. However, because of errors caused by gear clearance and the time-delay error of the FPGA, it is difficult to drive a servo electric cylinder in precise simple harmonic motion under high speed, high frequency and large load conditions. To solve this problem, an error compensation method is proposed in this paper. In the method, a displacement sensor is fitted on the piston rod of the electric cylinder; the real-time displacement of the piston rod is measured and fed back to the input of the servo motor, realizing closed-loop control, and compensating pulses are issued in the next period of the synthesized waves. An FPGA is used as the processing core. The software mainly comprises a waveform generator, an Ethernet module, a memory module, a pulse generator, a pulse selector, a protection module and an error compensation module. A shock absorber durability test rig is used as the testing platform; it mainly comprises a single electric cylinder, a servo motor driving the electric cylinder, and the servo motor driver.
NASA Astrophysics Data System (ADS)
He, Jun; Gao, Feng; Bai, Yongjun; Wu, Shengfu
2013-11-01
The large capacity servo press is traditionally realized by means of redundant actuation; however, over-constraint problems and interference among the actuators arise, which increase the control difficulty and the product cost. A new type of press mechanism with parallel topology is presented to develop a mechanical servo press with high stamping capacity. A dynamic model considering gravity counterbalance is proposed based on the virtual work principle, and the effect of the counterbalance cylinder on the dynamic performance of the servo press is then studied. It is found that the motor torque required to operate the press is much lower than otherwise when the ratio of the counterbalance force to the gravity of the ram is in the vicinity of 1.0. The stamping force of the real press prototype can reach up to 25 MN at a position 13 mm away from the bottom dead center. A typical deep-drawing process with a 1 200 mm stroke at 8 strokes per minute is specified by means of a fifth-order polynomial. Under this process condition, the driving torques are calculated based on the above dynamic model and a torque measuring test is also carried out on the prototype. It is shown that the trend of the calculated torque curve is consistent with the measured result and that the average error is less than 15%. The parallel mechanism is introduced into the development of the large capacity servo press to avoid the over-constraint and interference of traditional redundant actuation, and its dynamic characteristics with gravity counterbalance are presented.
Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.
Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A
2001-01-01
This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.
A modern control theory based algorithm for control of the NASA/JPL 70-meter antenna axis servos
NASA Technical Reports Server (NTRS)
Hill, R. E.
1987-01-01
A digital computer-based state variable controller was designed and applied to the 70-m antenna axis servos. The general equations and structure of the algorithm and provisions for alternate position error feedback modes to accommodate intertarget slew, encoder referenced tracking, and precision tracking modes are described. Development of the discrete time domain control model and computation of estimator and control gain parameters based on closed loop pole placement criteria are discussed. The new algorithm was successfully implemented and tested in the 70-m antenna at Deep Space Network station 63 in Spain.
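The pole-placement step behind such a state-variable design can be sketched with a generic second-order (position/rate) axis model; the model, sample time and pole locations below are placeholders, not the 70-m antenna servo parameters.

```python
import numpy as np
from scipy.signal import place_poles

dt = 0.02
A = np.array([[1.0, dt], [0.0, 1.0]])        # discrete position/rate axis model
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])                   # encoder (position) measurement

K = place_poles(A, B, [0.90, 0.85]).gain_matrix        # state feedback gains
L = place_poles(A.T, C.T, [0.70, 0.65]).gain_matrix.T  # estimator gains (faster poles)

print("state feedback gains:", K.ravel())
print("estimator gains:", L.ravel())
```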
NASA Technical Reports Server (NTRS)
Lightsey, W. D.; Alhorn, D. C.; Polites, M. E.
1992-01-01
An experiment designed to test the feasibility of using rotating unbalanced-mass (RUM) devices for line and raster scanning gimbaled payloads, while expending very little power is described. The experiment is configured for ground-based testing, but the scan concept is applicable to ground-based, balloon-borne, and space-based payloads, as well as free-flying spacecraft. The servos used in scanning are defined; the electronic hardware is specified; and a computer simulation model of the system is described. Simulation results are presented that predict system performance and verify the servo designs.
NASA Technical Reports Server (NTRS)
Barry, R. K.; Satyapal, S.; Greenhouse, M. A.; Barclay, R.; Amato, D.; Arritt, B.; Brown, G.; Harvey, V.; Holt, C.; Kuhn, J.
2000-01-01
We discuss work in progress on a near-infrared tunable bandpass filter for the Goddard baseline wide field camera concept of the Next Generation Space Telescope (NGST) Integrated Science Instrument Module (ISIM). This filter, the Demonstration Unit for Low Order Cryogenic Etalon (DULCE), is designed to demonstrate a high efficiency scanning Fabry-Perot etalon operating in interference orders 1 - 4 at 30K with a high stability DSP based servo control system. DULCE is currently the only available tunable filter for lower order cryogenic operation in the near infrared. In this application, scanning etalons will illuminate the focal plane arrays with a single order of interference to enable wide field lower resolution hyperspectral imaging over a wide range of redshifts. We discuss why tunable filters are an important instrument component in future space-based observatories.
Modeling of R/C Servo Motor and Application to Underactuated Mechanical Systems
NASA Astrophysics Data System (ADS)
Ishikawa, Masato; Kitayoshi, Ryohei; Wada, Takashi; Maruta, Ichiro; Sugie, Toshiharu
An R/C servo motor is a compact package of a DC geared motor together with a position servo controller. R/C servos are widely used in small-sized robotics and mechatronics by virtue of their compactness, ease of use and high torque-to-weight ratio. However, it is crucial to clarify their internal model (including the embedded position servo) in order to improve the control performance of mechatronic systems using R/C servo motors, such as biped robots or underactuated systems. In this paper, we propose a simple and realistic internal model of R/C servo motors including the embedded servo controller, and estimate their physical parameters using a continuous-time system identification method. We also provide a model of the reference-to-torque transfer function so that we can estimate the internal torque acting on the load.
Image Location Estimation by Salient Region Matching.
Qian, Xueming; Zhao, Yisi; Han, Junwei
2015-11-01
Nowadays, the locations of images are widely used in many application scenarios involving large geo-tagged image corpora. For images that are not geographically tagged, we estimate their locations with the help of the large geo-tagged image set via content-based image retrieval. In this paper, we exploit the spatial information of useful visual words to improve image location estimation (or content-based image retrieval performance). We propose to generate visual word groups by mean-shift clustering. To improve retrieval performance, a spatial constraint is utilized to encode the relative positions of visual words: a position descriptor is generated for each visual word and a fast indexing structure is built for visual word groups. Experiments show the effectiveness of the proposed approach.
NASA Technical Reports Server (NTRS)
Teixeira, R. A.; Lackner, J. R.
1979-01-01
An experimental study was conducted on seven normal subjects to evaluate the effectiveness of passive head movements in suppressing the optokinetically-induced illusory self-rotation. Visual stimulation was provided by a servo-controlled optokinetic drum. Each subject participated in two experimental sessions. In one condition, the subject's head remained stationary while he gazed passively at a moving stripe pattern. In the other, he gazed passively and relaxed his neck muscles while his head was rotated from side to side. It appears that suppression of optokinetically-induced illusory self-rotation with passive head movements results from the operation of a spatial constancy mechanism interrelating visual, vestibular, and kinesthetic information on ongoing body orientation. The results support the view that optokinetic 'motion sickness' is related, at least in part, to an oculomotor disturbance rather than a visually triggered disturbance of specifically vestibular etiology.
A Graph Based Interface for Representing Volume Visualization Results
NASA Technical Reports Server (NTRS)
Patten, James M.; Ma, Kwan-Liu
1998-01-01
This paper discusses a graph based user interface for representing the results of the volume visualization process. As images are rendered, they are connected to other images in a graph based on their rendering parameters. The user can take advantage of the information in this graph to understand how certain rendering parameter changes affect a dataset, making the visualization process more efficient. Because the graph contains more information than is contained in an unstructured history of images, the image graph is also helpful for collaborative visualization and animation.
78 FR 4762 - Airworthiness Directives; Bell Helicopter Textron Canada Limited (Bell) Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-23
... certain hydraulic servo actuator assemblies (servo) for a loose nut, shaft, and clevis assembly, modifying... through 52430, with a hydraulic servo actuator assembly (servo), part number (P/N) 206-076-062-103...) No. 206L-11-169, Revision B, dated August 29, 2011 (ASB). (2) Applying only hand pressure, determine...
A Hydraulic Blowdown Servo System For Launch Vehicle
NASA Astrophysics Data System (ADS)
Chen, Anping; Deng, Tao
2016-07-01
This paper introduces a hydraulic blowdown servo system developed for a solid launch vehicle of the family of Chinese Long March vehicles. It is the thrust vector control (TVC) system for the first stage. This system is a cold gas blowdown hydraulic servo system and consists of a gas vessel, hydraulic reservoir, servo actuator, digital control unit (DCU), electric explosion valve, and pressure regulator, etc. A brief description of the main assemblies and characteristics follows. a) The gas vessel is a resin/carbon fiber composite overwrapped pressure vessel with a titanium liner; its volume is about 30 liters. b) The hydraulic reservoir is a titanium alloy piston-type reservoir with a magnetostrictive sensor as the fluid level indicator; its volume is about 30 liters. c) The servo actuator is an equal-area linear piston actuator with a two-stage low-null-leakage servo valve and a linear variable differential transducer (LVDT) feeding back the piston position; its stall force is about 120 kN. d) The digital control unit (DCU) is a compact digital controller based on a digital signal processor (DSP) and uses dual redundant 1553B digital buses to communicate with the on-board computer. e) The electric explosion valve is a normally closed valve that confines the high pressure helium gas. f) The pressure regulator is a spring-loaded poppet pressure valve and regulates the gas pressure from about 60 MPa to about 24 MPa. g) The whole system is mounted in the aft skirt of the vehicle. h) The system delivers approximately 40 kW of hydraulic power while its total mass is less than 190 kg, giving a power-to-mass ratio of about 0.21 kW/kg. The development and system tests have been completed; bench and motor static firing tests verified that all of the performances meet the design requirements. This servo system is suitable for use on the solid launch vehicle.
NASA Technical Reports Server (NTRS)
Corliss, L. D.; Talbot, P. D.
1977-01-01
A two-pilot moving base simulator experiment was conducted to assess the effects of servo failures of a flight control system on the transient dynamics of a Bell UH-1H helicopter. The flight control hardware considered was part of the V/STOLAND system built with control authorities of from 20-40%. Servo hardover and oscillatory failures were simulated in each control axis. Measurements were made to determine the adequacy of the failure monitoring system time delay and the servo center and lock time constant, the pilot reaction times, and the altitude and attitude excursions of the helicopter at hover and 60 knots. Safe recoveries were made from all failures under VFR conditions. Pilot reaction times were from 0.5 to 0.75 sec. Reduction of monitor delay times below these values resulted in significantly reduced excursion envelopes. A subsequent flight test was conducted on a UH-1H helicopter with the V/STOLAND system installed. Series servo hardovers were introduced in hover and at 60 knots straight and level. Data from these tests are included for comparison.
Poster — Thur Eve — 15: Improvements in the stability of the tomotherapy imaging beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belec, J
2014-08-15
Use of helical TomoTherapy based MVCT imaging for adaptive planning requires the image values (HU) to remain stable over the course of treatment. In the past, the image value stability was suboptimal, which required frequent changes to the image-value-to-density calibration curve to avoid dose errors on the order of 2–4%. The stability of the image values at our center was recently improved by stabilizing the dose rate of the machine (dose control servo) and performing daily MVCT calibration corrections. In this work, we quantify the stability of the image values over treatment time by comparing patient treatment image density derived using MVCT and KVCT. The analysis includes 1) MVCT - KVCT density difference histogram, 2) MVCT vs KVCT density spectrum, 3) multiple average profile density comparison and 4) density difference in homogeneous locations. Over two months, the imaging beam stability was compromised several times due to a combination of target wobbling, spectral calibration, target change and magnetron issues. The stability of the image values was analyzed over the same period. Results show that the impact on the patient dose calculation is 0.7% ± 0.6%.
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
Through-barrier electromagnetic imaging with an atomic magnetometer.
Deans, Cameron; Marmugi, Luca; Renzoni, Ferruccio
2017-07-24
We demonstrate the penetration of thick metallic and ferromagnetic barriers for imaging of conductive targets underneath. Our system is based on an 85Rb radio-frequency atomic magnetometer operating in electromagnetic induction imaging modality in an unshielded environment. Detrimental effects, including unpredictable magnetic signatures from ferromagnetic screens and variations in the magnetic background, are automatically compensated by active compensation coils controlled by servo loops. We exploit the tunability and low-frequency sensitivity of the atomic magnetometer to directly image multiple conductive targets concealed by a 2.5 mm ferromagnetic steel shield and/or a 2.0 mm aluminium shield, in a single scan. The performance of the atomic magnetometer allows imaging without any prior knowledge of the barriers or the targets, and without the need of background subtraction. A dedicated edge detection algorithm allows automatic estimation of the targets' size within 3.3 mm and of their position within 2.4 mm. Our results prove the feasibility of a compact, sensitive and automated sensing platform for imaging of concealed objects in a range of applications, from security screening to search and rescue.
Autonomous Docking Based on Infrared System for Electric Vehicle Charging in Urban Areas
Pérez, Joshué; Nashashibi, Fawzi; Lefaudeux, Benjamin; Resende, Paulo; Pollard, Evangeline
2013-01-01
Electric vehicles are progressively introduced in urban areas, because of their ability to reduce air pollution, fuel consumption and noise nuisance. Nowadays, some big cities are launching the first electric car-sharing projects to clear traffic jams and enhance urban mobility, as an alternative to the classic public transportation systems. However, there are still some problems to be solved related to energy storage, electric charging and autonomy. In this paper, we present an autonomous docking system for electric vehicles recharging based on an embarked infrared camera performing infrared beacons detection installed in the infrastructure. A visual servoing system coupled with an automatic controller allows the vehicle to dock accurately to the recharging booth in a street parking area. The results show good behavior of the implemented system, which is currently deployed as a real prototype system in the city of Paris. PMID:23429581
NASA Astrophysics Data System (ADS)
Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.
2015-03-01
This paper presents a novel approach to biomedical image retrieval by mapping image regions to local concepts and representing images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
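The visualness weight described above reduces to the Shannon entropy of the pixel values in a patch; a minimal version is sketched below, with the histogram bin count and the example patches as assumptions.

```python
import numpy as np

def patch_visualness(patch, bins=32):
    """Shannon entropy of the pixel values in a patch (bits); bin count is assumed."""
    hist, _ = np.histogram(np.asarray(patch, dtype=float).ravel(), bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A flat patch carries little visual information; a textured patch carries more.
flat = np.full((16, 16), 0.5)
textured = np.random.default_rng(1).random((16, 16))
print(patch_visualness(flat), patch_visualness(textured))
```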
Robust control for a biaxial servo with time delay system based on adaptive tuning technique.
Chen, Tien-Chi; Yu, Chih-Hsien
2009-07-01
A robust control method for synchronizing a biaxial servo system motion is proposed in this paper. A new network based cross-coupled control and adaptive tuning techniques are used together to cancel out the skew error. The conventional fixed gain PID cross-coupled controller (CCC) is replaced with the adaptive cross-coupled controller (ACCC) in the proposed control scheme to maintain biaxial servo system synchronization motion. Adaptive-tuning PID (APID) position and velocity controllers provide the necessary control actions to maintain synchronization while following a variable command trajectory. A delay-time compensator (DTC) with an adaptive controller was augmented to set the time delay element, effectively moving it outside the closed loop, enhancing the stability of the robust controlled system. This scheme provides strong robustness with respect to uncertain dynamics and disturbances. The simulation and experimental results reveal that the proposed control structure adapts to a wide range of operating conditions and provides promising results under parameter variations and load changes.
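Stripped of the adaptive tuning and the delay-time compensator, the cross-coupling idea at the heart of the scheme reduces to feeding the axis-to-axis skew error back into both axis commands with opposite signs; the sketch below uses a fixed coupling gain and invented numbers purely for illustration.

```python
class CrossCoupledSync:
    """Fixed-gain cross-coupled compensation for a biaxial gantry.

    The synchronization (skew) error is the difference of the two axis tracking
    errors; a shared correction is added to the lagging axis and subtracted from
    the leading one. The adaptive gain tuning and delay-time compensation of the
    paper are not reproduced.
    """

    def __init__(self, coupling_gain=0.5):
        self.kc = coupling_gain

    def correction(self, error_axis1, error_axis2):
        skew = error_axis1 - error_axis2
        return (+self.kc * skew, -self.kc * skew)   # added to each axis command

# Example: axis 1 lags (larger tracking error), so it receives the positive correction.
ccc = CrossCoupledSync(coupling_gain=0.8)
print(ccc.correction(0.010, 0.004))   # -> approximately (0.0048, -0.0048)
```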
Geostationary Operational Environmental Satellite (GOES-N report). Volume 2: Technical appendix
NASA Technical Reports Server (NTRS)
1992-01-01
The contents include: operation with inclinations up to 3.5 deg to extend life; earth sensor improvements to reduce noise; sensor configurations studied; momentum management system design; reaction wheel induced dynamic interaction; controller design; spacecraft motion compensation; analog filtering; GFRP servo design - modern control approach; feedforward compensation as applied to GOES-1 sounder; discussion of allocation of navigation, inframe registration and image-to-image error budget overview; and spatial response and cloud smearing study.
Toward semantic-based retrieval of visual information: a model-based approach
NASA Astrophysics Data System (ADS)
Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman
2002-07-01
This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization level) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated in the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since the sparseness of sample data causes unstable frequency estimates of visual cues. The proposed method naturally allows integration of heterogeneous visual, temporal or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
Stapf, Daniel; Franke, Andreas; Schreckenberg, Marcus; Schummers, Georg; Mischke, Karl; Marx, Nikolaus; Schauerte, Patrick; Knackstedt, Christian
2013-04-01
Three-dimensional (3D)-imaging provides important information on cardiac anatomy during electrophysiological procedures. Real-time updates of modalities with high soft-tissue contrast are particularly advantageous during cardiac procedures. Therefore, a beat to beat 3D visualization of cardiac anatomy by intracardiac echocardiography (ICE) was developed and tested in phantoms and animals. An electronic phased-array 5-10 MHz ICE-catheter (Acuson, AcuNav/Siemens Medical Solutions USA/64 elements) providing a 90° sector image was used for ICE-imaging. A custom-made mechanical prototype controlled by a servo motor allowed automatic rotation of the ICE-catheter around its longitudinal axis. During a single heartbeat, the ICE-catheter was rotated and 2D-images were acquired. Reconstruction into a 3D volume and rendering by prototype software were performed beat to beat. After experimental validation using a rigid phantom, the system was tested in an animal study and afterwards, for quantitative validation, in a dynamic phantom. Acquisition of beat to beat 3D-reconstruction was technically feasible. However, twisting of the ICE-catheter shaft due to friction and torsion was found and rotation was hampered. Also, depiction of catheters was not always ensured in case of parallel alignment. When a curved sheath was used for depicting cardiac anatomy, the shape and dimensions of static and moving objects were not depicted congruently. Beat to beat 3D-ICE-imaging is feasible. However, the shape and dimensions of static and moving objects cannot always be displayed with the steadiness needed in the clinical setting. As catheter depiction is also limited, clinical use seems impossible.
Autonomous Mobile Platform for Research in Cooperative Robotics
NASA Technical Reports Server (NTRS)
Daemi, Ali; Pena, Edward; Ferguson, Paul
1998-01-01
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
Image Statistics and the Representation of Material Properties in the Visual Cortex
Baumgartner, Elisabeth; Gegenfurtner, Karl R.
2016-01-01
We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images. PMID:27582714
Visual Tour Based on Panoramic Images for Indoor Places on Campus
NASA Astrophysics Data System (ADS)
Bakirman, T.
2012-07-01
In this paper, the aim is to create a visual tour based on panoramic images for the Civil Engineering Faculty of Yildiz Technical University. For this purpose, panoramic images had to be obtained: photos were taken with a tripod so that every photo had the same angle of view, and panoramic images were created by stitching the photos. Two cameras with different focal lengths were used. Finally, a visual tour with navigation tools was created from the panoramic images.
Novel All Digital Ring Cavity Locking Servo
NASA Astrophysics Data System (ADS)
Baker, J.; Gallant, D.; Lucero, A.; Miller, H.; Stohs, J.
We plan to use this servo in the new 50W 589-nm sodium guidestar laser to be installed in the AMOS facility in July 2010. Though the basic design is unchanged from the successful Hillman/Denman design, numerous improvements are being implemented in order to bring the device even further out of the lab and into the field. The basic building blocks of the Hillman/Denman design are two low-noise master oscillators that are injected into higher power slave oscillators, which are locked to the frequencies of the master oscillator cavities. In the previous system a traditional analog Pound-Drever-Hall (PDH) loop was employed to provide the frequency locking. Analog servos work well, in general, but robust locking for a complex set of multiply-interconnected PDH servos in the guidestar source challenges existing analog approaches. One of the significant changes demonstrated thus far is the implementation of an all-digital servo using only COTS components and a fast CISC processing architecture for orchestrating the basic PDH loops active within the system. Compared to the traditionally used analog servo loops, an all-digital servo is not only an orders-of-magnitude simpler servo loop to implement, but its control loop can also be modified by merely changing the computer code. Field conditions are often different from laboratory conditions, requiring subtle algorithm changes, and physical accessibility in the field is generally limited and difficult. Remotely implemented, trimmer-less and solderless servo upgrades are a much-welcomed improvement in the field-installed guidestar system. Replacing the usual benchtop components with OEM modules also saves considerable space and weight in the locking system. We will report on the details of the servo system and recent experimental results locking a master-slave laser oscillator system using the all-digital Pound-Drever-Hall loop.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion of a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressurized air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion in the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
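A position-based servo loop of this kind can be sketched as a simple PI law from the depth-camera displacement error to a signed valve command; the gains, limits, deadband and names below are assumptions for illustration, not the study's controller.

```python
class BladderPositionServo:
    """PI servo sketch for one inflatable air bladder (IAB).

    The depth camera supplies the measured head displacement; the output is a
    signed valve command (positive = inflate, negative = deflate). Gains,
    limits and the deadband are assumed values.
    """

    def __init__(self, kp=2.0, ki=0.4, dt=0.05, cmd_limit=1.0, deadband_mm=0.5):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.cmd_limit, self.deadband = cmd_limit, deadband_mm
        self.integral = 0.0

    def update(self, desired_mm, measured_mm):
        error = desired_mm - measured_mm
        if abs(error) < self.deadband:          # avoid chattering the valves near the target
            return 0.0
        self.integral += error * self.dt
        cmd = self.kp * error + self.ki * self.integral
        return max(-self.cmd_limit, min(self.cmd_limit, cmd))
```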
High precision tracking control of a servo gantry with dynamic friction compensation.
Zhang, Yangming; Yan, Peng; Zhang, Zhen
2016-05-01
This paper is concerned with the tracking control problem of a voice coil motor (VCM) actuated servo gantry system. By utilizing an adaptive control technique combined with a sliding mode approach, an adaptive sliding mode control (ASMC) law with friction compensation scheme is proposed in presence of both frictions and external disturbances. Based on the LuGre dynamic friction model, a dual-observer structure is used to estimate the unmeasurable friction state, and an adaptive control law is synthesized to effectively handle the unknown friction model parameters as well as the bound of the disturbances. Moreover, the proposed control law is also implemented on a VCM servo gantry system for motion tracking. Simulations and experimental results demonstrate good tracking performance, which outperform traditional control approaches. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
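The LuGre model underlying the friction compensation can be sketched as a single bristle-state integration step; the parameter values below are illustrative, and the paper's dual-observer estimation of the unmeasurable state and its adaptive parameter laws are not reproduced.

```python
import numpy as np

def lugre_friction(v, z, dt, sigma0=1e4, sigma1=100.0, sigma2=0.5, Fc=2.0, Fs=3.0, vs=0.01):
    """One Euler step of the LuGre friction model (illustrative parameters).

    v is the relative velocity (m/s) and z the internal bristle deflection state.
    Returns (friction_force, z_next).
    """
    g = Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)   # Stribeck curve
    z_dot = v - sigma0 * abs(v) / g * z           # bristle dynamics
    force = sigma0 * z + sigma1 * z_dot + sigma2 * v
    return force, z + z_dot * dt

# Example: friction force during a slow constant-velocity sweep.
z, dt = 0.0, 1e-4
for _ in range(2000):
    F, z = lugre_friction(v=0.005, z=z, dt=dt)
print(F)
```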
Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq
2018-01-01
For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor whereas FREAK is a dense descriptor. Moreover, SURF is a scale and rotation-invariant descriptor that performs better in the case of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, geometric, and photometric deformations. It also performs better at low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired speedy descriptor that performs better for classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that proposed technique based on visual words fusion significantly improved the performance of the CBIR as compared to the feature fusion of both descriptors and state-of-the-art image retrieval techniques. PMID:29694429
Kirchoff, Bruce K; Leggett, Roxanne; Her, Va; Moua, Chue; Morrison, Jessica; Poole, Chamika
2011-01-01
Advances in digital imaging have made possible the creation of completely visual keys. By a visual key we mean a key based primarily on images, and that contains a minimal amount of text. Characters in visual keys are visually, not verbally, defined. In this paper we create the first primarily visual key to a group of taxa, in this case the Fagaceae of the southeastern USA. We also modify our recently published set of best practices for image use in illustrated keys to make them applicable to visual keys. Photographs of the Fagaceae were obtained from internet and herbarium databases or were taken specifically for this project. The images were printed and then sorted into hierarchical groups. These hierarchical groups of images were used to create the 'couplets' in the key. A reciprocal process of key creation and testing was used to produce the final keys. Four keys were created, one for each of the parts-leaves, buds, fruits and bark. Species description pages consisting of multiple images were also created for each of the species in the key. Creation and testing of the key resulted in a modified list of best practices for image use in visual keys. The inclusion of images into paper and electronic keys has greatly increased their ease of use. However, virtually all of these keys are still based upon verbally defined, atomistic characters. The creation of primarily visual keys allows us to overcome the well-known limitations of linguistic-based characters and create keys that are much easier to use, especially for botanical novices.
Catching What We Can't See: Manual Interception of Occluded Fly-Ball Trajectories
Bosco, Gianfranco; Delle Monache, Sergio; Lacquaniti, Francesco
2012-01-01
Control of interceptive actions may involve fine interplay between feedback-based and predictive mechanisms. These processes rely heavily on target motion information available when the target is visible. However, short-term visual memory signals as well as implicit knowledge about the environment may also contribute to elaborate a predictive representation of the target trajectory, especially when visual feedback is partially unavailable because other objects occlude the visual target. To determine how different processes and information sources are integrated in the control of the interceptive action, we manipulated a computer-generated visual environment representing a baseball game. Twenty-four subjects intercepted fly-ball trajectories by moving a mouse cursor and by indicating the interception with a button press. In two separate sessions, fly-ball trajectories were either fully visible or occluded for 750, 1000 or 1250 ms before ball landing. Natural ball motion was perturbed during the descending trajectory with effects of either weightlessness (0 g) or increased gravity (2 g) at times such that, for occluded trajectories, 500 ms of perturbed motion were visible before ball disappearance. To examine the contribution of previous visual experience with the perturbed trajectories to the interception of invisible targets, the order of visible and occluded sessions was permuted among subjects. Under these experimental conditions, we showed that, with fully visible targets, subjects combined servo-control and predictive strategies. Instead, when intercepting occluded targets, subjects relied mostly on predictive mechanisms based, however, on different types of information depending on previous visual experience. In fact, subjects without prior experience of the perturbed trajectories showed interceptive errors consistent with predictive estimates of the ball trajectory based on a-priori knowledge of gravity. Conversely, the interceptive responses of subjects previously exposed to fully visible trajectories were compatible with the fact that implicit knowledge of the perturbed motion was also taken into account for the extrapolation of occluded trajectories. PMID:23166653
Fault Detection and Severity Analysis of Servo Valves Using Recurrence Quantification Analysis
2014-10-02
...diagnostics of nonlinear systems. A detailed nonlinear mathematical model of a servo electro-hydraulic system has been used to demonstrate the procedure... Two faults have been considered associated with the servo valve, including the increased friction between spool and sleeve and the degradation of the...
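Recurrence quantification analysis operates on a thresholded distance matrix of time-delay-embedded states; two of its common measures are the recurrence rate and determinism. The following generic sketch illustrates those computations; the embedding parameters, threshold, test signal, and the inclusion of the main diagonal are simplifying assumptions, not choices from the cited work.

```python
import numpy as np

def embed(x, dim=3, delay=2):
    """Time-delay embedding of a 1-D signal."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

def recurrence_matrix(states, eps):
    """Binary recurrence matrix: 1 where state pairs are closer than eps."""
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (d < eps).astype(int)

def rqa_measures(R, lmin=2):
    """Recurrence rate and determinism (fraction of recurrent points on diagonals >= lmin).
    The main diagonal (line of identity) is kept for brevity; RQA tools usually exclude it."""
    n = R.shape[0]
    rr = R.sum() / n**2
    diag_points = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in list(np.diag(R, k)) + [0]:
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_points += run
                run = 0
    det = diag_points / max(R.sum(), 1)
    return rr, det

if __name__ == "__main__":
    t = np.linspace(0, 20, 400)
    signal = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
    R = recurrence_matrix(embed(signal), eps=0.3)
    print("recurrence rate %.3f, determinism %.3f" % rqa_measures(R))
```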
Operating manual for the miniservo-control tester
Rapp, W.L.
1986-01-01
Ever since the implementation of servo-control units (regular and minimodels) with manometers at U. S. Geological Survey streamflow stations, the need for an effective and efficient servo-control unit tester has been paramount among field personnel. In numerous cases, servo-control unit failures were blamed on battery failures and vice versa. There was no valid instrument to definitively identify cause of failure, let alone properly diagnose the servo-control/manometer system. In 1983, two servo-control unit testers were developed and fabricated. One was mechanical in fabrication, operation, and serviceability; the other was electronic. The testers were extensively used and evaluated in Maine, Ohio, Kansas, and Louisiana under a wide range of environmental conditions. The consensus to integrate the best aspects of both testers into one instrument allowed the Survey to finally solve its long-time need for an effective, efficient servo-control unit tester. (USGS)
Visual improvement for bad handwriting based on Monte-Carlo method
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua
2014-03-01
A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper, in order to enhance the visual effect of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation are defined for transforming the typeface image to approach the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential to be applied on tablet computers and the Mobile Internet, in order to improve the user experience of handwriting.
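One way to read the approach above is as a Monte Carlo search over the parameters of a linear (affine) image transform so that the rendered typeface best matches the handwriting image. The sketch below is a loose illustration of that idea using random affine perturbations and a pixel-difference objective; the transform family, the objective, and the sampling scheme are assumptions, not the authors' operators or estimator.

```python
import numpy as np
from scipy.ndimage import affine_transform

def objective(typeface, handwriting, matrix, offset):
    """Pixel-wise mismatch between the transformed typeface and the handwriting image."""
    warped = affine_transform(typeface, matrix, offset=offset, order=1, mode="constant")
    return np.mean((warped - handwriting) ** 2)

def monte_carlo_fit(typeface, handwriting, n_samples=2000, seed=0):
    """Randomly sample small affine perturbations and keep the best-scoring one."""
    rng = np.random.default_rng(seed)
    best = (np.inf, np.eye(2), np.zeros(2))
    for _ in range(n_samples):
        matrix = np.eye(2) + rng.normal(scale=0.05, size=(2, 2))   # shear / scale / tilt
        offset = rng.normal(scale=2.0, size=2)                     # translation
        score = objective(typeface, handwriting, matrix, offset)
        if score < best[0]:
            best = (score, matrix, offset)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    typeface = rng.random((32, 32))
    # Fake "handwriting": the typeface slightly sheared and shifted.
    handwriting = affine_transform(typeface, np.array([[1.0, 0.08], [0.0, 1.0]]),
                                   offset=(1.0, -0.5), order=1)
    score, matrix, offset = monte_carlo_fit(typeface, handwriting)
    print(f"best mismatch {score:.4f}")
```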
Method for the reduction of image content redundancy in large image databases
Tobin, Kenneth William; Karnowski, Thomas P.
2010-03-02
A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between features vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.
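The decision described in this abstract reduces to a similarity test between the incoming image's feature vector and the most similar stored vector. A minimal sketch of such a gate follows; the cosine-similarity measure and the 0.95 threshold are illustrative assumptions rather than the patented formulation.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def should_store(incoming, stored_vectors, threshold=0.95):
    """Return (store?, similarity to the most similar stored image)."""
    if len(stored_vectors) == 0:
        return True, 0.0
    best = max(cosine_similarity(incoming, v) for v in stored_vectors)
    # Highly redundant images (similarity above the threshold) are not added to the index.
    return best < threshold, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    database = [rng.random(128) for _ in range(10)]
    new_image_features = database[3] + 0.01 * rng.random(128)   # near-duplicate image
    print(should_store(new_image_features, database))
```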
Visual attention based bag-of-words model for image classification
NASA Astrophysics Data System (ADS)
Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che
2014-04-01
Bag-of-words is a classical method for image classification. The core problems are how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the statistics of visual-word frequencies. On the other hand, the VABOW model combines shape, color and texture cues and uses an L1-regularized logistic regression method to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
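The central step of a saliency-weighted bag-of-words is to accumulate, for each visual word, the saliency value at the keypoint where its descriptor was found instead of a raw count. The snippet below sketches that weighting; the word assignments, keypoint coordinates, and saliency map are placeholders, not the VABOW pipeline.

```python
import numpy as np

def saliency_weighted_bow(word_ids, keypoints_xy, saliency_map, vocab_size):
    """Accumulate saliency values instead of raw counts for each visual word."""
    hist = np.zeros(vocab_size)
    h, w = saliency_map.shape
    for word, (x, y) in zip(word_ids, keypoints_xy):
        xi = int(np.clip(round(x), 0, w - 1))
        yi = int(np.clip(round(y), 0, h - 1))
        hist[word] += saliency_map[yi, xi]
    total = hist.sum()
    return hist / total if total > 0 else hist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    saliency = rng.random((240, 320))          # placeholder saliency map in [0, 1]
    words = rng.integers(0, 100, size=500)     # visual word id per keypoint
    coords = np.column_stack([rng.uniform(0, 320, 500), rng.uniform(0, 240, 500)])
    print(saliency_weighted_bow(words, coords, saliency, vocab_size=100)[:5])
```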
Franck, J.V.; Broadhead, P.S.; Skiff, E.W.
1959-07-14
A semiautomatic measuring projector particularly adapted for measurement of the coordinates of photographic images of particle tracks as produced in a bubble or cloud chamber is presented. A viewing screen aids the operator in selecting a particle track for measurement. After approximate manual alignment, an image scanning system coupled to a servo control provides automatic exact alignment of a track image with a reference point. The apparatus can follow along a track with a continuous motion while recording coordinate data at various selected points along the track. The coordinate data is recorded on punched cards for subsequent computer calculation of particle trajectory, momentum, etc.
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R
2015-10-01
This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature.
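The "visualness" measure described above is the Shannon entropy of pixel values within an image patch, used to refine the concept feature vector. A minimal sketch of that computation follows; the bin count, intensity range, and the way patch entropies are averaged per concept are assumptions for illustration.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (in bits) of the pixel-value distribution in one image patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_features(concept_vector, patches_per_concept):
    """Scale each concept's contribution by the mean entropy of its supporting patches."""
    weights = np.array([np.mean([patch_entropy(p) for p in plist]) if plist else 0.0
                        for plist in patches_per_concept])
    return concept_vector * weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = np.full((16, 16), 0.5)        # low-entropy (visually uninformative) patch
    textured = rng.random((16, 16))      # high-entropy (visually informative) patch
    print(patch_entropy(flat), patch_entropy(textured))
```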
Dual arm master controller for a bilateral servo-manipulator
Kuban, Daniel P.; Perkins, Gerald S.
1989-01-01
A master controller for a mechanically dissimilar bilateral slave servo-manipulator is disclosed. The master controller includes a plurality of drive trains comprising a plurality of sheave arrangements and cables for controlling upper and lower degrees of master movement. The cables and sheaves of the master controller are arranged to effect kinematic duplication of the slave servo-manipulator, despite mechanical differences therebetween. A method for kinematically matching a master controller to a slave servo-manipulator is also disclosed.
Compensating Unknown Time-Varying Delay in Opto-Electronic Platform Tracking Servo System.
Xie, Ruihong; Zhang, Tao; Li, Jiaquan; Dai, Ming
2017-05-09
This paper investigates the problem of compensating miss-distance delay in an opto-electronic platform tracking servo system. According to the characteristics of LOS (line-of-sight) motion, we set up a Markovian process model and compensate this unknown time-varying delay by a feed-forward forecasting controller based on robust H∞ control. Finally, simulation based on a double closed-loop PI (proportional-integral) control system indicates that the proposed method is effective for compensating unknown time-varying delay. Tracking experiments on the opto-electronic platform indicate that the RMS (root-mean-square) error is 1.253 mrad when tracking a 10°, 0.2 Hz signal.
Taubman, Matthew S [Richland, WA
2005-03-15
Among the embodiments of the present invention is an apparatus that includes a transistor (30), a servo device (40), and a current source (50). The servo device (40) is operable to provide a common base mode of operation of the transistor (30) by maintaining an approximately constant voltage level at the transistor base (32b). The current source (50) is operable to provide a bias current to the transistor (30). A first device (24) provides an input signal to an electrical node (70) positioned between the emitter (32e) of the transistor (30) and the current source (50). A second device (26) receives an output signal from the collector (32c) of the transistor (30).
Sinusoidal visuomotor tracking: intermittent servo-control or coupled oscillations?
Russell, D M; Sternad, D
2001-12-01
In visuomotor tasks that involve accuracy demands, small directional changes in the trajectories have been taken as evidence of feedback-based error corrections. In the present study variability, or intermittency, in visuomanual tracking of sinusoidal targets was investigated. Two lines of analyses were pursued: First, the hypothesis that humans fundamentally act as intermittent servo-controllers was re-examined, probing the question of whether discontinuities in the movement trajectory directly imply intermittent control. Second, an alternative hypothesis was evaluated: that rhythmic tracking movements are generated by entrainment between the oscillations of the target and the actor, such that intermittency expresses the degree of stability. In 2 experiments, participants (N = 6 in each experiment) swung 1 of 2 different hand-held pendulums, tracking a rhythmic target that oscillated at different frequencies with a constant amplitude. In 1 line of analyses, the authors tested the intermittency hypothesis by using the typical kinematic error measures and spectral analysis. In a 2nd line, they examined relative phase and its variability, following analyses of rhythmic interlimb coordination. The results showed that visually guided corrective processes play a role, especially for slow movements. Intermittency, assessed as frequency and power components of the movement trajectory, was found to change as a function of both target frequency and the manipulandum's inertia. Support for entrainment was found in conditions in which task frequency was identical to or higher than the effector's eigenfrequency. The results suggest that it is the symmetry between task and effector that determines which behavioral regime is dominant.
Scheiding, Sebastian; Yi, Allen Y; Gebhardt, Andreas; Li, Lei; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas
2011-11-21
We report what is to our knowledge the first approach to diamond turning a microoptical lens array on a steeply curved substrate by use of a voice coil fast tool servo. In recent years, ultraprecision machining has been employed to manufacture accurate optical components with 3D structure for beam shaping, imaging and nonimaging applications. As a result, geometries that are difficult or impossible to manufacture using lithographic techniques might be fabricated using small diamond tools with well defined cutting edges. These 3D structures show no rotational symmetry, but rather high frequency asymmetric features, and thus can be treated as freeform geometries. To transfer the 3D surface data with the high frequency freeform features into a numerical control code for machining, the commonly piecewise differentiable surfaces are represented as a cloud of individual points. Based on this numeric data, the tool radius correction is calculated to account for the cutting-edge geometry. Discontinuities of the cutting tool locations due to abrupt slope changes on the substrate surface are bridged using cubic spline interpolation. When superimposed with the trajectory of the rotationally symmetric substrate, the complete microoptical geometry in 3D space is established. Details of the fabrication process and performance evaluation are described. © 2011 Optical Society of America
Anti-disturbance rapid vibration suppression of the flexible aerial refueling hose
NASA Astrophysics Data System (ADS)
Su, Zikang; Wang, Honglun; Li, Na
2018-05-01
As an extremely dangerous phenomenon in autonomous aerial refueling (AAR), the flexible refueling hose vibration caused by the receiver aircraft's excessive closure speed should be suppressed once it appears. This paper proposes a permanent magnet synchronous motor (PMSM) based refueling hose servo take-up system for the vibration suppression of the flexible refueling hose. A rapid back-stepping based anti-disturbance nonsingular fast terminal sliding mode (NFTSM) control scheme with a specially established finite-time convergence NFTSM observer is proposed for the PMSM based hose servo take-up system under uncertainties and disturbances. The unmeasured load torque and other disturbances in the PMSM system are reconstructed by the NFTSM observer and compensated during the controller design. Then, with the back-stepping technique, a rapid anti-disturbance NFTSM controller is proposed for the PMSM angular tracking to improve the tracking error convergence speed and tracking precision. The proposed vibration suppression scheme is then applied to the PMSM based hose servo take-up system for refueling hose vibration suppression in AAR. Simulation results show that the proposed scheme can suppress the hose vibration rapidly and accurately even when the system is exposed to strong uncertainties and probe position disturbances, and that it is more competitive in tracking accuracy, tracking error convergence speed and robustness.
Modeling and stability of electro-hydraulic servo of hydraulic excavator
NASA Astrophysics Data System (ADS)
Jia, Wenhua; Yin, Chenbo; Li, Guo; Sun, Menghui
2017-11-01
The working conditions of a hydraulic excavator are complicated and its working environment is harsh, so the safety and stability of the control system are influenced by external factors. This paper selects the electro-hydraulic servo system of a hydraulic excavator as the research object. A mathematical model and an AMESim simulation model of the servo system are established, and the pressure and flow characteristics are then analyzed. The design and optimization of the electro-hydraulic servo system and its application in engineering machinery are provided.
Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization
Chiu, Chung-Cheng; Ting, Chih-Chung
2016-01-01
Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It refers to a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on the adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. Besides, it also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
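For reference, plain histogram equalization maps each gray level through the normalized cumulative histogram; gap-adjustment methods such as CegaHE then modify the spacing between adjacent output levels before applying the map. The sketch below shows only the baseline HE step on an 8-bit grayscale image, not the gap-adjustment rule itself.

```python
import numpy as np

def histogram_equalize(gray):
    """Classic histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[cdf > 0][0]
    # Map each level through the normalized CDF; gap-adjustment schemes would
    # modify the spacing between these output levels before applying the map.
    mapping = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12) * 255).astype(np.uint8)
    return mapping[gray]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_contrast = rng.normal(120, 10, size=(64, 64)).clip(0, 255).astype(np.uint8)
    out = histogram_equalize(low_contrast)
    print(low_contrast.std(), out.std())   # contrast (std) increases after equalization
```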
NASA Astrophysics Data System (ADS)
Wolf, Ivo; Nolden, Marco; Schwarz, Tobias; Meinzer, Hans-Peter
2010-02-01
The Medical Imaging Interaction Toolkit (MITK) and the eXtensible Imaging Platform (XIP) both aim at facilitating the development of medical imaging applications, but provide support on different levels. MITK offers support from the toolkit level, whereas XIP comes with a visual programming environment. XIP is strongly based on Open Inventor. Open Inventor with its scene graph-based rendering paradigm was not specifically designed for medical imaging, but focuses on creating dedicated visualizations. MITK has a visualization concept with a model-view-controller like design that assists in implementing multiple, consistent views on the same data, which is typically required in medical imaging. In addition, MITK defines a unified means of describing position, orientation, bounds, and (if required) local deformation of data and views, supporting e.g. images acquired with gantry tilt and curved reformations. The actual rendering is largely delegated to the Visualization Toolkit (VTK). This paper presents an approach of how to integrate the visualization concept of MITK with XIP, especially into the XIP-Builder. This is a first step of combining the advantages of both platforms. It enables experimenting with algorithms in the XIP visual programming environment without requiring a detailed understanding of Open Inventor. Using MITK-based add-ons to XIP, any number of data objects (images, surfaces, etc.) produced by algorithms can simply be added to an MITK DataStorage object and rendered into any number of slice-based (2D) or 3D views. Both MITK and XIP are open-source C++ platforms. The extensions presented in this paper will be available from www.mitk.org.
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are not accurate enough and have limited reference value, because their mathematical models are relatively simple, the change of the load and the initial displacement change of the piston are ignored, and experimental verification is not conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. Through deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations respectively, the expression of the sensitivity equations based on the nonlinear mathematical model is obtained. According to the structure parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, the simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink simulation platform with displacement steps of 2 mm, 5 mm and 10 mm, respectively. The simulation results indicate that the developed nonlinear mathematical model is sufficient, by comparing the characteristic curves of the experimental step response and the simulation step response under different constant loads. Then, the sensitivity function time-history curves of seventeen parameters are obtained, based on each state vector time-history curve of the step response characteristic. The maximum value of the displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their change rules are analyzed. Then the sensitivity index values of four measurable parameters, namely supply pressure, proportional gain, initial position of the servo cylinder piston and load force, are verified experimentally on a test platform of the hydraulic drive unit, and the experimental research shows that the sensitivity analysis results obtained through simulation are close to the test results. This research indicates the sensitivity characteristics of each parameter of the hydraulic drive unit; the main and secondary performance-affecting parameters are identified under different working conditions, which provides the theoretical foundation for the control compensation and structure optimization of the hydraulic drive unit.
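The two sensitivity indexes described above, the maximum displacement-variation percentage and the sum of absolute displacement variations over the sampling time, are simple to compute from a pair of step-response traces. The sketch below uses synthetic first-order responses as stand-ins for the nominal and perturbed-parameter simulations; the percentage normalization by the nominal peak is an assumption, as the paper's exact normalization is not given here.

```python
import numpy as np

def sensitivity_indexes(nominal, perturbed):
    """Indexes from the displacement variation between a nominal response and a
    response with one parameter perturbed (both sampled on the same time grid)."""
    variation = perturbed - nominal
    ref = max(np.max(np.abs(nominal)), 1e-12)
    max_percent = 100.0 * np.max(np.abs(variation)) / ref   # peak variation, % of nominal peak
    abs_sum = float(np.sum(np.abs(variation)))              # cumulative variation over the window
    return max_percent, abs_sum

if __name__ == "__main__":
    t = np.linspace(0, 2, 500)
    nominal = 10.0 * (1 - np.exp(-4 * t))       # 10 mm step response (illustrative)
    perturbed = 10.0 * (1 - np.exp(-3.6 * t))   # same step with one parameter perturbed
    print(sensitivity_indexes(nominal, perturbed))
```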
Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing
2014-09-01
In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving the tracking errors around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Woehrle, Holger; Cowie, Martin R; Eulenburg, Christine; Suling, Anna; Angermann, Christiane; d'Ortho, Marie-Pia; Erdmann, Erland; Levy, Patrick; Simonds, Anita K; Somers, Virend K; Zannad, Faiez; Teschler, Helmut; Wegscheider, Karl
2017-08-01
This on-treatment analysis was conducted to facilitate understanding of mechanisms underlying the increased risk of all-cause and cardiovascular mortality in heart failure patients with reduced ejection fraction and predominant central sleep apnoea randomised to adaptive servo ventilation versus the control group in the SERVE-HF trial.Time-dependent on-treatment analyses were conducted (unadjusted and adjusted for predictive covariates). A comprehensive, time-dependent model was developed to correct for asymmetric selection effects (to minimise bias).The comprehensive model showed increased cardiovascular death hazard ratios during adaptive servo ventilation usage periods, slightly lower than those in the SERVE-HF intention-to-treat analysis. Self-selection bias was evident. Patients randomised to adaptive servo ventilation who crossed over to the control group were at higher risk of cardiovascular death than controls, while control patients with crossover to adaptive servo ventilation showed a trend towards lower risk of cardiovascular death than patients randomised to adaptive servo ventilation. Cardiovascular risk did not increase as nightly adaptive servo ventilation usage increased.On-treatment analysis showed similar results to the SERVE-HF intention-to-treat analysis, with an increased risk of cardiovascular death in heart failure with reduced ejection fraction patients with predominant central sleep apnoea treated with adaptive servo ventilation. Bias is inevitable and needs to be taken into account in any kind of on-treatment analysis in positive airway pressure studies. Copyright ©ERS 2017.
Learning to rank using user clicks and visual features for image retrieval.
Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong
2015-04-01
The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in the image ranking model. However, the existing ranking model cannot integrate visual features, which are efficient in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank model based on visual features and user clicks outperforms state-of-the-art algorithms.
Dictionary Pruning with Visual Word Significance for Medical Image Retrieval
Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G.; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei
2016-01-01
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency. PMID:27688597
Evaluating transient performance of servo mechanisms by analysing stator current of PMSM
NASA Astrophysics Data System (ADS)
Zhang, Qing; Tan, Luyao; Xu, Guanghua
2018-02-01
Smooth running and rapid response are the desired performance goals for the transient motions of servo mechanisms. Because of the uncertain and unobservable transient behaviour of servo mechanisms, it is difficult to evaluate their transient performance. Under the effects of electromechanical coupling, the stator current signals of a permanent-magnet synchronous motor (PMSM) potentially contain performance information regarding the servo mechanism in use. In this paper, a novel method based on analysing the stator current of the PMSM is proposed for quantifying the transient performance. First, a vector control model is constructed to simulate the stator current behaviour in the transient processes of consecutive speed changes, consecutive load changes, and intermittent start-stops. It is discovered that the amplitude and frequency of the stator current are modulated by the transient load torque and motor speed, respectively. The stator currents under different performance conditions are also simulated and compared. Then, the stator current is processed using a local mean decomposition (LMD) algorithm to extract the instantaneous amplitude and instantaneous frequency. The sample entropy of the instantaneous amplitude, which reflects the complexity of the load torque variation, is calculated as a performance indicator of smooth running. The peak-to-peak value of the instantaneous frequency, which defines the range of the motor speed variation, is set as a performance indicator of rapid response. The proposed method is applied to both simulated data of an intermittent start-stop process and experimental data measured for a batch of servo turrets for turning lathes. The results show that the performance evaluations agree with the actual performance.
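The two indicators extracted here are standard signal statistics: the sample entropy of the instantaneous amplitude and the peak-to-peak value of the instantaneous frequency. Below is a generic sketch of both computations; the embedding dimension, tolerance, and the synthetic stand-ins for the LMD outputs are assumptions, and the LMD step itself is not shown.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D series (embedding m, tolerance r = r_factor * std)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        np.fill_diagonal(d, np.inf)          # exclude self-matches
        return np.sum(d <= r)

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def peak_to_peak(x):
    return float(np.max(x) - np.min(x))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    inst_amplitude = 1.0 + 0.1 * rng.normal(size=600)                 # stand-in for LMD amplitude
    inst_frequency = 50.0 + 2.0 * np.sin(np.linspace(0, 6, 600))      # stand-in for LMD frequency
    print("SampEn:", round(sample_entropy(inst_amplitude), 3),
          "peak-to-peak frequency:", round(peak_to_peak(inst_frequency), 3))
```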
2013-01-01
Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
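The shape-based step above generates intermediary slices between acquired 2D frames with natural cubic splines. A simplified per-pixel version of that idea, interpolating each pixel's intensity along the pullback axis with SciPy's CubicSpline, is sketched below; the published method works on vessel geometry and backscatter data rather than raw pixels, so this is only an illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_slices(slices, n_between=3):
    """Insert n_between intermediary frames between consecutive IVUS slices by
    fitting a natural cubic spline through every pixel along the slice axis."""
    slices = np.asarray(slices, dtype=float)           # shape (n_slices, H, W)
    z = np.arange(slices.shape[0])
    spline = CubicSpline(z, slices, axis=0, bc_type="natural")
    z_fine = np.linspace(z[0], z[-1], (len(z) - 1) * (n_between + 1) + 1)
    return spline(z_fine)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stack = rng.random((6, 64, 64))                    # placeholder IVUS pullback stack
    dense = interpolate_slices(stack, n_between=3)
    print(stack.shape, "->", dense.shape)              # (6, 64, 64) -> (21, 64, 64)
```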
Visual affective classification by combining visual and text features.
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations along with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated to images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
Hyperspectral image visualization based on a human visual model
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Peng, Honghong; Fairchild, Mark D.; Montag, Ethan D.
2008-02-01
Hyperspectral image data can provide very fine spectral resolution with more than 200 bands, yet presents challenges for visualization techniques for displaying such rich information on a tristimulus monitor. This study developed a visualization technique by taking advantage of both the consistent natural appearance of a true color image and the feature separation of a PCA image based on a biologically inspired visual attention model. The key part is to extract the informative regions in the scene. The model takes into account human contrast sensitivity functions and generates a topographic saliency map for both images. This is accomplished using a set of linear "center-surround" operations simulating visual receptive fields as the difference between fine and coarse scales. A difference map between the saliency map of the true color image and that of the PCA image is derived and used as a mask on the true color image to select a small number of interesting locations where the PCA image has more salient features than available in the visible bands. The resulting representations preserve hue for vegetation, water, road etc., while the selected attentional locations may be analyzed by more advanced algorithms.
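The "center-surround" operations mentioned above are often approximated as the difference between a fine-scale and a coarse-scale Gaussian blur of a feature channel. The sketch below illustrates that approximation and the mask formed from the difference of two saliency maps; the blur scales and the selection margin are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(channel, fine_sigma=1.0, coarse_sigma=8.0):
    """Saliency proxy: |fine blur - coarse blur|, rescaled to [0, 1]."""
    cs = np.abs(gaussian_filter(channel, fine_sigma) - gaussian_filter(channel, coarse_sigma))
    return (cs - cs.min()) / (cs.max() - cs.min() + 1e-12)

def attention_mask(true_color_saliency, pca_saliency, margin=0.2):
    """Locations where the PCA image is more salient than the true-color image."""
    return (pca_saliency - true_color_saliency) > margin

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tc = rng.random((128, 128))      # stand-in for a true-color luminance channel
    pca = rng.random((128, 128))     # stand-in for a PCA-band channel
    mask = attention_mask(center_surround_saliency(tc), center_surround_saliency(pca))
    print("selected locations:", int(mask.sum()))
```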
Large aperture freeform VIS telescope with smart alignment approach
NASA Astrophysics Data System (ADS)
Beier, Matthias; Fuhlrott, Wilko; Hartung, Johannes; Holota, Wolfgang; Gebhardt, Andreas; Risse, Stefan
2016-07-01
The development of smart alignment and integration strategies for imaging mirror systems to be used within astronomical instrumentation is especially important with regard to the increasing impact of non-rotationally symmetric optics. In the present work, well-known assembly approaches preferentially applied in the course of infrared instrumentation are transferred to visible applications and are verified during the integration of an anamorphic imaging telescope breadboard. The four mirror imaging system is based on a modular concept using mechanically fixed arrangements of two freeform surfaces each, generated by servo assisted diamond machining and corrected using Magnetorheological Finishing as a figuring and smoothing step. Surface testing includes optical CGH interferometry as well as tactile profilometry and is conducted with respect to diamond milled fiducials at the mirror bodies. A strict compliance of surface referencing during all significant fabrication steps allows for easy integration and direct measurement of the system's wave aberration after initial assembly. The achievable imaging performance, as well as influences of the tight tolerance budget and mid-spatial frequency errors, are discussed and experimentally evaluated.
A neotropical Miocene pollen database employing image-based search and semantic modeling.
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren
2014-08-01
Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
Occam's razor: supporting visual query expression for content-based image queries
NASA Astrophysics Data System (ADS)
Venters, Colin C.; Hartley, Richard J.; Hewitt, William T.
2005-01-01
This paper reports the results of a usability experiment that investigated visual query formulation on three dimensions: effectiveness, efficiency, and user satisfaction. Twenty eight evaluation sessions were conducted in order to assess the extent to which query by visual example supports visual query formulation in a content-based image retrieval environment. In order to provide a context and focus for the investigation, the study was segmented by image type, user group, and use function. The image type consisted of a set of abstract geometric device marks supplied by the UK Trademark Registry. Users were selected from the 14 UK Patent Information Network offices. The use function was limited to the retrieval of images by shape similarity. Two client interfaces were developed for comparison purposes: Trademark Image Browser Engine (TRIBE) and Shape Query Image Retrieval Systems Engine (SQUIRE).
Occam"s razor: supporting visual query expression for content-based image queries
NASA Astrophysics Data System (ADS)
Venters, Colin C.; Hartley, Richard J.; Hewitt, William T.
2004-12-01
This paper reports the results of a usability experiment that investigated visual query formulation on three dimensions: effectiveness, efficiency, and user satisfaction. Twenty eight evaluation sessions were conducted in order to assess the extent to which query by visual example supports visual query formulation in a content-based image retrieval environment. In order to provide a context and focus for the investigation, the study was segmented by image type, user group, and use function. The image type consisted of a set of abstract geometric device marks supplied by the UK Trademark Registry. Users were selected from the 14 UK Patent Information Network offices. The use function was limited to the retrieval of images by shape similarity. Two client interfaces were developed for comparison purposes: Trademark Image Browser Engine (TRIBE) and Shape Query Image Retrieval Systems Engine (SQUIRE).
78 FR 42406 - Airworthiness Directives; Eurocopter France Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-16
... 3 of the Rotorcraft Flight Manual. Many of the non-compliant servo-controls were installed by the... Emergency AD, we have discovered that the servo-control's component history card or equivalent record may... servo-controls with a non-compliant input lever bearing be replaced and returned to the manufacturer. AD...
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, as well as variations in poses and facial expressions can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system based on the human-visual-system-inspired, so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the ATT database are used for accuracy and efficiency testing in computer simulation. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
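The feature-extraction stage above pairs a logarithmic intensity mapping with the local binary pattern (LBP) operator. The sketch below shows a basic 8-neighbour LBP applied after a simple log transform; it illustrates the operator itself, with the log mapping, neighbourhood ordering, and histogram size as assumptions, not the authors' full pipeline.

```python
import numpy as np

def log_visualize(gray):
    """Simple logarithmic intensity mapping to compress illumination variation."""
    g = gray.astype(float)
    return np.log1p(g) / np.log1p(g.max() + 1e-12)

def lbp_8neighbour(img):
    """Basic 3x3 local binary pattern codes for the interior pixels of an image."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=int)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(int) << bit   # set one bit per neighbour comparison
    return codes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    hist = np.bincount(lbp_8neighbour(log_visualize(face)).ravel(), minlength=256)
    print(hist[:8])    # LBP histogram bins would serve as the recognition feature
```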
Visual Contrast Enhancement Algorithm Based on Histogram Equalization
Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching
2015-01-01
Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
Combining textual and visual information for image retrieval in the medical domain.
Gkoufas, Yiannis; Morou, Anna; Kalamboukis, Theodore
2011-01-01
In this article we have assembled the experience obtained from our participation in the imageCLEF evaluation task over the past two years. Exploitation on the use of linear combinations for image retrieval has been attempted by combining visual and textual sources of images. From our experiments we conclude that a mixed retrieval technique that applies both textual and visual retrieval in an interchangeably repeated manner improves the performance while overcoming the scalability limitations of visual retrieval. In particular, the mean average precision (MAP) has increased from 0.01 to 0.15 and 0.087 for 2009 and 2010 data, respectively, when content-based image retrieval (CBIR) is performed on the top 1000 results from textual retrieval based on natural language processing (NLP).
Luo, Ying; Chen, Yangquan; Pi, Youguo
2010-10-01
The cogging effect, which can be treated as a type of position-dependent periodic disturbance, is a serious disadvantage of the permanent magnet synchronous motor (PMSM). In this paper, based on a simulation system model of PMSM position servo control, the cogging force, viscous friction, and applied load in the real PMSM control system are considered and presented. A dual high-order periodic adaptive learning compensation (DHO-PALC) method is proposed to minimize the cogging effect on the PMSM position and velocity servo system. In this DHO-PALC scheme, stored information from more than one previous period of both the composite tracking error and the estimate of the cogging force is used for the control law updating. An asymptotical stability proof with the proposed DHO-PALC scheme is presented. Simulation is implemented on the PMSM servo system model to illustrate the proposed method. When a constant speed reference is applied, the DHO-PALC can achieve a faster learning convergence speed than the first-order periodic adaptive learning compensation (FO-PALC). Moreover, when the designed reference signal changes periodically, the proposed DHO-PALC can obtain not only a faster convergence speed, but also a much smaller final error bound than the FO-PALC. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Jianhua; Zhou, Songlin; Lu, Xianghui; Gao, Dianrong
2015-09-01
The double flapper-nozzle servo valve is widely used in launch and guidance equipment. Because of the large instantaneous flow rate when the servo valve operates under certain conditions, its temperature can reach 120°C, and the valve core and valve sleeve deform within a short time. As a result, the control precision of the servo valve decreases significantly and clamping stagnation of the valve core appears. In order to address the degraded control accuracy and clamping stagnation of the servo valve under large temperature differences, a numerical simulation of heat-fluid-solid coupling is performed using the finite element method. The simulation results show that zero-position leakage of the servo valve is determined mainly by oil temperature and the change of fit clearance. The clamping stagnation is caused by warpage-deformation and fit-clearance reduction of the valve core and valve sleeve. The distributions of temperature and thermal deformation of the shell, valve core and valve sleeve, and the pressure, velocity and temperature fields of the flow channel, are also analyzed. Zero-position leakage and the electromagnet current during full-stroke motion of the valve core are tested on the electro-hydraulic servo-valve characteristic test bed of an aerospace science and technology corporation. The experimental results show that the variation of the measured current at different oil temperatures closely matches the simulated current. The current curve of the electromagnet is smooth when the oil temperature is below 80°C, but the current amplitude increases significantly and the curve becomes ragged when the oil temperature is above 80°C. The current becomes smooth again after the warped valve core and valve sleeve are reground, indicating that the clamping stagnation is caused by warpage-deformation and fit-clearance reduction of the valve core and valve sleeve. This paper simulates and tests the heat-fluid-solid coupling of the double flapper-nozzle servo valve, and the results provide reference values for the design of double flapper-nozzle force-feedback servo valves.
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The goal of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from various source images and then to obtain a fused image. The process involves two main steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using the HVS weights. Hence, qualitative sub-bands are selected from different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority over state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
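A minimal single-level version of the HVS-weighted selection could look like the sketch below, where assumed per-band weights steer a choose-the-more-active-source rule for the detail sub-bands; the weight values and the aggregation across bands are assumptions, not the paper's exact procedure.

```python
# Single-level DWT fusion sketch: an HVS-weighted activity map decides, per
# coefficient, which source supplies the detail bands; the LL band is averaged.
import numpy as np
import pywt

HVS_WEIGHTS = {"LH": 1.0, "HL": 1.0, "HH": 0.5}   # assumed weights, not the paper's values

def fuse_dwt_hvs(img_a, img_b, wavelet="db2"):
    cA_a, details_a = pywt.dwt2(img_a.astype(float), wavelet)
    cA_b, details_b = pywt.dwt2(img_b.astype(float), wavelet)
    names = ("LH", "HL", "HH")
    # One HVS-weighted activity map per source, aggregated over the detail bands.
    act_a = sum(HVS_WEIGHTS[n] * np.abs(d) for n, d in zip(names, details_a))
    act_b = sum(HVS_WEIGHTS[n] * np.abs(d) for n, d in zip(names, details_b))
    choose_a = act_a >= act_b
    fused_details = tuple(np.where(choose_a, da, db) for da, db in zip(details_a, details_b))
    fused_approx = 0.5 * (cA_a + cA_b)            # simple average for the LL band
    return pywt.idwt2((fused_approx, fused_details), wavelet)

if __name__ == "__main__":
    a, b = np.random.rand(128, 128), np.random.rand(128, 128)
    print(fuse_dwt_hvs(a, b).shape)
```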
On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation
2015-03-01
Visual Structure from Motion (VisualSFM) is an application that performs incremental SfM using images of a scene fed into it [20]. ... too drastically in between frames. When this happens, VisualSFM will begin creating a new model with images that do not fit the old one. ...
Development of the Software for 30 inch Telescope Control System at KHAO
NASA Astrophysics Data System (ADS)
Mun, B.-S.; Kim, S.-J.; Jang, M.; Min, S.-W.; Seol, K.-H.; Moon, K.-S.
2006-12-01
Although the 30-inch optical telescope at Kyung Hee Astronomy Observatory has produced a series of scientific achievements since its first light in 1992, numerous difficulties in operating the telescope have hindered the precise observations needed for further research. The currently used PC-TCS (Personal Computer based Telescope Control System) software, based on the ISA-bus type, is outdated: it does not provide a user-friendly interface and cannot be scaled. In addition, accumulated errors caused by discordance between the input and output signals of the motion controller called for a new control system. We have therefore improved the telescope control system by updating the software and modifying mechanical parts. We applied a new BLDC (brushless DC) servo motor system to the mechanical parts of the telescope and developed control software using Visual Basic 6.0. As a result, we achieved high accuracy in controlling the telescope and a user-friendly GUI (Graphic User Interface).
NASA Astrophysics Data System (ADS)
Chen, Syuan-Yi; Gong, Sheng-Sian
2017-09-01
This study aims to develop an adaptive high-precision control system for controlling the speed of a vane-type air motor (VAM) pneumatic servo system. In practice, the rotor speed of a VAM depends on the input mass air flow, which can be controlled by the effective orifice area (EOA) of an electronic throttle valve (ETV). As the control variable of a second-order pneumatic system is the integral of the EOA, an observation-based adaptive dynamic sliding-mode control (ADSMC) system is proposed to derive the differential of the control variable, namely, the EOA control signal. In the ADSMC system, a proportional-integral-derivative fuzzy neural network (PIDFNN) observer is used to achieve an ideal dynamic sliding-mode control (DSMC), and a supervisor compensator is designed to eliminate the approximation error. As a result, the ADSMC incorporates the robustness of a DSMC and the online learning ability of a PIDFNN. To ensure the convergence of the tracking error, a Lyapunov-based analytical method is employed to obtain the adaptive algorithms required to tune the control parameters of the online ADSMC system. Finally, our experimental results demonstrate the precision and robustness of the ADSMC system for highly nonlinear and time-varying VAM pneumatic servo systems.
An open source digital servo for atomic, molecular, and optical physics experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leibrandt, D. R., E-mail: david.leibrandt@nist.gov; Heidecker, J.
2015-12-15
We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of 27Al+ in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser.
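For orientation, the sketch below shows the kind of per-sample loop-filter update such a digital servo executes, reduced to a first-order proportional-integral filter; the gains, sample rate and toy plant are arbitrary illustrative values, not the instrument's firmware.

```python
# Minimal PI loop-filter sketch: one update per sample, error in, correction out.
class PILoopFilter:
    """One channel of a digital servo reduced to a proportional-integral filter."""
    def __init__(self, kp, ki, fs, setpoint=0.0):
        self.kp, self.ki = kp, ki
        self.dt = 1.0 / fs
        self.setpoint = setpoint
        self.integral = 0.0

    def update(self, measurement):
        """One servo cycle: compute error, accumulate integral, return correction."""
        error = self.setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

if __name__ == "__main__":
    servo = PILoopFilter(kp=0.5, ki=2_000.0, fs=1e6)   # 1 MS/s loop, arbitrary gains
    y = 1.0                                            # toy plant output, starts off-lock
    for _ in range(5000):
        y += 0.01 * servo.update(y)                    # first-order toy plant response
    print(round(y, 4))                                 # settles near the 0.0 setpoint
```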
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
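A stripped-down two-scale variant of this pipeline is sketched below: a Gaussian filter stands in for the rolling-guidance MSD, a smoothed high-pass magnitude stands in for the visual saliency map, and a choose-max rule replaces the WLS detail optimization. All of these substitutions are simplifying assumptions made to keep the example short.

```python
# Two-scale IR/visible fusion sketch: saliency-weighted base layer plus
# max-selected detail layer.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fusion(ir, vis, sigma=5.0):
    ir, vis = ir.astype(float), vis.astype(float)
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # Saliency stand-in: magnitude of the high-pass response, lightly smoothed.
    sal_ir = gaussian_filter(np.abs(det_ir), sigma)
    sal_vis = gaussian_filter(np.abs(det_vis), sigma)
    w_ir = sal_ir / (sal_ir + sal_vis + 1e-12)          # per-pixel base-layer weight
    fused_base = w_ir * base_ir + (1.0 - w_ir) * base_vis
    fused_detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return fused_base + fused_detail

if __name__ == "__main__":
    a, b = np.random.rand(100, 100), np.random.rand(100, 100)
    print(two_scale_fusion(a, b).shape)
```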
Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network
NASA Astrophysics Data System (ADS)
Ong, Jia Jan; Ang, L.-M.; Seng, K. P.
This paper presents a practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A WVSN consists of visual nodes that capture video and transmit it to the base-station without processing. Limited network bandwidth restricts real-time video streaming from remote visual nodes over the wireless link. Three layers of DWT filters are implemented to process the image captured from the camera. Once all the wavelet coefficients are produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base-station, which reduces the power required for transmission. When necessary, transmitting all the wavelet coefficients produces the full image detail, similar to the image captured at the visual nodes. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network that uses the Ember EM250 chip.
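The lifting formulation the visual nodes rely on can be illustrated with a single Haar lifting level; transmitting only the LL band then gives the base-station an approximate image, as described above. The Haar kernel and the single decomposition level are illustrative choices, not necessarily the node's configuration.

```python
# One level of a Haar lifting-scheme DWT, the kind of in-place transform a
# low-power visual node can run before transmission.
import numpy as np

def haar_lift(x):
    """One lifting step along the last axis; the last axis must have even size."""
    x = np.asarray(x, dtype=float)
    even, odd = x[..., 0::2].copy(), x[..., 1::2].copy()
    odd -= even              # predict step: detail coefficients
    even += odd / 2.0        # update step: approximation preserves the local mean
    return even, odd

def haar_lift_2d(image):
    """Single 2-D decomposition level: rows first, then columns of each band."""
    lo, hi = haar_lift(image)
    ll, lh = haar_lift(lo.T)
    hl, hh = haar_lift(hi.T)
    return ll.T, (lh.T, hl.T, hh.T)

if __name__ == "__main__":
    frame = np.random.rand(64, 64)
    ll, (lh, hl, hh) = haar_lift_2d(frame)
    print(ll.shape, lh.shape)   # transmitting only ll gives a half-resolution preview
```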
Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni
2006-08-01
Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on the blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
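For context, the classical (2, 2) construction that halftone visual cryptography builds on can be sketched as follows; the halftoning and void-and-cluster steps of the proposed method are not reproduced here.

```python
# (2, 2) visual cryptography sketch: each secret pixel becomes a 2x2 block in
# two shares; stacking the shares (OR of black dots) reveals the secret.
import numpy as np

PATTERNS = np.array([[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0],
                     [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]])  # two black dots each

def make_shares(secret, seed=None):
    """secret: 2-D binary array (1 = black). Returns two random-looking shares."""
    rng = np.random.default_rng(seed)
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(len(PATTERNS))].reshape(2, 2)
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # Black secret pixel -> complementary block, white -> identical block.
            s2[2*i:2*i+2, 2*j:2*j+2] = 1 - p if secret[i, j] else p
    return s1, s2

if __name__ == "__main__":
    secret = (np.random.rand(16, 16) > 0.5).astype(np.uint8)
    a, b = make_shares(secret, seed=0)
    stacked = np.maximum(a, b)   # stacking transparencies = OR of black dots
    print(stacked.mean())        # black pixels give solid blocks, white stay half black
```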
Using component technologies for web based wavelet enhanced mammographic image visualization.
Sakellaropoulos, P; Costaridou, L; Panayiotakis, G
2000-01-01
The poor contrast detectability of mammography can be dealt with by domain specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access the tool was redesigned by exploring component technologies, enabling the integration of stand alone domain specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real time wavelet based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
Experimental research of flow servo-valve
NASA Astrophysics Data System (ADS)
Takosoglu, Jakub
Positional control of pneumatic drives is particularly important in pneumatic systems. Several methods of positioning pneumatic cylinders for changeover and tracking control are known. The choking method is the most amenable to further development and has the greatest potential. An optimal and effective method, particularly for pneumatic drives, has long been sought, and sophisticated control systems with algorithms based on artificial intelligence methods are designed for this purpose. In order to design the control algorithm, knowledge of the real parameters of the servo-valves used in the control systems of electro-pneumatic servo-drives is required. The paper presents experimental research on a flow servo-valve.
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
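The Sum-Modified-Laplacian measure used above in the high-frequency fusion rule can be sketched as below; the step and window sizes are typical choices rather than the authors' settings, and the wrap-around border handling is a simplification.

```python
# Sum-Modified-Laplacian (SML) focus measure and a choose-max fusion rule.
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img, step=1, window=5):
    img = np.asarray(img, dtype=float)
    ml = (np.abs(2 * img - np.roll(img, step, axis=1) - np.roll(img, -step, axis=1)) +
          np.abs(2 * img - np.roll(img, step, axis=0) - np.roll(img, -step, axis=0)))
    return uniform_filter(ml, size=window) * window * window   # windowed sum of ML

def fuse_high_freq(coef_a, coef_b, **kw):
    """Choose, per coefficient, the source with the larger SML response."""
    return np.where(sml(coef_a, **kw) >= sml(coef_b, **kw), coef_a, coef_b)

if __name__ == "__main__":
    a, b = np.random.rand(64, 64), np.random.rand(64, 64)
    print(fuse_high_freq(a, b).shape)
```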
Active-passive hybrid piezoelectric actuators for high-precision hard disk drive servo systems
NASA Astrophysics Data System (ADS)
Chan, Kwong Wah; Liao, Wei-Hsin
2006-03-01
Positioning precision is crucial to today's increasingly high-speed, high-capacity, high data density, and miniaturized hard disk drives (HDDs). The demand for higher-bandwidth servo systems that can quickly and precisely position the read/write head at a high track density is becoming more pressing. Recently, the idea of applying dual-stage actuators to track servo systems has been studied. Push-pull piezoelectric actuated devices have been developed as micro-actuators for fine and fast positioning, while the voice coil motor performs large-range but coarse seeking. However, the current dual-stage actuator design uses piezoelectric patches only, without passive damping. In this paper, we propose a dual-stage servo system using enhanced active-passive hybrid piezoelectric actuators. The proposed actuators will improve on existing dual-stage actuators in precision and shock resistance, due to the incorporation of passive damping in the design. We aim to develop this hybrid servo system not only to increase the speed of track seeking but also to improve the precision of track following servos in HDDs. New piezoelectrically actuated suspensions with passive damping have been designed and fabricated. In order to evaluate positioning and track following performance of the dual-stage track servo system, experimental efforts are carried out to implement the synthesized active-passive suspension structure with enhanced piezoelectric actuators using a composite nonlinear feedback controller.
Visual Motion Perception and Visual Attentive Processes.
1988-04-01
Visual Motion Perception and Visual Attentive Processes, George Sperling, New York University, grant AFOSR 85-0364. Related publications include: Sperling, HIPS: A Unix-based image processing system, Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347 (HIPS is the Human Information Processing Laboratory's Image Processing System); and van Santen, Jan P. H., and George Sperling, Elaborated Reichardt detectors, Journal of the Optical ...
A Quasiphysics Intelligent Model for a Long Range Fast Tool Servo
Liu, Qiang; Zhou, Xiaoqin; Lin, Jieqiong; Xu, Pengzi; Zhu, Zhiwei
2013-01-01
Accurately modeling the dynamic behaviors of a fast tool servo (FTS) is one of the key issues in the ultraprecision positioning of the cutting tool. Herein, a quasiphysics intelligent model (QPIM) integrating a linear physics model (LPM) and a radial basis function (RBF) based neural model (NM) is developed to accurately describe the dynamic behaviors of a voice coil motor (VCM) actuated long range fast tool servo (LFTS). To identify the parameters of the LPM, a novel Opposition-based Self-adaptive Replacement Differential Evolution (OSaRDE) algorithm is proposed, which has been shown to converge faster without compromising solution quality and to outperform similar evolutionary algorithms considered for comparison. The modeling errors of the LPM and the QPIM are investigated experimentally. The modeling error of the LPM exhibits an obvious trend component, about ±1.15% of the full-span range, verifying the efficiency of the proposed OSaRDE algorithm for system identification. As for the QPIM, the trend component in the residual error of the LPM is well suppressed, and the error of the QPIM remains at the noise level. All the results verify the efficiency and superiority of the proposed modeling and identification approaches. PMID:24163627
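The quasi-physics idea, a linear physics-style model plus an RBF network that absorbs the residual trend, can be sketched as follows; plain least squares replaces the OSaRDE identification described above, and the data are synthetic.

```python
# Quasi-physics model sketch: linear model for the physics part, Gaussian RBF
# network fitted to the residual trend the linear model misses.
import numpy as np

def fit_lpm(X, y):
    """Linear physics-style model y ~ X @ w + b via least squares."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.hstack([Xq, np.ones((len(Xq), 1))]) @ w

def fit_rbf(X, r, n_centers=20, gamma=5.0, seed=0):
    """Gaussian RBF network fitted to the residual r."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    def phi(Xq):
        d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    w, *_ = np.linalg.lstsq(phi(X), r, rcond=None)
    return lambda Xq: phi(Xq) @ w

if __name__ == "__main__":
    X = np.linspace(0, 1, 200)[:, None]                 # e.g. drive command samples
    y = 3.0 * X[:, 0] + 0.3 * np.sin(8 * X[:, 0])       # linear part + nonlinear trend
    lpm = fit_lpm(X, y)
    rbf = fit_rbf(X, y - lpm(X))                        # learn the residual trend
    qpim = lambda Xq: lpm(Xq) + rbf(Xq)                 # combined quasi-physics model
    print(np.abs(y - qpim(X)).max())
```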
Perceptual asymmetries in greyscales: object-based versus space-based influences.
Thomas, Nicole A; Elias, Lorin J
2012-05-01
Neurologically normal individuals exhibit leftward spatial biases, resulting from object- and space-based biases; however, their relative contributions to the overall bias remain unknown. Relative position within the display has not often been considered, with similar spatial conditions typically being collapsed together. Study 1 used the greyscales task to investigate the influence of relative position and object- and space-based contributions. One image in each greyscale pair was shifted towards the left or the right. A leftward object-based bias moderated by a bias to the centre was expected. Results confirmed this, as a left object-based bias occurred in the right visual field, where the left side of the greyscale pairs was located in the centre visual field. Further, only lower visual field images exhibited a significant left bias in the left visual field. The left bias was also stronger when images were partially overlapping in the right visual field, demonstrating the importance of examining proximity. The second study examined whether object-based biases were stronger when actual objects, with directional lighting biases, were used. Direction of luminosity was congruent or incongruent with spatial location. A stronger object-based bias emerged overall; however, a leftward bias was seen in congruent conditions and a rightward bias was seen in incongruent conditions. In conditions with significant biases, the lower visual field image was chosen most often. Results show that object- and space-based biases both contribute; however, stimulus type allows either space- or object-based biases to be stronger. A lower visual field bias also interacts with these biases, leading the left bias to be eliminated under certain conditions. The complex interaction occurring between frame of reference and visual field makes spatial location extremely important in determining the strength of the leftward bias. Copyright © 2010 Elsevier Srl. All rights reserved.
A neotropical Miocene pollen database employing image-based search and semantic modeling
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren
2014-01-01
• Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis has many advantages over pixel-based methods, and it is currently a research hotspot. Obtaining image objects by multi-scale image segmentation is essential for carrying out object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to implement and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented segmentation (or over-segmentation) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important; some specific targets or target groups with particular features deserve more attention than the others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and typical feature extraction methods are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, in which each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, owing to the visual saliency model, the strength of the constraint on local and macroscopic characteristics can be controlled for different objects during segmentation. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and enables priority control of the salient objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.
Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.
Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai
2018-06-01
Landmark retrieval returns a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matches. However, the visual content of social images is highly diverse within many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both visual content and text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation between the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm to leverage the multimodal contents of social images for landmark retrieval, which integrates feature refinement and a landmark classifier with multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low rank matrix recovery, and multimodal classification combined with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result and the semantic consistency between the visual content and text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach as compared to existing methods.
Froeling, Vera; Heimann, Uwe; Huebner, Ralf-Harto; Kroencke, Thomas J; Maurer, Martin H; Doellinger, Felix; Geisel, Dominik; Hamm, Bernd; Brenner, Winfried; Schreiter, Nils F
2015-07-01
To evaluate the utility of attenuation correction (AC) of V/P SPECT images for patients with pulmonary emphysema. Twenty-one patients (mean age 67.6 years) with pulmonary emphysema who underwent V/P SPECT/CT were included. AC/non-AC V/P SPECT images were compared visually and semiquantitatively. Visual comparison of AC/non-AC images was based on a 5-point Likert scale. Semiquantitative comparison assessed absolute counts per lung (aCpLu) and lung lobe (aCpLo) for AC/non-AC images using software-based analysis; percentage counts (PC = (aCpLo/aCpLu) × 100) were calculated. Correlation between AC/non-AC V/P SPECT images was analyzed using Spearman's rho correlation coefficient; differences were tested for significance with the Wilcoxon rank sum test. Visual analysis revealed high conformity for AC and non-AC V/P SPECT images. Semiquantitative analysis of PC in AC/non-AC images had an excellent correlation and showed no significant differences in perfusion (ρ = 0.986) or ventilation (ρ = 0.979, p = 0.809) SPECT/CT images. AC of V/P SPECT images for lung lobe-based function imaging in patients with pulmonary emphysema does not improve visual or semiquantitative image analysis.
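The semiquantitative comparison can be reproduced in outline as below, with synthetic lobe counts standing in for the patient data; the three-lobe split, the correlated noise model and the paired tests only loosely mirror the analysis described above.

```python
# Percentage counts per lobe, compared for AC vs. non-AC images with
# Spearman's rho and a Wilcoxon test; all counts are synthetic.
import numpy as np
from scipy.stats import spearmanr, wilcoxon

def percentage_counts(counts_per_lobe):
    """PC = (counts per lobe / counts per lung) * 100, for the lobes of one lung."""
    counts = np.asarray(counts_per_lobe, dtype=float)
    return 100.0 * counts / counts.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ac = rng.integers(5_000, 50_000, size=(21, 3)).astype(float)   # 21 patients, 3 lobes
    non_ac = ac * rng.uniform(0.9, 1.1, size=ac.shape)             # correlated non-AC counts
    pc_ac = np.array([percentage_counts(row) for row in ac]).ravel()
    pc_non = np.array([percentage_counts(row) for row in non_ac]).ravel()
    rho, _ = spearmanr(pc_ac, pc_non)
    _, p = wilcoxon(pc_ac, pc_non)
    print(f"Spearman rho = {rho:.3f}, Wilcoxon p = {p:.3f}")
```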
Li, Qiang; Liu, Hao-Li; Chen, Wen-Shiang
2013-01-01
Previous studies developed ultrasound temperature-imaging methods based on changes in backscattered energy (CBE) to monitor variations in temperature during hyperthermia. In conventional CBE imaging, tracking and compensation of the echo shift due to temperature increase need to be done. Moreover, the CBE image does not enable visualization of the temperature distribution in tissues during nonuniform heating, which limits its clinical application in guidance of tissue ablation treatment. In this study, we investigated a CBE imaging method based on the sliding window technique and the polynomial approximation of the integrated CBE (ICBEpa image) to overcome the difficulties of conventional CBE imaging. We conducted experiments with tissue samples of pork tenderloin ablated by microwave irradiation to validate the feasibility of the proposed method. During ablation, the raw backscattered signals were acquired using an ultrasound scanner for B-mode and ICBEpa imaging. The experimental results showed that the proposed ICBEpa image can visualize the temperature distribution in a tissue with a very good contrast. Moreover, tracking and compensation of the echo shift were not necessary when using the ICBEpa image to visualize the temperature profile. The experimental findings suggested that the ICBEpa image, a new CBE imaging method, has a great potential in CBE-based imaging of hyperthermia and other thermal therapies. PMID:24260041
Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging
NASA Astrophysics Data System (ADS)
Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.
2013-02-01
In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time. The burden on physicians' disease-finding efforts is thus substantial. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range images of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.
Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa
2013-01-01
Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
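A common contrast-to-noise ratio (CNR) definition of the kind used for the objective comparison is sketched below; the specific region definitions of the study are not reproduced, and the vessel and background masks are assumed inputs.

```python
# CNR = |mean(vessel) - mean(background)| / std(background), on given masks.
import numpy as np

def cnr(image, vessel_mask, background_mask):
    img = np.asarray(image, dtype=float)
    vessel = img[vessel_mask]
    background = img[background_mask]
    return np.abs(vessel.mean() - background.mean()) / background.std()

if __name__ == "__main__":
    img = np.random.normal(100, 10, (256, 256))
    vessel_mask = np.zeros(img.shape, dtype=bool)
    vessel_mask[100:110, :] = True               # fake vessel band
    img[vessel_mask] += 30                       # brighten it above the background
    print(round(cnr(img, vessel_mask, ~vessel_mask), 2))   # roughly 3
```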
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual Words (BoWs) representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
Parts-based stereoscopic image assessment by learning binocular manifold color visual properties
NASA Astrophysics Data System (ADS)
Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi
2016-11-01
Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. Actually, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.
Thomas, K A; Burr, R
1999-06-01
Incubator thermal environments produced by skin versus air servo-control were compared. Infant abdominal skin and incubator air temperatures were recorded from 18 infants in skin servo-control and 14 infants in air servo-control (26- to 29-week gestational age, 14 +/- 2 days postnatal age) for 24 hours. Differences in incubator and infant temperature, neutral thermal environment (NTE) maintenance, and infant and incubator circadian rhythm were examined using analysis of variance and scatterplots. Skin servo-control resulted in more variable air temperature, yet more stable infant temperature, and more time within the NTE. Circadian rhythm of both infant and incubator temperature differed by control mode and the relationship between incubator and infant temperature rhythms was a function of control mode. The differences between incubator control modes extend beyond temperature stability and maintenance of NTE. Circadian rhythm of incubator and infant temperatures is influenced by incubator control.
Zhu, Zhiwei; To, Suet; Zhang, Shaojian
2015-09-01
The inherent residual tool marks (RTM) with particular patterns highly affect the optical functions of the generated freeform optics in fast tool servo or slow tool servo (FTS/STS) diamond turning. In the present study, a novel biaxial servo assisted fly cutting (BSFC) method is developed for flexible control of the RTM so that it becomes a functional micro/nanotexture in freeform optics generation, which is generally hard to achieve in FTS/STS diamond turning. In the BSFC system, biaxial servo motions along the z-axis and side-feeding directions are mainly adopted for primary surface generation and RTM control, respectively. Active control of the RTM from two aspects, namely elimination of undesired effects and effective functionalization, is experimentally demonstrated by fabricating a typical F-theta freeform surface with scattering homogenization and two functional microstructures with imposition of secondary phase gratings integrating both reflective and diffractive functions.
Position feedback system for volume holographic storage media
Hays, Nathan J [San Francisco, CA; Henson, James A [Morgan Hill, CA; Carpenter, Christopher M [Sunnyvale, CA; Akin, Jr William R. [Morgan Hill, CA; Ehrlich, Richard M [Saratoga, CA; Beazley, Lance D [San Jose, CA
1998-07-07
A method of holographic recording in a photorefractive medium wherein stored holograms may be retrieved with maximum signal-to-noise ratio (SNR) is disclosed. A plurality of servo blocks containing position feedback information is recorded in the crystal and made non-erasable by heating the crystal. The servo blocks are recorded at specific increments, either angular or frequency, depending on whether wavelength or angular multiplexing is applied, and each servo block is defined by one of five patterns. Data pages are then recorded at positions or wavelengths enabling each data page to be subsequently reconstructed with servo patterns which provide position feedback information. The method of recording data pages and servo blocks is consistent with conventional practices. In addition, the recording system also includes components (e.g. a voice coil motor) which respond to position feedback information and adjust the angular position of the reference angle of a reference beam to maximize SNR by reducing crosstalk, thereby improving storage capacity.
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.
Visual Exploration of Genetic Association with Voxel-based Imaging Phenotypes in an MCI/AD Study
Kim, Sungeun; Shen, Li; Saykin, Andrew J.; West, John D.
2010-01-01
Neuroimaging genomics is a new transdisciplinary research field, which aims to examine genetic effects on brain via integrated analyses of high throughput neuroimaging and genomic data. We report our recent work on (1) developing an imaging genomic browsing system that allows for whole genome and entire brain analyses based on visual exploration and (2) applying the system to the imaging genomic analysis of an existing MCI/AD cohort. Voxel-based morphometry is used to define imaging phenotypes. ANCOVA is employed to evaluate the effect of the interaction of genotypes and diagnosis in relation to imaging phenotypes while controlling for relevant covariates. Encouraging experimental results suggest that the proposed system has substantial potential for enabling discovery of imaging genomic associations through visual evaluation and for localizing candidate imaging regions and genomic regions for refined statistical modeling. PMID:19963597
75 FR 68548 - Airworthiness Directives; Airbus Model A318, A319, A320, and A321 Series Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
...: One case of elevator servo-control disconnection has been experienced on an aeroplane of the A320 family. Investigation has revealed that the failure occurred at the servo-control rod eye-end. Further to... servo-control rod eye-ends. In several cases, both actuators of the same elevator surface were affected...
HYDRAULIC SERVO CONTROL MECHANISM
Hussey, R.B.; Gottsche, M.J. Jr.
1963-09-17
A hydraulic servo control mechanism of compact construction and low fluid requirements is described. The mechanism consists of a main hydraulic piston, comprising the drive output, which is connected mechanically for feedback purposes to a servo control piston. A control sleeve having control slots for the system encloses the servo piston, which acts to cover or uncover the slots as a means of controlling the operation of the system. This operation permits only a small amount of fluid to regulate the operation of the mechanism, which, as a result, is compact and relatively light. This mechanism is particularly adaptable to the drive and control of control rods in nuclear reactors. (auth)
NASA Astrophysics Data System (ADS)
Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William
2009-02-01
Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.
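The MAP figures quoted above come from standard ranked-retrieval evaluation; a minimal implementation is sketched below with toy runs and relevance judgments standing in for the ImageCLEF data.

```python
# Mean average precision (MAP) over a set of queries from ranked result lists.
import numpy as np

def average_precision(ranked_ids, relevant_ids):
    relevant = set(relevant_ids)
    if not relevant:
        return 0.0
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranked_ids, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank       # precision at each relevant hit
    return precision_sum / len(relevant)

def mean_average_precision(runs, qrels):
    """runs: {query: ranked doc ids}, qrels: {query: relevant doc ids}."""
    return float(np.mean([average_precision(runs[q], qrels.get(q, [])) for q in runs]))

if __name__ == "__main__":
    runs = {"q1": ["d3", "d1", "d7", "d2"], "q2": ["d5", "d4", "d9"]}
    qrels = {"q1": ["d1", "d2"], "q2": ["d9"]}
    print(round(mean_average_precision(runs, qrels), 3))
```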
NASA Astrophysics Data System (ADS)
Wihardi, Y.; Setiawan, W.; Nugraha, E.
2018-01-01
In this research we build a content-based image retrieval system (CBIRS) based on a learned distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradient (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiments arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
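A hedged sketch of the HoG-plus-LDA retrieval pipeline is given below: HoG descriptors are projected into an LDA discriminant space learned from labelled images and results are ranked by Euclidean distance in that space. The dataset, labels and parameter values are placeholders, and the sketch omits the depiction-invariance aspects discussed above.

```python
# HoG features + LDA projection as a learned similarity space for retrieval.
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hog_features(images):
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])

def build_index(train_images, train_labels):
    feats = hog_features(train_images)
    lda = LinearDiscriminantAnalysis().fit(feats, train_labels)
    return lda, lda.transform(feats)

def retrieve(lda, index, query_image, top_k=5):
    q = lda.transform(hog_features([query_image]))
    dists = np.linalg.norm(index - q, axis=1)
    return np.argsort(dists)[:top_k]          # indices of the closest database images

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.random((30, 64, 64))          # placeholder grayscale images
    labels = rng.integers(0, 3, size=30)      # placeholder class labels
    lda, index = build_index(train, labels)
    print(retrieve(lda, index, train[0]))
```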
Remote sensing image ship target detection method based on visual attention model
NASA Astrophysics Data System (ADS)
Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong
2017-11-01
The traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. In view of this, a method for ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improves the detection efficiency for ship targets in remote sensing images.
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which is highly autonomous, highly precise and not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision technology. In China, with the development of many types of UAVs and the progress of the lunar exploration program into its third phase, there has been significant progress in the study of visual navigation. This paper reviews the development of navigation based on computer vision technology in the field of UAV navigation research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters: parameters including UAV attitude, position and velocity can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantaneous matching images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance: there are many ways to achieve obstacle avoidance in UAV navigation; methods based on computer vision technology, including feature matching, template matching and image-frame analysis, are mainly introduced. (3) Target tracking and positioning: using the acquired images, the UAV position is calculated with optical flow methods, the MeanShift algorithm, the CamShift algorithm, Kalman filtering and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; such systems are applied to rapid response tasks. (2) Distributed-network visual systems, in which several discrete image acquisition sensors at different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which combine image sensors with external observers to make up for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frame rates, low processing efficiency and strong noise. Finally, the difficulties of navigation based on computer vision technology in practical applications are briefly discussed: (1) because of the huge image-processing workload, the real-time performance of the system is poor; (2) because of large environmental influences, the anti-interference ability of the system is poor; (3) because the system can only work in particular environments, its adaptability is poor.
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. Block-based multi-focus image fusion methods, however, often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is therefore put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM on quality assessment of Gaussian defocus-blurred images. In addition, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and that it effectively preserves the undistorted edge details in the focused regions of the source images.
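For orientation, the sketch below shows the skeleton of a block-based multi-focus fusion with a fixed block size and a simple variance focus measure; the method in the paper differs in that it optimizes the block size with PSO and uses LUE-SSIM rather than variance as the selection criterion.

```python
import numpy as np

def block_fusion(img_a, img_b, block=16):
    """For each block, keep the source block with the higher local variance (a simple focus measure)."""
    fused = np.empty_like(img_a)
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]
            b = img_b[y:y + block, x:x + block]
            fused[y:y + block, x:x + block] = a if a.var() >= b.var() else b
    return fused
```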
Imaging Stem Cells Implanted in Infarcted Myocardium
Zhou, Rong; Acton, Paul D.; Ferrari, Victor A.
2008-01-01
Stem cell–based cellular cardiomyoplasty represents a promising therapy for myocardial infarction. Noninvasive imaging techniques would allow the evaluation of survival, migration, and differentiation status of implanted stem cells in the same subject over time. This review describes methods for cell visualization using several corresponding noninvasive imaging modalities, including magnetic resonance imaging, positron emission tomography, single-photon emission computed tomography, and bioluminescent imaging. Reporter-based cell visualization is compared with direct cell labeling for short- and long-term cell tracking. PMID:17112999
Strauss, Mario; Lueders, Christian; Strauss, Gero; Stopp, Sebastian; Shi, Jiaxi; Lueth, Tim C
2008-01-01
While removing bone tissue of the mastoid, the facial nerve is at risk of being injured. In this contribution, a model for nerve visualization in preoperative image data based on intraoperatively acquired EMG signals is proposed. A neuromonitor can assist the surgeon in locating and preserving the nerve. With the proposed model, the acquired EMG signals can be spatially related to the patient and to the image data. During navigation, the detected nerve course is visualized and hence permanently available for assessing the situs.
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images and contain the information of both. Such fusion images help observers understand the multichannel imagery comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, perception of the scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. Extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods, and infrared and low-light-level color fusion images are obtained based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images produced by our algorithm not only improve the target detection rate but also retain rich natural information of the scenes.
The usability of ventilators: a comparative evaluation of use safety and user experience.
Morita, Plinio P; Weinstein, Peter B; Flewwelling, Christopher J; Bañez, Carleene A; Chiu, Tabitha A; Iannuzzi, Mario; Patel, Aastha H; Shier, Ashleigh P; Cafazzo, Joseph A
2016-08-20
The design complexity of critical care ventilators (CCVs) can lead to use errors and patient harm. In this study, we present the results of a comparison of four CCVs from market leaders, using a rigorous methodology for the evaluation of use safety and user experience of medical devices. We carried out a comparative usability study of four CCVs: Hamilton G5, Puritan Bennett 980, Maquet SERVO-U, and Dräger Evita V500. Forty-eight critical care respiratory therapists participated in this fully counterbalanced, repeated measures study. Participants completed seven clinical scenarios composed of 16 tasks on each ventilator. Use safety was measured by percentage of tasks with use errors or close calls (UE/CCs). User experience was measured by system usability and workload metrics, using the Post-Study System Usability Questionnaire (PSSUQ) and the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Nine of 18 post hoc contrasts between pairs of ventilators were significant after Bonferroni correction, with effect sizes between 0.4 and 1.09 (Cohen's d). There were significantly fewer UE/CCs with SERVO-U when compared to G5 (p = 0.044) and V500 (p = 0.020). Participants reported higher system usability for G5 when compared to PB980 (p = 0.035) and higher system usability for SERVO-U when compared to G5 (p < 0.001), PB980 (p < 0.001), and V500 (p < 0.001). Participants reported lower workload for G5 when compared to PB980 (p < 0.001) and lower workload for SERVO-U when compared to PB980 (p < 0.001) and V500 (p < 0.001). G5 scored better on two of nine possible comparisons; SERVO-U scored better on seven of nine possible comparisons. Aspects influencing participants' performance and perception include the low sensitivity of G5's touchscreen and the positive effect from the quality of SERVO-U's user interface design. This study provides empirical evidence of how four ventilators from market leaders compare and highlights the importance of medical technology design. Within the boundaries of this study, we can infer that SERVO-U demonstrated the highest levels of use safety and user experience, followed by G5. Based on qualitative data, differences in outcomes could be explained by interaction design, quality of hardware components used in manufacturing, and influence of consumer product technology on users' expectations.
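As a reminder of the two statistics reported above, the sketch below computes a pooled-SD Cohen's d and the Bonferroni-adjusted significance threshold for the 18 post hoc contrasts; it is a generic illustration, not the exact repeated-measures computation used in the study.

```python
import numpy as np

def cohens_d(x, y):
    """Effect size between two score samples using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    pooled = np.sqrt(((len(x) - 1) * x.var(ddof=1) + (len(y) - 1) * y.var(ddof=1))
                     / (len(x) + len(y) - 2))
    return (x.mean() - y.mean()) / pooled

alpha_bonferroni = 0.05 / 18   # threshold applied to the 18 post hoc contrasts
```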
Assessment of visual landscape quality using IKONOS imagery.
Ozkan, Ulas Yunus
2014-07-01
The assessment of visual landscape quality is of importance to the management of urban woodlands. Satellite remote sensing may be used for this purpose as a substitute for traditional survey techniques that are both labour-intensive and time-consuming. This study examines the association between the quality of the perceived visual landscape in urban woodlands and texture measures extracted from IKONOS satellite data, which features 4-m spatial resolution and four spectral bands. The study was conducted in the woodlands of Istanbul (the most important element of the urban mosaic) lying along both shores of the Bosporus Strait. The visual quality assessment applied in this study is based on the perceptual approach and was performed via a survey of expressed preferences. For this purpose, representative photographs of real scenery were used to elicit observers' preferences. A slide show comprising 33 images was presented to a group of 153 volunteers (all undergraduate students), who were asked to rate the visual quality of each on a 10-point scale (1 for very low visual quality, 10 for very high). Average visual quality scores were calculated for each landscape. Texture measures were acquired using two methods: pixel-based and object-based. Pixel-based texture measures were extracted from the first principal component (PC1) image. Object-based texture measures were extracted using the original four bands. The association between image texture measures and perceived visual landscape quality was tested via Pearson's correlation coefficient. The analysis found a strong linear association between image texture measures and visual quality. The highest correlation coefficient was calculated between the standard deviation of gray levels (SDGL), one of the pixel-based texture measures, and visual quality (r = 0.82, P < 0.05). The results showed that the perceived visual quality of urban woodland landscapes can be estimated by using texture measures extracted from satellite data in combination with appropriate modelling techniques.
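The two quantities at the core of the analysis, the SDGL texture measure and its Pearson correlation with the survey scores, are simple to compute; the sketch below illustrates them (variable names are placeholders, not from the study).

```python
import numpy as np
from scipy.stats import pearsonr

def sdgl(gray):
    """Standard deviation of gray levels for one scene (pixel-based texture measure)."""
    return float(np.std(gray.astype(np.float64)))

# texture = [sdgl(pc1_patch) for pc1_patch in scenes]      # one value per landscape scene
# r, p = pearsonr(texture, mean_visual_quality_scores)     # association with perceived quality
```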
Image simulation and surface reconstruction of undercut features in atomic force microscopy
NASA Astrophysics Data System (ADS)
Qian, Xiaoping; Villarrubia, John; Tian, Fenglei; Dixson, Ronald
2007-03-01
CD-AFMs (critical dimension atomic force microscopes) are instruments with servo-control of the tip in more than one direction. With appropriately "boot-shaped" or flared tips, such instruments can image vertical or even undercut features. As with any AFM, the image is a dilation of the sample shape with the tip shape. Accurate extraction of the CD requires a correction for the tip effect. Analytical methods to correct images for the tip shape have been available for some time for the traditional (vertical feedback only) AFMs, but were until recently unavailable for instruments with multi-dimensional feedback. Dahlen et al. [J. Vac. Sci. Technol. B23, pp. 2297-2303, (2005)] recently introduced a swept-volume approach, implemented for 2-dimensional (2D) feedback. It permits image simulation and sample reconstruction, techniques previously developed for the traditional instruments, to be extended for the newer tools. We have introduced [X. Qian and J. S. Villarrubia, Ultramicroscopy, in press] an alternative dexel-based method that does the same in either 2D or 3D. This paper describes the application of this method to sample shapes of interest in semiconductor manufacturing. When the tip shape is known (e.g., by prior measurement using a tip characterizer), a 3D sample surface may be reconstructed from its 3D image. Basing the CD measurement upon such a reconstruction is shown here to remove some measurement artifacts that are not removed (or are incompletely removed) by the existing measurement procedures.
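For the conventional heightmap case (vertical feedback only), image simulation and reconstruction reduce to grayscale morphological dilation and erosion with the reflected tip; the sketch below illustrates that special case and is not the dexel-based method of the paper, which additionally handles undercut (multi-valued) surfaces. The tip array is assumed to be a height map centred on its apex.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def simulate_image(surface, tip):
    """Heightmap AFM image = dilation of the sample surface by the reflected tip."""
    p = -tip[::-1, ::-1]                       # reflected tip P(x) = -T(-x)
    return grey_dilation(surface, structure=p)

def reconstruct_surface(image, tip):
    """Erosion of the image by the reflected tip gives an upper bound on the true surface."""
    p = -tip[::-1, ::-1]
    return grey_erosion(image, structure=p)
```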
Electrical servo actuator bracket. [fuel control valves on jet engines
NASA Technical Reports Server (NTRS)
Sawyer, R. V. (Inventor)
1981-01-01
An electrical servo actuator is mounted on a support arm which is allowed to pivot on a bolt through a fixed mounting bracket. The actuator is pivotally connected to the end of the support arm by a bolt which has an extension allowed to pass through a slot in the fixed mounting bracket. An actuator rod extends from the servo actuator to a crank arm which turns a control shaft. A short linear thrust of the rod pivots the crank arm through about 90 degrees, giving full-on control with the rod contracted into the servo actuator and full-off control when the rod is extended from the actuator. A spring moves the servo actuator and actuator rod toward the control crank arm once the actuator rod is fully extended in the full-off position; this assures turning of the control shaft to the full-off position. A stop bolt and slot are provided to limit pivot motion. Once the rod is fully extended, the spring provides this pivoting motion.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1979-01-01
Design adequacy of the lead-lag compensator of the frequency loop, accuracy checking of the analytical expression for the electrical motor transfer function, and performance evaluation of the speed control servo of the digital tape recorder used on-board the 1976 Viking Mars Orbiters and Voyager 1977 Jupiter-Saturn flyby spacecraft are analyzed. The transfer functions of the most important parts of a simplified frequency loop used for test simulation are described and ten simulation cases are reported. The first four of these cases illustrate the method of selecting the most suitable transfer function for the hysteresis synchronous motor, while the rest verify and determine the servo performance parameters and alternative servo compensation schemes. It is concluded that the linear methods provide a starting point for the final verification/refinement of servo design by nonlinear time response simulation and that the variation of the parameters of the static/dynamic Coulomb friction is as expected in a long-life space mission environment.
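As a generic illustration of the kind of linear frequency-loop analysis described above (not the actual Viking/Voyager recorder parameters, which the abstract does not give), the sketch below builds a first-order motor model in series with a lead-lag compensator and computes the unity-feedback closed-loop step response with SciPy; all numerical values are placeholders.

```python
import numpy as np
from scipy import signal

# Placeholder models: motor K/(tau*s + 1) and lead-lag (T1*s + 1)/(T2*s + 1), T1 > T2.
motor = signal.TransferFunction([1.0], [0.05, 1.0])
lead_lag = signal.TransferFunction([0.10, 1.0], [0.01, 1.0])

# Series connection = polynomial products of numerators and denominators.
num = np.polymul(motor.num, lead_lag.num)
den = np.polymul(motor.den, lead_lag.den)

# Unity-feedback closed loop G/(1 + G); its step response is the linear starting point
# that a nonlinear time-response simulation would later refine.
closed_loop = signal.TransferFunction(num, np.polyadd(den, num))
t, y = signal.step(closed_loop)
```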
Direct drive digital servo press with high parallel control
NASA Astrophysics Data System (ADS)
Murata, Chikara; Yabe, Jun; Endou, Junichi; Hasegawa, Kiyoshi
2013-12-01
The direct drive digital servo press has been developed through university-industry joint research and development since 1998. On the basis of this work, a 4-axis direct drive digital servo press was developed and has been on the market since April 2002. This servo press is composed of one slide supported by four ball screws, and each axis has a linear scale measuring its position with high accuracy at the sub-micrometre level. Each axis is controlled independently by a servo motor and feedback system. The system can maintain a high level of parallelism and high accuracy even under highly eccentric loads. Furthermore, 'full stroke, full power' operation is obtained by using ball screws. Using these features, various new types of press forming and stamping have been developed and put into production. The new stamping and forming methods are introduced, together with a strategy for high-added-value press forming that meets manufacturing needs and the future direction of press forming.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1975-01-01
Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.
ERIC Educational Resources Information Center
Gross, M. Melissa; Wright, Mary C.; Anderson, Olivia S.
2017-01-01
Research on the benefits of visual learning has relied primarily on lecture-based pedagogy, but the potential benefits of combining active learning strategies with visual and verbal materials on learning anatomy has not yet been explored. In this study, the differential effects of text-based and image-based active learning exercises on examination…
A unified framework for image retrieval using keyword and visual features.
Jing, Feng; Li, Mingling; Zhang, Hong-Jiang; Zhang, Bo
2005-07-01
In this paper, a unified image retrieval framework based on both keyword annotations and visual features is proposed. In this framework, a set of statistical models are built based on visual features of a small set of manually labeled images to represent semantic concepts and used to propagate keywords to other unlabeled images. These models are updated periodically when more images implicitly labeled by users become available through relevance feedback. In this sense, the keyword models serve the function of accumulation and memorization of knowledge learned from user-provided relevance feedback. Furthermore, two sets of effective and efficient similarity measures and relevance feedback schemes are proposed for query by keyword scenario and query by image example scenario, respectively. Keyword models are combined with visual features in these schemes. In particular, a new, entropy-based active learning strategy is introduced to improve the efficiency of relevance feedback for query by keyword. Furthermore, a new algorithm is proposed to estimate the keyword features of the search concept for query by image example. It is shown to be more appropriate than two existing relevance feedback algorithms. Experimental results demonstrate the effectiveness of the proposed framework.
Visualization and recommendation of large image collections toward effective sensemaking
NASA Astrophysics Data System (ADS)
Gu, Yi; Wang, Chaoli; Nemiroff, Robert; Kao, David; Parra, Denis
2016-03-01
In our daily lives, images are among the most commonly found data which we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.
L1 adaptive control of uncertain gear transmission servo systems with deadzone nonlinearity.
Zuo, Zongyu; Li, Xiao; Shi, Zhiguang
2015-09-01
This paper deals with the adaptive control problem of Gear Transmission Servo (GTS) systems in the presence of unknown deadzone nonlinearity and viscous friction. A global differential homeomorphism based on a novel differentiable deadzone model is proposed first. Since there exist both matched and unmatched state-dependent unknown nonlinearities, a full-state feedback L1 adaptive controller is constructed to achieve uniformly bounded transient response in addition to steady-state performance. Finally, simulation results are included to show the elimination of limit cycles, in addition to demonstrating the main results in this paper. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
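The differentiable deadzone model itself is not given in the abstract; as a purely illustrative stand-in, the sketch below shows one common way to build a smooth, globally differentiable approximation of a deadzone with breakpoints bl < 0 < br using tanh blending. It is not the model proposed in the paper.

```python
import numpy as np

def smooth_deadzone(u, br, bl, k=50.0):
    """Smooth deadzone: ~0 on [bl, br], ~(u - br) above br, ~(u - bl) below bl.
    Larger k gives a sharper (but still differentiable) transition."""
    right = (u - br) * 0.5 * (1.0 + np.tanh(k * (u - br)))
    left = (u - bl) * 0.5 * (1.0 - np.tanh(k * (u - bl)))
    return right + left
```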
Modelling and Simulation Based on Matlab/Simulink: A Press Mechanism
NASA Astrophysics Data System (ADS)
Halicioglu, R.; Dulger, L. C.; Bozdana, A. T.
2014-03-01
In this study, the design and kinematic analysis of a crank-slider mechanism for a crank press are presented. The crank-slider mechanism is commonly applied in practice in both direct and indirect drive alternatives. Since low cost, flexibility and controllability are becoming increasingly important in many industrial applications, especially in the automotive industry, a crank press with a servo actuator (servo crank press) is taken as the application. The design and kinematic analysis of the representative mechanism are presented, with a geometrical analysis of its inverse kinematics based on the desired slider motion. The mechanism is modelled in the MATLAB/Simulink platform, and the simulation results are presented herein.
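For reference, the closed-form kinematics of an in-line crank-slider (crank radius r, connecting rod length l) are short enough to show directly; the inverse relation is what lets a servo crank press command the crank angle that realizes a desired slider motion profile. The sketch below is a generic textbook formulation, not the specific geometry analysed in the paper (which may include an offset).

```python
import numpy as np

def slider_position(theta, r, l):
    """Forward kinematics: slider distance from the crank centre at crank angle theta."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

def crank_angle(x, r, l):
    """Inverse kinematics (law of cosines): crank angle in [0, pi] for slider position x."""
    return np.arccos((x**2 + r**2 - l**2) / (2.0 * x * r))
```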
Inaba, Hajime; Hosaka, Kazumoto; Yasuda, Masami; Nakajima, Yoshiaki; Iwakuni, Kana; Akamatsu, Daisuke; Okubo, Sho; Kohno, Takuya; Onae, Atsushi; Hong, Feng-Lei
2013-04-08
We propose a novel, high-performance, and practical laser source system for optical clocks. The laser linewidth of a fiber-based frequency comb is reduced by phase locking a comb mode to an ultrastable master laser at 1064 nm with a broad servo bandwidth. A slave laser at 578 nm is successively phase locked to a comb mode at 578 nm with a broad servo bandwidth without any pre-stabilization. Laser frequency characteristics such as spectral linewidth and frequency stability are transferred to the 578-nm slave laser from the 1064-nm master laser. Using the slave laser, we have succeeded in observing the clock transition of (171)Yb atoms confined in an optical lattice with a 20-Hz spectral linewidth.
Perceptually lossless fractal image compression
NASA Astrophysics Data System (ADS)
Lin, Huawu; Venetsanopoulos, Anastasios N.
1996-02-01
According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.
Visual Literacy for Libraries: A Practical, Standards-Based Guide
ERIC Educational Resources Information Center
Brown, Nicole E.; Bussert, Kaila; Hattwig, Denise; Medaille, Ann
2016-01-01
The importance of images and visual media in today's culture is changing what it means to be literate in the 21st century. Digital technologies have made it possible for almost anyone to create and share visual media. Yet the pervasiveness of images and visual media does not necessarily mean that individuals are able to critically view, use, and…
A foreground object features-based stereoscopic image visual comfort assessment model
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.
2014-11-01
Since stereoscopic images can provide observers with both realistic and uncomfortable viewing experiences, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one with the largest average disparity. Second, three visual features of the foreground object (average disparity, average width and spatial complexity) are computed from the perspective of visual attention. However, the object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, the images are divided into four categories on the basis of disparity and width, and, third, four different models are applied to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and achieves a high consistency between objective and subjective visual comfort scores: the Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
Automatic face recognition in HDR imaging
NASA Astrophysics Data System (ADS)
Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.
2014-05-01
The growing popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional, inexpensive LDR displays. Different tone mapping methods can produce completely different renderings, raising several concerns about privacy intrusion. In fact, some visualization methods allow perceptual recognition of the individuals in the scene, while others reveal no identity at all. Given that perceptual recognition may be possible, a natural question is how computer-based recognition performs on tone-mapped images. In this paper, automatic face recognition using sparse representation is tested on images produced by common tone mapping operators applied to HDR images, and its ability to recognize face identity is described. Typical LDR images are used for face recognition training.
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One case where image fusion is particularly useful is the fusion of imagery acquired at multiple levels of focus. Different focus levels create different visual qualities in different regions of the imagery, so fusing them can provide much more visual information to analysts. Multi-focus image fusion benefits users through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly fused. Many no-reference metrics, such as information-theory-based, image-feature-based and structural-similarity-based metrics, have been developed for such comparisons. However, accurate assessment of visual quality is hard to scale and requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
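Two of the quantities mentioned above, the spatial frequency (SF) metric and PSNR, have standard definitions that are easy to state in code; the sketch below follows those common definitions and is offered for orientation rather than as the exact implementation used in the study.

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), the RMS of horizontal and vertical first differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```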
Fuzzy control of small servo motors
NASA Technical Reports Server (NTRS)
Maor, Ron; Jani, Yashvant
1993-01-01
To explore the benefits of fuzzy logic and understand the differences between classical control methods and fuzzy control methods, the Togai InfraLogic applications engineering staff developed and implemented a motor control system for small servo motors. The motor assembly for testing the fuzzy and conventional controllers consists of an RA13M servo motor and an encoder with a range of 4096 counts. An interface card was designed and fabricated to interface the motor assembly and encoder to an IBM PC. The fuzzy-logic-based motor controller was developed using the TILShell and Fuzzy C Development System on an IBM PC. A Proportional-Derivative (PD) type conventional controller was also developed and implemented on the IBM PC to compare its performance with the fuzzy controller. Test cases were defined to include step inputs of 90 and 180 degrees rotation, sine and square wave profiles in the 5 to 20 hertz frequency range, as well as ramp inputs. In this paper we describe our approach to developing both the fuzzy and the PD controller, provide details of the hardware set-up and test cases, and discuss the performance results. In comparison, the fuzzy-logic-based controller handles the non-linearities of the motor assembly very well and provides excellent control over a broad range of parameters. Fuzzy technology, as indicated by our results, possesses inherent adaptive features.
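For readers unfamiliar with the conventional baseline, a discrete PD position loop of the kind compared against the fuzzy controller can be written in a few lines; the gains and sampling period below are placeholders, not values from the paper.

```python
class PDController:
    """Discrete PD controller: command = Kp * error + Kd * d(error)/dt."""
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def update(self, target_counts, measured_counts):
        error = target_counts - measured_counts
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

# Example: a 90-degree step on a 4096-count encoder is a 1024-count setpoint.
pd = PDController(kp=0.8, kd=0.05, dt=0.001)
command = pd.update(target_counts=1024, measured_counts=0)
```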
The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.
Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc
2009-05-06
The influential model on visual information processing by Milner and Goodale (1995) has suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, precise conclusions from these patients with VFA on the selective role of ventral stream structures for shape and orientation perception were difficult. Here, we report patient J.S., who demonstrated VFA after a well circumscribed brain lesion due to stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one side and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and the lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral for the normal flow of shape and of contour information into the ventral stream system allowing to recognize objects.
Design considerations for a servo optical projection system
NASA Astrophysics Data System (ADS)
Nadalsky, Michael; Allen, Daniel; Bien, Joseph
1987-01-01
The present servooptical projection system (SOPS) furnishes 'out-the-window' scenery for a pilot-training flight simulator; attention is given to the parametric tradeoffs made in the SOPS' optical design, as well as to its mechanical packaging and the servonetwork performance of the unit as integrated into a research/training helicopter flight simulator. The final SOPS configuration is a function of scan head design, assembly modularity, image deterioration method, and focal lengths and relative apertures.
NASA Astrophysics Data System (ADS)
Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.
2006-05-01
While vast numbers of image enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1], and the multiscale Retinex algorithm described in Rahman, Jobson, and Woodell [2]. The methodology used in the assessment has been developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm-enhanced images against two baseline conditions: original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm. Observers searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
Creating a classification of image types in the medical literature for visual categorization
NASA Astrophysics Data System (ADS)
Müller, Henning; Kalpathy-Cramer, Jayashree; Demner-Fushman, Dina; Antani, Sameer
2012-02-01
Content-based image retrieval (CBIR) from specialized collections has often been proposed for use in such areas as diagnostic aid, clinical decision support, and teaching. The visual retrieval from broad image collections such as teaching files, the medical literature or web images, by contrast, has not yet reached a high maturity level compared to textual information retrieval. Visual image classification into a relatively small number of classes (20-100), on the other hand, has been shown to deliver good results in several benchmarks. It is, however, currently underused as a basic technology for retrieval tasks, for example, to limit the search space. Most classification schemes for medical images are focused on specific areas and consider mainly the medical image types (modalities), imaged anatomy, and view, and merge them into a single descriptor or classification hierarchy. Furthermore, they often ignore other important image types such as biological images, statistical figures, flowcharts, and diagrams that frequently occur in the biomedical literature. Most of the current classifications have also been created for radiology images, which are not the only types to be taken into account. With Open Access becoming increasingly widespread, particularly in medicine, images from the biomedical literature are more easily available for use. Visual information from these images and knowledge that an image is of a specific type or medical modality could enrich retrieval. This enrichment is hampered by the lack of a commonly agreed image classification scheme. This paper presents a hierarchy for classification of biomedical illustrations with the goal of using it for visual classification and thus as a basis for retrieval. The proposed hierarchy is based on relevant parts of existing terminologies, such as the IRMA code (Image Retrieval in Medical Applications), ad hoc classifications and hierarchies used in imageCLEF (Image retrieval task at the Cross-Language Evaluation Forum) and NLM's (National Library of Medicine) OpenI. Furthermore, mappings to NLM's MeSH (Medical Subject Headings), RSNA's RadLex (Radiological Society of North America, Radiology Lexicon), and the IRMA code are also attempted for relevant image types. Advantages derived from such hierarchical classification for medical image retrieval are being evaluated through benchmarks such as imageCLEF, and R&D systems such as NLM's OpenI. The goal is to extend this hierarchy progressively and (through adding image types occurring in the biomedical literature) to have a terminology for visual image classification based on image types distinguishable by visual means and occurring in the medical open access literature.
Design Optimization and Testing of an Active Core for Sandwich Panels
2009-07-01
decided to employ servo motors as the actuator in this prototype test rather than using the Nitinol spring actuators of the previous report. The servo motors, although heavier than the Nitinol actuators, have several attractive attributes. Firstly, servo motors have excellent response time given that they are completely electrically actuated, whereas in the case of Nitinol actuators the actuation suffers a lag period for the Joule heating to take effect.
NASA Astrophysics Data System (ADS)
Kim, Jin-Hong; Lee, Jun-Seok; Lim, Jungshik; Seo, Jung-Kyo
2009-03-01
The narrow gap distance in the cover-layer-incident near-field recording (NFR) configuration causes a collision problem at the interface between the solid immersion lens and the disk surface. A polymer cover-layer with a smooth surface results in a stable gap servo, while a nanocomposite cover-layer with a high refractive index shows a collision problem during the gap servo test. Even though a dielectric cover-layer, whose surface is rougher than that of the polymer, improves the mechanical properties, an unclear eye pattern due to an unstable gap servo is obtained even after chemical mechanical polishing. Not only a smooth surface but also good mechanical properties of the cover-layer are required for a stable gap servo in NFR.
A disturbance observer-based adaptive control approach for flexure beam nano manipulators.
Zhang, Yangming; Yan, Peng; Zhang, Zhen
2016-01-01
This paper presents a systematic modeling and control methodology for a two-dimensional flexure beam-based servo stage supporting micro/nano manipulations. Compared with conventional mechatronic systems, such systems have major control challenges including cross-axis coupling, dynamical uncertainties, and input saturations, which may have adverse effects on system performance unless effectively eliminated. A novel disturbance observer-based adaptive backstepping-like control approach is developed for high-precision servo manipulation purposes, which effectively accommodates model uncertainties and coupling dynamics. An auxiliary system is also introduced, on top of the proposed control scheme, to compensate for the input saturations. The proposed control architecture is deployed on a custom-designed nano manipulating system featuring a flexure beam structure and voice coil actuators (VCA). Real-time experiments on various manipulation tasks, such as trajectory/contour tracking, demonstrate precision errors of less than 1%. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based visual servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an extended Kalman filter (EKF). Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and of the incremental control strategy for the robotic manipulator.
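As a generic illustration of an incremental, speed-limited joint update of the kind described (a sketch using a damped least-squares step, not the paper's exact inverse-kinematics formulation), one control step could look like the following; all symbols are placeholders.

```python
import numpy as np

def incremental_joint_step(q, jacobian, x_current, x_desired, dt, qdot_max, damping=0.01):
    """One incremental move toward the predicted end-effector pose, clamped to joint speed limits."""
    dx = x_desired - x_current                                  # task-space error (e.g. 6-vector)
    J = jacobian                                                # manipulator Jacobian at q
    dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(J.shape[0]), dx)
    dq = np.clip(dq, -qdot_max * dt, qdot_max * dt)             # respect joint speed limits
    return q + dq
```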
Image fusion for visualization of hepatic vasculature and tumors
NASA Astrophysics Data System (ADS)
Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.
1995-05-01
We have developed segmentation and simultaneous display techniques to facilitate the visualization of the three-dimensional spatial relationships between organ structures and organ vasculature, concentrating on visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximum intensity projection (MIP) algorithms are used for data visualization. To extract the liver from the series of images accurately and efficiently, we have developed a user-friendly interactive program with deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures: adjacent contours are aligned and fitted with a Bézier surface to yield a smooth surface. Visualization of the vascular structures, the portal and hepatic veins, is achieved by applying the MIP technique to the extracted liver volume. To integrate the extracted structures, the surface-rendered and MIP images are aligned, and a color table is designed for simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, portal veins, hepatic veins, and hepatic tumors can be inspected simultaneously and their spatial relationships more easily perceived. The proposed technique will be useful for visualization of both hepatic neoplasms and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
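A maximum intensity projection reduces to a per-ray maximum over the volume; for an axis-aligned view of a segmented sub-volume it is a one-liner, as in the sketch below (variable names are illustrative only).

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Axis-aligned MIP: keep the brightest voxel along each ray through the volume."""
    return volume.max(axis=axis)

# axial_mip = maximum_intensity_projection(segmented_liver_volume, axis=2)
# Bright, contrast-enhanced portal and hepatic veins dominate the projection.
```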
Extraction of composite visual objects from audiovisual materials
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal
1999-08-01
An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.
Novel AC Servo Rotating and Linear Composite Driving Device for Plastic Forming Equipment
NASA Astrophysics Data System (ADS)
Liang, Jin-Tao; Zhao, Sheng-Dun; Li, Yong-Yi; Zhu, Mu-Zhi
2017-07-01
Existing plastic forming equipment is mostly driven by traditional AC motors through long transmission chains; low efficiency, large size, low precision and poor dynamic response are the common disadvantages. In order to realize high-performance forming processes, the driving device should be improved, especially for complicated processing motions. Based on electric servo direct drive technology, a novel AC servo rotating and linear composite driving device is proposed, which implements both spindle rotation and feed motion without a transmission, so that a compact structure and precise control can be achieved. A flux-switching topology is employed in the rotating drive component for strong robustness, and a fractional-slot structure is employed in the linear direct drive component for large force capability. The mechanical structure for combining rotation and linear motion is then designed. A device prototype was manufactured, and the machining of each component and of the whole assembly is presented. Commercial servo amplifiers are utilized to construct the control system of the proposed device. To validate the effectiveness of the proposed composite driving device, experimental studies on dynamic test benches were conducted. The results indicate that the output torque can reach 420 N·m, the dynamic tracking errors are less than about 0.3 rad in the rotating drive, and the dynamic tracking errors are less than about 1.6 mm in the linear feed. The proposed research provides a method for constructing high-efficiency, high-accuracy direct driving devices for plastic forming equipment.
[Constructing images and territories: thinking on the visuality and materiality of remote sensing].
Monteiro, Marko
2015-01-01
This article offers a reflection on the question of the image in science, thinking about how visual practices contribute towards the construction of knowledge and territories. The growing centrality of the visual in current scientific practices shows the need for reflection that goes beyond the image. The object of discussion will be the scientific images used in the monitoring and visualization of territory. The article looks into the relations between visuality and a number of other factors: the researchers that construct it; the infrastructure involved in the construction; and the institutions and policies that monitor the territory. It is argued that such image-relations do not just visualize but help to construct the territory based on specific forms. Exploring this process makes it possible to develop a more complex understanding of the forms through which sciences and technology help to construct realities.
Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.
Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin
2011-11-01
Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
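The QP assignment described above builds soft assignments from reconstruction weights of a descriptor over its neighbouring visual words. As a simplified, hedged illustration (non-negative least squares with sum-to-one normalisation instead of the paper's full quadratic programme), the weights could be obtained as follows.

```python
import numpy as np
from scipy.optimize import nnls

def assignment_weights(descriptor, neighbor_words):
    """Soft-assignment weights of one local descriptor over its k nearest visual words."""
    W = np.asarray(neighbor_words, dtype=float).T   # columns are the neighbouring words
    w, _ = nnls(W, np.asarray(descriptor, dtype=float))
    total = w.sum()
    return w / total if total > 0 else w            # contributions to the histogram bins
```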
NASA Astrophysics Data System (ADS)
Sanghavi, Foram; Agaian, Sos
2017-05-01
The goal of this paper is to (a) test a nuclei-based computer-aided cancer detection system using a human-visual-system-based approach on histopathology images and (b) compare the results of the proposed system with Local Binary Pattern and modified Fibonacci-p pattern systems. The system performance is evaluated using different parameters, such as accuracy, specificity, sensitivity, positive predictive value, and negative predictive value, on 251 prostate histopathology images. An accuracy of 96.69% was observed for cancer detection using the proposed human-visual-based system, compared to 87.42% and 94.70% for Local Binary Patterns and the modified Fibonacci-p patterns, respectively.
A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G
2015-02-01
Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to the open-loop system performance, reaching mean error values of around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with a significant potential positive impact on the safety and quality of laser microsurgeries.
Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley
2014-01-01
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance. PMID:24904399
Optical-Path-Difference Linear Mechanism for the Panchromatic Fourier Transform Spectrometer
NASA Technical Reports Server (NTRS)
Blavier, Jean-Francois L.; Heverly, Matthew C.; Key, Richard W.; Sander, Stanley P.
2011-01-01
A document discusses a mechanism that uses flex-pivots in a parallelogram arrangement to provide frictionless motion with an unlimited lifetime. A voice-coil actuator drives the parallelogram over the required 5-cm travel. An optical position sensor provides feedback for a servo loop that keeps the velocity within 1 percent of the expected value. Residual tip/tilt error is compensated for by a piezo actuator that drives the interferometer mirror. This mechanism builds on previous work that targeted ground-based measurements. The main novel aspects include cryogenic and vacuum operation, high reliability for spaceflight, compactness of the design, an optical layout compatible with the needs of an imaging FTS (i.e., a wide overall field of view), and mirror optical coatings covering a very broad wavelength range (0.26 to 15 μm).
Research ethics and the use of visual images in research with people with intellectual disability.
Boxall, Kathy; Ralph, Sue
2009-03-01
The aim of this paper is to encourage debate about the use of creative visual approaches in intellectual disability research and discussion about Internet publication of photographs. Image-based research with people with intellectual disability is explored within the contexts of tighter ethical regulation of social research, increased interest in the use of visual methodologies, and rapid escalation in the numbers of digital images posted on the World Wide Web. Concern is raised about the possibility that tighter ethical regulation of social research, combined with the multitude of ethical issues raised by the use of image-based approaches may be discouraging the use of creative visual approaches in intellectual disability research. Inclusion in research through the use of accessible research methods is also an ethical issue, particularly in relation to those people who have hitherto been underrepresented in research. Visual approaches which have the potential to include people with profound and multiple intellectual disabilities are also discussed.
Visualizing Chemistry with Infrared Imaging
ERIC Educational Resources Information Center
Xie, Charles
2011-01-01
Almost all chemical processes release or absorb heat. The heat flow in a chemical system reflects the process it is undergoing. By showing the temperature distribution dynamically, infrared (IR) imaging provides a salient visualization of the process. This paper presents a set of simple experiments based on IR imaging to demonstrate its enormous…
IUS Thrust Vector Control (TVC) servo system
NASA Technical Reports Server (NTRS)
Conner, G. E.
1979-01-01
The IUS TVC servo system, which consists of four electrically redundant electromechanical actuators, four potentiometer assemblies, and two controllers providing movable nozzle control on both IUS solid rocket motors, is described. An overview of the more severe IUS TVC servo system design requirements, the system and component designs, and test data acquired on a preliminary development unit is presented. Attention is focused on the unique methods of sensing movable nozzle position and providing for redundant position locks.
Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images
Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi
2016-01-01
Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, since blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework robust to blurred images. Our approach employs an objective measure of images, named the small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blur key-frame selection algorithm to make the VO robust to blurred images. We also carried out various comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework on various blurred images, and the experimental results show that our approach achieves superior performance compared to state-of-the-art methods under blurred-image conditions while not adding much computational cost to the original VO algorithms. PMID:27399704
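The exact SIGD definition is not given in the abstract; as a rough illustration of the underlying idea, the sketch below scores a frame by the fraction of pixels with small gradient magnitudes, which grows as the frame becomes more blurred. The threshold is a placeholder.

```python
import numpy as np

def small_gradient_ratio(gray, threshold=10.0):
    """Fraction of pixels whose gradient magnitude is below a threshold (higher = blurrier)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    return float(np.mean(magnitude < threshold))

# Frames whose score exceeds a learned cut-off can be rejected as key-frames.
```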
Progress in video immersion using Panospheric imaging
NASA Astrophysics Data System (ADS)
Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.
1998-09-01
Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).
Estimation of optimal pivot point for remote center of motion alignment in surgery.
Rosa, Benoît; Gruijthuijsen, Caspar; Van Cleynenbreugel, Ben; Sloten, Jos Vander; Reynaerts, Dominiek; Poorten, Emmanuel Vander
2015-02-01
The determination of an optimal pivot point ([Formula: see text]) is important for instrument manipulation in minimally invasive surgery. Such knowledge is of particular importance for robotic-assisted surgery, where robots need to rotate precisely around a specific point in space in order to minimize trauma to the body wall while maintaining position control. Remote center of motion (RCM) mechanisms are commonly used, where the RCM point is manually and visually aligned. If the RCM is not positioned appropriately, the resulting misalignment can lead to intolerably high forces on the body wall, with increased risk of postoperative complications or instrument damage. An automated method to align the RCM with the [Formula: see text] was developed and tested. Computer vision and a lightweight calibration procedure are used to estimate the optimal pivot point. One or two pre-calibrated cameras viewing the surgical scene are employed. The surgeon is asked to make short pivoting movements, applying as little torque as possible, with an instrument of choice passing through the insertion point while camera images are being recorded. The physical properties of an instrument rotating around a pivot point are exploited in a random sample consensus scheme to robustly estimate the ideal position of the RCM in the image planes. Triangulation is used to estimate the RCM position in 3D. Experiments were performed on a specially designed mockup to test the method. The position of the pivot point is estimated with an average error of less than 1.85 mm using two webcams placed approximately 30 cm to 1 m away from the scene. The entire procedure was completed in a few seconds. The automated method to estimate the ideal position of the RCM was shown to be reliable. The method can be implemented within a visual servoing approach to automatically place the RCM point, or the results can be displayed on a screen to provide guidance to the surgeon. Further work includes the development of an image-guided alignment method and validation with in vivo experiments.
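The geometric core of the method is that every instrument pose defines a line passing through the pivot. A minimal sketch of that idea is given below: a least-squares "closest point to a set of lines" solver wrapped in a simple RANSAC loop. The paper actually estimates the point in the camera image planes and then triangulates; working directly with 3D lines here, as well as the iteration count and inlier threshold, are simplifying assumptions.

```python
import numpy as np


def closest_point_to_lines(points, dirs):
    """Least-squares point nearest to a set of 3D lines (anchor p_i, direction d_i).

    Minimizes sum_i ||(I - d_i d_i^T)(x - p_i)||^2, which yields a 3x3 linear system.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)      # projector orthogonal to the line direction
        A += P
        b += P @ p
    return np.linalg.solve(A, b)


def ransac_pivot(points, dirs, n_iter=200, inlier_dist=2.0, rng=np.random.default_rng(0)):
    """RANSAC over line subsets to reject poorly pivoted motions.

    `points`, `dirs`: (N, 3) arrays of line anchors and directions.
    Iteration count and inlier distance (in the same units as `points`) are illustrative.
    """
    best_pt, best_count = None, -1
    n = len(points)
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)
        cand = closest_point_to_lines(points[idx], dirs[idx])
        resid = []
        for p, d in zip(points, dirs):
            d = d / np.linalg.norm(d)
            resid.append(np.linalg.norm((np.eye(3) - np.outer(d, d)) @ (cand - p)))
        inliers = np.array(resid) < inlier_dist
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best_pt = closest_point_to_lines(points[inliers], dirs[inliers])
    return best_pt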
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visible image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. Registration of the dual-channel images is realized by combining hardware and software methods in the system. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on statistical properties of the images is proposed to address the computational complexity of color transfer. The mapping between the standard lookup table and the improved color lookup table is simple and needs to be computed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visible images are realized by this system. Experimental results show that the color-transferred images appear natural to human eyes and highlight targets effectively while preserving clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
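The lookup-table construction is not detailed in the abstract, but the statistics-based color transfer it accelerates (matching per-channel means and standard deviations to a daytime reference) can be sketched as follows. The choice of the Lab color space and the use of global statistics are assumptions, not the authors' exact formulation.

```python
import cv2
import numpy as np


def transfer_color(fused_bgr, reference_bgr):
    """Reinhard-style color transfer: match per-channel mean/std of the fused
    image to those of a natural daytime reference image.

    Both inputs are uint8 BGR images.
    """
    src = cv2.cvtColor(fused_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mean) * (r_std / s_std) + r_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```

A per-scene lookup table, as the paper proposes, would precompute this per-channel mapping once so that the per-frame cost reduces to table lookups.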
Infrared image enhancement based on the edge detection and mathematical morphology
NASA Astrophysics Data System (ADS)
Zhang, Linlin; Zhao, Yuejin; Dong, Liquan; Liu, Xiaohua; Yu, Xiaomei; Hui, Mei; Chu, Xuhong; Gong, Cheng
2010-11-01
Uncooled infrared imaging technology developed from military necessity and is now widely applied in industry, medicine, and scientific research; the infrared radiation temperature distribution of a measured object's surface can be observed directly. The infrared images collected in our laboratory have the following characteristics: strong spatial correlation; low contrast and poor visual effect; gray-scale images without color or shadow information and with low resolution; lower definition than visible-light images; and many kinds of noise introduced by random disturbances from the external environment. Digital image processing is widely applied in many areas and has become an important means of extending human vision. Traditional image enhancement methods cannot capture the geometric information of images and tend to amplify noise. To remove noise, improve the visual effect, and overcome these enhancement issues, a mathematical model of the focal plane array (FPA) unit was constructed based on matrix transformation theory. According to the characteristics of the FPA, an image enhancement algorithm combining mathematical morphology and edge detection was established. First, the image profile is obtained by edge detection combined with mathematical morphological operators. Then, an ideal background image is obtained by filling the template profile with the original image, and the image noise is removed on that basis. Experiments show that the proposed algorithm can enhance image detail and the signal-to-noise ratio.
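The described pipeline (morphological edge extraction, background estimation by filling the profile, then suppression of the background to lift detail) could be approximated as in the sketch below. The kernel sizes, the opening-based background estimate, and the final blending weights are illustrative assumptions rather than the authors' exact algorithm.

```python
import cv2
import numpy as np


def enhance_ir(gray):
    """Rough sketch: morphological-gradient edges guide a background estimate,
    which is subtracted to suppress slowly varying noise and raise contrast.

    `gray` is a uint8 infrared image.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    edges = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)   # object profile
    # Large-scale background: opening removes structures smaller than the kernel,
    # loosely approximating "filling the template profile" with the original image.
    bg_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
    background = cv2.morphologyEx(gray, cv2.MORPH_OPEN, bg_kernel)
    detail = cv2.subtract(gray, background)                      # top-hat-like detail layer
    enhanced = cv2.addWeighted(detail, 1.5, edges, 0.5, 0)       # blending weights are illustrative
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX)
```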
Evaluating Alignment of Shapes by Ensemble Visualization
Raj, Mukund; Mirzargar, Mahsa; Preston, J. Samuel; Kirby, Robert M.; Whitaker, Ross T.
2016-01-01
The visualization of variability in surfaces embedded in 3D, which is a type of ensemble uncertainty visualization, provides a means of understanding the underlying distribution of a collection or ensemble of surfaces. Although ensemble visualization for isosurfaces has been described in the literature, we conduct an expert-based evaluation of various ensemble visualization techniques in a particular medical imaging application: the construction of atlases or templates from a population of images. In this work, we extend contour boxplot to 3D, allowing us to evaluate it against an enumeration-style visualization of the ensemble members and other conventional visualizations used by atlas builders, namely examining the atlas image and the corresponding images/data provided as part of the construction process. We present feedback from domain experts on the efficacy of contour boxplot compared to other modalities when used as part of the atlas construction and analysis stages of their work. PMID:26186768
Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu
2016-01-01
Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because the visual percepts provided by retinal prostheses are still limited to low resolution, it is important to investigate and apply image processing methods that convey more useful visual information to the wearers. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generates a saliency map from the original image, in which salient regions are grouped into an ROI by fuzzy c-means clustering. GrabCut then generates a proto-object from the ROI-labeled image, which is recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE achieved significantly higher recognition accuracy than direct pixelization (DP). Each saliency-based strategy depended on the performance of image segmentation: under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, while under poor segmentation only BEE improved performance. The application of saliency-based image processing strategies was verified to be beneficial for object recognition in daily scenes under simulated prosthetic vision. These strategies are expected to aid the development of image processing modules for future retinal prostheses and thus provide more benefit to patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
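A rough sketch of the ROI-extraction and proto-object stage is given below. It substitutes a spectral-residual saliency map for Itti's model and a simple threshold for fuzzy c-means clustering, so the saliency function, threshold, and GrabCut iteration count are all assumptions; only the overall saliency-to-GrabCut flow mirrors the paper.

```python
import cv2
import numpy as np


def spectral_residual_saliency(gray):
    """Stand-in for Itti's model: spectral-residual saliency (Hou & Zhang style)."""
    small = cv2.resize(gray, (64, 64)).astype(np.float32)
    f = np.fft.fft2(small)
    log_amp, phase = np.log(np.abs(f) + 1e-8), np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    return cv2.resize(sal / sal.max(), (gray.shape[1], gray.shape[0]))


def extract_proto_object(bgr, sal_thresh=0.5):
    """Threshold the saliency map into an ROI mask and refine it with GrabCut."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    sal = spectral_residual_saliency(gray)
    mask = np.where(sal > sal_thresh, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```

The resulting proto-object mask would then be pixelized or edge-extracted, which is where the 8-4 SP and BEE variants diverge.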
System and method for moving a probe to follow movements of tissue
NASA Technical Reports Server (NTRS)
Feldstein, C.; Andrews, T. W.; Crawford, D. W.; Cole, M. A. (Inventor)
1981-01-01
An apparatus is described for moving a probe that engages moving living tissue such as a heart or an artery that is penetrated by the probe, which moves the probe in synchronism with the tissue to maintain the probe at a constant location with respect to the tissue. The apparatus includes a servo positioner which moves a servo member to maintain a constant distance from a sensed object while applying very little force to the sensed object, and a follower having a stirrup at one end resting on a surface of the living tissue and another end carrying a sensed object adjacent to the servo member. A probe holder has one end mounted on the servo member and another end which holds the probe.
Servo-integrated patterned media by hybrid directed self-assembly.
Xiao, Shuaigang; Yang, Xiaomin; Steiner, Philip; Hsu, Yautzong; Lee, Kim; Wago, Koichi; Kuo, David
2014-11-25
A hybrid directed self-assembly approach is developed to fabricate unprecedented servo-integrated bit-patterned media templates, by combining sphere-forming block copolymers with 5 teradot/in² resolution capability, nanoimprint and optical lithography with overlay control. Nanoimprint generates prepatterns with different dimensions in the data field and servo field, respectively, and optical lithography controls the selective self-assembly process in either field. Two distinct directed self-assembly techniques, low-topography graphoepitaxy and high-topography graphoepitaxy, are elegantly integrated to create bit-patterned templates with flexible embedded servo information. Spinstand magnetic testing at 1 teradot/in² shows a low bit error rate of 10^(-2.43), indicating fully functioning bit-patterned media and great potential of this approach for fabricating future ultra-high-density magnetic storage media.
Web Image Search Re-ranking with Click-based Similarity and Typicality.
Yang, Xiaopeng; Mei, Tao; Zhang, Yong Dong; Liu, Jie; Satoh, Shin'ichi
2016-07-20
In image search re-ranking, besides the well-known semantic gap, the intent gap, i.e. the gap between the representation of a user's query and the user's real intent, is becoming a major problem restricting the development of image retrieval. To reduce human effort, in this paper we use image click-through data, which can be viewed as "implicit feedback" from users, to help bridge the intent gap and further improve image search performance. Generally, the hypothesis that visually similar images should be close in a ranking list, and the strategy that images with higher relevance should be ranked above others, are widely accepted. Image similarity and relevance typicality are therefore the determining factors for obtaining satisfying search results. However, when measuring image similarity and typicality, conventional re-ranking approaches consider only visual information and the initial ranks of images, while overlooking the influence of click-through data. This paper presents a novel re-ranking approach, named spectral clustering re-ranking with click-based similarity and typicality (SCCST). First, to learn an appropriate similarity measurement, we propose a click-based multi-feature similarity learning algorithm (CMSL), which conducts metric learning based on click-based triplet selection and integrates multiple features into a unified similarity space via multiple kernel learning. Then, based on the learnt click-based image similarity measure, we conduct spectral clustering to group visually and semantically similar images into the same clusters, and obtain the final re-ranked list by computing click-based cluster typicality and within-cluster click-based image typicality in descending order. Our experiments on two real-world query-image datasets with diverse representative queries show that the proposed re-ranking approach significantly improves initial search results and outperforms several existing re-ranking approaches.
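A toy version of the final re-ranking step is sketched below: images are clustered, clusters are ordered by their average click counts (cluster typicality), and images inside each cluster are ordered by their own clicks. The use of off-the-shelf spectral clustering in place of the learnt CMSL similarity, and the cluster count, are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering


def rerank(features, clicks, n_clusters=5):
    """Toy click-based re-ranking.

    `features`: (n_images, d) array of visual descriptors.
    `clicks`  : (n_images,) click-through counts acting as implicit feedback.
    Returns image indices sorted by (cluster typicality, per-image clicks).
    """
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="nearest_neighbors",
                                random_state=0).fit_predict(features)
    cluster_typicality = {c: clicks[labels == c].mean() for c in range(n_clusters)}
    order = sorted(range(len(clicks)),
                   key=lambda i: (cluster_typicality[labels[i]], clicks[i]),
                   reverse=True)
    return order
```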
Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu
2013-10-08
In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF) in conjunction with Elman neural network (ENN) learning techniques. The global mapping between the vision space and the robotic workspace is learned using an ENN, and this learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained by using a robust KF to refine the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using new input-output data pairs obtained from the KF cycle to ensure globally stable manipulation. Thus, our method, which requires neither camera nor model parameters, avoids the degraded performance caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator in an eye-in-hand configuration.
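The abstract gives no equations, but a standard way to fuse observed motion/feature pairs into a Jacobian estimate with a Kalman filter, which is the role the KF plays here, is sketched below. The random-walk process model, the noise covariances, and the use of a plain (rather than robust) KF are assumptions; the ENN of the paper would supply the initial estimate x0.

```python
import numpy as np


class JacobianKF:
    """Kalman filter over the vectorized image Jacobian x = vec(J), row-major.

    Measurement model: ds = H x + noise, with H = I_m (kron) dq^T, since ds = J dq,
    where J is the m x n image Jacobian, dq the joint step and ds the feature change.
    """

    def __init__(self, x0, m, n, q=1e-4, r=1e-2):
        self.x = np.asarray(x0, dtype=float)   # current estimate of vec(J), length m*n
        self.P = np.eye(m * n)                 # estimate covariance
        self.Q = q * np.eye(m * n)             # process noise (Jacobian drift), assumed
        self.R = r * np.eye(m)                 # measurement noise on image features, assumed
        self.m, self.n = m, n

    def update(self, dq, ds):
        """Fuse one observed (joint step dq, feature change ds) pair."""
        H = np.kron(np.eye(self.m), np.asarray(dq, dtype=float).reshape(1, -1))
        self.P = self.P + self.Q                          # predict (random-walk model)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ (np.asarray(ds, dtype=float) - H @ self.x)
        self.P = (np.eye(self.m * self.n) - K @ H) @ self.P
        return self.x.reshape(self.m, self.n)             # current Jacobian estimate
```

The refreshed Jacobian estimate then drives an ordinary image-based control step, and the same (dq, ds) pairs can be fed back to re-train the ENN as described above.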
Live Cell Visualization of Multiple Protein-Protein Interactions with BiFC Rainbow.
Wang, Sheng; Ding, Miao; Xue, Boxin; Hou, Yingping; Sun, Yujie
2018-05-18
As one of the most powerful tools for visualizing protein-protein interactions (PPIs) in living cells, bimolecular fluorescence complementation (BiFC) has advanced greatly in recent years, including deep-tissue imaging with far-red or near-infrared fluorescent proteins and super-resolution imaging with photochromic fluorescent proteins. However, little progress has been made toward simultaneous detection and visualization of multiple PPIs in the same cell, mainly due to spectral crosstalk. In this report, we developed novel BiFC assays based on large-Stokes-shift fluorescent proteins (LSS-FPs) to detect and visualize multiple PPIs in living cells. With their large excitation/emission spectral separation, LSS-FPs can be imaged together with normal-Stokes-shift fluorescent proteins to realize multicolor BiFC imaging using a simple illumination scheme. We further demonstrated BiFC rainbow, combining the newly developed BiFC assays with previously established mCerulean/mVenus-based BiFC assays to achieve detection and visualization of four PPI pairs in the same cell. Additionally, we show that, with the complete spectral separation of mT-Sapphire and CyOFP1, LSS-FP-based BiFC assays can be readily combined with intensity-based FRET measurement to detect ternary protein complex formation with minimal spectral crosstalk. Thus, our newly developed LSS-FP-based BiFC assays not only expand the fluorescent protein toolbox available for BiFC but also facilitate the detection and visualization of multiple protein complex interactions in living cells.
NASA Astrophysics Data System (ADS)
Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko
2007-02-01
In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.
An object-oriented framework for medical image registration, fusion, and visualization.
Zhu, Yang-Ming; Cochoff, Steven M
2006-06-01
An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.
Predictive IP controller for robust position control of linear servo system.
Lu, Shaowu; Zhou, Fengxing; Ma, Yajie; Tang, Xiaoqi
2016-07-01
Position control is a typical application of a linear servo system. In this paper, to reduce system overshoot, an integral plus proportional (IP) controller is used in the position control implementation. To further improve control performance, a gain-tuning IP controller based on a generalized predictive control (GPC) law is proposed. First, a second-order linear model is used to represent the dynamics of the position loop, and its parameters are estimated on-line using a recursive least squares method. Second, based on the GPC law, an optimal control sequence is obtained by receding-horizon optimization, which directly supplies the IP controller with the corresponding control parameters during real operation. Finally, simulation and experimental results are presented to show the efficiency of the proposed scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
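The on-line identification step mentioned above can be illustrated with a generic recursive least squares (RLS) estimator for a discrete-time second-order ARX model of the position loop, y_k = a1*y_{k-1} + a2*y_{k-2} + b1*u_{k-1} + b2*u_{k-2}. The model structure, forgetting factor, and initial covariance are assumptions, not values from the paper.

```python
import numpy as np


class RLS2ndOrder:
    """Recursive least squares for y_k = a1*y_{k-1} + a2*y_{k-2} + b1*u_{k-1} + b2*u_{k-2}."""

    def __init__(self, lam=0.98):
        self.theta = np.zeros(4)           # parameter vector [a1, a2, b1, b2]
        self.P = 1e4 * np.eye(4)           # large initial covariance
        self.lam = lam                     # forgetting factor (assumed value)

    def step(self, y_k, phi):
        """phi = [y_{k-1}, y_{k-2}, u_{k-1}, u_{k-2}]; returns the updated parameters."""
        phi = np.asarray(phi, dtype=float)
        err = y_k - phi @ self.theta                         # one-step prediction error
        gain = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, phi @ self.P)) / self.lam
        return self.theta
```

The estimated parameters would feed the GPC law, whose receding-horizon solution in turn sets the IP gains at each sampling instant.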
NASA Technical Reports Server (NTRS)
Shaver, Charles; Williamson, Michael
1986-01-01
The NASA Ames Research Center sponsors a research program for the investigation of Intelligent Flight Control Actuation systems. The use of artificial intelligence techniques in conjunction with algorithmic techniques for autonomous, decentralized fault management of flight-control actuation systems is explored under this program. The design, development, and operation of the interface for laboratory investigation of this program are documented. The interface, architecturally based on the Intel 8751 microcontroller, is an interrupt-driven system designed to receive a digital message from an ultrareliable fault-tolerant control system (UFTCS). The interface links the UFTCS to an electronic servo-control unit, which controls a set of hydraulic actuators. It was necessary to build a UFTCS emulator (also based on the Intel 8751) to provide signal sources for testing the equipment.
Sadeghieh, Ali; Sazgar, Hadi; Goodarzi, Kamyar; Lucas, Caro
2012-01-01
This paper presents a new intelligent approach for adaptive control of a nonlinear dynamic system. A modified version of the brain emotional learning based intelligent controller (BELBIC), a bio-inspired algorithm based upon a computational model of the emotional learning that occurs in the amygdala, is utilized for position control of a real laboratory rotary electro-hydraulic servo (EHS) system. EHS systems are known to be nonlinear and non-smooth due to many factors such as leakage, friction, hysteresis, null shift, saturation, dead zone, and especially the expression of fluid flow through the servo valve. Large values of these factors can easily degrade control performance in the presence of a poor design. In this paper, a mathematical model of the EHS system is derived, and the parameters of the model are identified using the recursive least squares method. In the next step, a BELBIC is designed based on this dynamic model and utilized to control the real laboratory EHS system. To prove the effectiveness of the modified BELBIC's online learning ability in reducing the overall tracking error, results have been compared to those obtained from an optimal PID controller, an auto-tuned fuzzy PI controller (ATFPIC), and a neural network predictive controller (NNPC) under similar circumstances. The results demonstrate not only excellent improvement in control action, but also lower energy consumption. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
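The abstract does not spell out the learning rules, but BELBIC controllers are commonly built on a simplified amygdala/orbitofrontal model of emotional learning; one widely used formulation is sketched below. The learning gains, the choice of sensory inputs, and the reward signal are assumptions, and the thalamic shortcut present in some formulations is omitted.

```python
import numpy as np


class EmotionalLearningCore:
    """Simplified amygdala (A) / orbitofrontal (O) core used inside BELBIC-type controllers.

    The control output is E = A - O. The amygdala weights only grow (learning toward the
    reward signal), while the orbitofrontal weights learn to inhibit responses that
    overshoot the reward.
    """

    def __init__(self, n_inputs, alpha=0.05, beta=0.05):
        self.V = np.zeros(n_inputs)           # amygdala weights
        self.W = np.zeros(n_inputs)           # orbitofrontal weights
        self.alpha, self.beta = alpha, beta   # learning gains (assumed values)

    def step(self, s, reward):
        """s: sensory-input vector (e.g. tracking error and its derivative); reward: emotional cue."""
        s = np.asarray(s, dtype=float)
        A = self.V @ s
        O = self.W @ s
        E = A - O                                           # controller output
        self.V += self.alpha * s * max(0.0, reward - A)     # amygdala: monotone, excitatory
        self.W += self.beta * s * (E - reward)              # orbitofrontal: corrective, inhibitory
        return E
```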
Content-based image retrieval by matching hierarchical attributed region adjacency graphs
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Thies, Christian J.; Guld, Mark O.; Lehmann, Thomas M.
2004-05-01
Content-based image retrieval requires a formal description of visual information. In medical applications, all relevant biological objects have to be represented by this description. Although color as the primary feature has proven successful in publicly available retrieval systems of general purpose, this description is not applicable to most medical images. Additionally, it has been shown that global features characterizing the whole image do not lead to acceptable results in the medical context, or that they are only suitable for specific applications. For a general-purpose content-based comparison of medical images, local, i.e. regional, features that are collected on multiple scales must be used. A hierarchical attributed region adjacency graph (HARAG) provides such a representation and transfers image comparison to graph matching. However, building a HARAG from an image requires a restriction in size to be computationally feasible, while at the same time all visually plausible information must be preserved. For this purpose, mechanisms for the reduction of the graph size are presented. Even with a reduced graph, the problem of graph matching remains NP-complete. In this paper, the Similarity Flooding approach and Hopfield-style neural networks are adapted from the graph matching community to the needs of HARAG comparison. Based on synthetic image material built from simple geometric objects, all visually similar regions were matched accordingly, showing the framework's general applicability to content-based image retrieval of medical images.
Research and analysis of head-directed area-of-interest visual system concepts
NASA Technical Reports Server (NTRS)
Sinacori, J. B.
1983-01-01
An analysis and survey with conjecture supporting a preliminary data base design is presented. The data base is intended for use in a Computer Image Generator visual subsystem for a rotorcraft flight simulator that is used for rotorcraft systems development, not training. The approach taken was to attempt to identify the visual perception strategies used during terrain flight, survey environmental and image generation factors, and meld these into a preliminary data base design. This design is directed at Data Base developers, and hopefully will stimulate and aid their efforts to evolve such a Base that will support simulation of terrain flight operations.
Fine-grained visual marine vessel classification for coastal surveillance and defense applications
NASA Astrophysics Data System (ADS)
Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut
2017-10-01
The need for automated visual content analysis has substantially increased due to the presence of large numbers of images captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging problem than generic image categorization due to high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune the deep representations for marine vessel images. Experiments address two scenarios, classification and verification of naval marine vessels. The classification experiment aims at coarse categorization as well as learning models of fine categories. The verification experiment involves identification of specific naval vessels by revealing whether a pair of images belongs to the same vessel, with the help of the learnt deep representations. Having obtained promising performance, we believe the presented capabilities would be essential components of future coastal and on-board surveillance systems.
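A minimal sketch of the coarse-categorization stage, an ImageNet-pretrained CNN used as a fixed feature extractor feeding a linear SVM, is given below. The backbone (ResNet-18), the input size, and the SVM hyper-parameter are assumptions, not the authors' exact setup.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.svm import LinearSVC

# Pretrained backbone used as a fixed feature extractor (backbone choice is an assumption).
backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()          # drop the classifier head, keep 512-d features
backbone.eval()


@torch.no_grad()
def extract_features(batch):
    """batch: float tensor (N, 3, 224, 224), ImageNet-normalized vessel image chips."""
    return backbone(batch).cpu().numpy()


def train_vessel_classifier(train_batch, train_labels):
    """Linear SVM over deep features for the coarse categories (naval/civil/commercial/service)."""
    feats = extract_features(train_batch)
    clf = LinearSVC(C=1.0)                 # C is an illustrative choice
    clf.fit(feats, train_labels)
    return clf
```

Fine-tuning the backbone on vessel images, as the paper does, would simply replace the frozen feature extractor with a trainable one before the SVM (or a softmax head) is fitted.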
Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.
2011-01-01
Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and, third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, which improved upon a baseline annotation method based on support vector machines by 64% and 24%, respectively. PMID:22811960
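The first two ingredients, the bag-of-features representation and the NMF latent-topic model, can be sketched as below. The descriptor source, vocabulary size, and topic count are assumptions; the probabilistic annotation model that links topics to the 10 annotation terms is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF


def build_bag_of_features(descriptor_sets, vocab_size=200):
    """descriptor_sets: list of (n_i, d) arrays of local patch descriptors per image.

    Returns the learnt codebook and an image-by-visual-word histogram matrix.
    """
    codebook = KMeans(n_clusters=vocab_size, n_init=4, random_state=0)
    codebook.fit(np.vstack(descriptor_sets))
    hists = np.zeros((len(descriptor_sets), vocab_size))
    for i, desc in enumerate(descriptor_sets):
        words = codebook.predict(desc)
        hists[i] = np.bincount(words, minlength=vocab_size)
    return codebook, hists


def latent_topics(hists, n_topics=10):
    """Non-negative matrix factorization: hists ~ W @ T, where rows of T are visual 'topics'."""
    nmf = NMF(n_components=n_topics, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(hists)     # per-image topic mixtures
    return W, nmf.components_        # topic-by-visual-word matrix
```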
1980-04-01
[Garbled list-of-figures fragment; recoverable entries include a typical isolation curve, servo amp/motor/load frequency responses for the inner and outer gimbals, and slave-loop open- and closed-loop frequency and time responses for the inner gimbal.]
The application of Halbach cylinders to brushless ac servo motors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atallah, K.; Howe, D.
1998-07-01
Halbach cylinders are applied to brushless ac servo motors. It is shown that a sinusoidal back-emf waveform and a low cogging torque can be achieved without recourse to conventional design features such as distributed windings and/or stator/rotor skew. A technique for imparting a multipole Halbach magnetization distribution on an isotropic permanent magnet cylinder is described, and it is shown that the torque capability of a Halbach ac servo motor can be up to 33% higher than that of conventional brushless permanent magnet ac motors.
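For reference, the idealized magnetization pattern that defines a p pole-pair Halbach cylinder can be written as below; the sign convention selects whether the flux is concentrated inside or outside the cylinder, and the distribution actually imparted on an isotropic magnet by the magnetizing technique described in the paper will only approximate this ideal.

```latex
% Ideal Halbach magnetization for a cylinder with p pole-pairs
% (the sign choice distinguishes internal-field from external-field designs)
M_r(\theta) = M_{\mathrm{rem}} \cos(p\theta), \qquad
M_\theta(\theta) = \pm\, M_{\mathrm{rem}} \sin(p\theta)
```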
Vision servo of industrial robot: A review
NASA Astrophysics Data System (ADS)
Zhang, Yujin
2018-04-01
Robot technology has been applied to many areas of production and daily life. With the continuous development of robot applications, the requirements placed on robots are also increasing. To give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are presented.
Ultra-Compact Transputer-Based Controller for High-Level, Multi-Axis Coordination
NASA Technical Reports Server (NTRS)
Zenowich, Brian; Crowell, Adam; Townsend, William T.
2013-01-01
The design of machines that rely on arrays of servomotors such as robotic arms, orbital platforms, and combinations of both, imposes a heavy computational burden to coordinate their actions to perform coherent tasks. For example, the robotic equivalent of a person tracing a straight line in space requires enormously complex kinematics calculations, and complexity increases with the number of servo nodes. A new high-level architecture for coordinated servo-machine control enables a practical, distributed transputer alternative to conventional central processor electronics. The solution is inherently scalable, dramatically reduces bulkiness and number of conductor runs throughout the machine, requires only a fraction of the power, and is designed for cooling in a vacuum.
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR
NASA Astrophysics Data System (ADS)
Sidorchuk, D.; Volkov, V.; Gladilin, S.
2018-04-01
This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a great deal of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that utilizes some peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified with satellite imagery.
Tension is servo controlled in film advance system
NASA Technical Reports Server (NTRS)
1965-01-01
Servocontrol device feeds film into a roller system. Two linear potentiometers connected to spring loaded tension rollers furnish servo input signal. Can be used in any continuous material transport system.
Rabbi, Md Shifat-E; Hasan, Md Kamrul
2017-02-01
Although strain imaging provides an effective way of determining the pathologic condition of solid lesions by displaying tissue stiffness contrast, such imaging is still an open problem for fluid-filled lesions. In this paper, we propose a novel speckle-content-based strain imaging technique for visualization and classification of fluid-filled lesions in elastography, after automatic identification of the presence of such lesions. Speckle-content-based strain, defined as a function of speckle density using the relationship between strain and speckle density, gives an indirect strain value for fluid-filled lesions. To measure the speckle density of fluid-filled lesions, two new criteria are used, based on the oscillation count of the windowed radio-frequency signal and the local variance of the normalized B-mode image. An improved speckle tracking technique is also proposed for strain imaging of the solid lesions and background. A wavelet-based integration technique is then proposed to combine the strain images from these two techniques so that both solid and fluid-filled lesions can be visualized in a common framework. The final output of our algorithm is a high-quality composite strain image that can effectively visualize both solid and fluid-filled breast lesions, in addition to the speckle content of the fluid-filled lesions for their discrimination. The performance of our algorithm is evaluated using in vivo patient data and compared with recently reported techniques. The results show that both solid and fluid-filled lesions are better visualized using our technique and that fluid-filled lesions can be classified with good accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
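The two speckle-density criteria named above, the oscillation count of the windowed RF signal and the local variance of the normalized B-mode image, reduce to simple window statistics; a sketch is given below. The window size and the subsequent mapping from speckle density to an indirect strain value are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def oscillation_count(rf_window):
    """Number of zero crossings in a windowed RF A-line segment.

    Fluid regions produce fewer speckle oscillations than solid tissue.
    """
    signs = np.sign(rf_window - np.mean(rf_window))
    return int(np.sum(signs[:-1] * signs[1:] < 0))


def local_variance_map(bmode, win=9):
    """Local variance of a normalized B-mode image (low variance suggests speckle-poor, likely fluid)."""
    img = bmode.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)   # normalize to [0, 1]
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img * img, size=win)
    return np.maximum(mean_sq - mean * mean, 0.0)
```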
[Image processing system of visual prostheses based on digital signal processor DM642].
Xie, Chengcheng; Lu, Yanyu; Gu, Yun; Wang, Jing; Chai, Xinyu
2011-09-01
This paper employed a DSP platform to create a real-time, portable image processing system and introduced a series of commonly used algorithms for visual prostheses. Performance evaluation revealed that this platform can execute image processing algorithms in real time.
MEMS-based system and image processing strategy for epiretinal prosthesis.
Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong
2015-01-01
Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated by psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to a low resolution. These image processing methods can assist epiretinal prostheses in vision restoration.
Supervised pixel classification using a feature space derived from an artificial visual system
NASA Technical Reports Server (NTRS)
Baxter, Lisa C.; Coggins, James M.
1991-01-01
Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not on image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.
A knowledge based system for scientific data visualization
NASA Technical Reports Server (NTRS)
Senay, Hikmet; Ignatius, Eve
1992-01-01
A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
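The background-elimination step described above amounts to frame differencing: keep only the pixels that brighten when the laser is switched on, and take their centroid as the spot. A sketch with an illustrative threshold follows; the patent's actual pixel-elimination procedure may differ in detail.

```python
import cv2
import numpy as np


def find_laser_spot(frame_without_laser, frame_with_laser, diff_thresh=40):
    """Return the (x, y) centroid of the laser spot, or None if no spot is found.

    Both inputs are uint8 BGR frames of the same scene; only pixels that become
    brighter once the laser is on survive the differencing step.
    """
    a = cv2.cvtColor(frame_without_laser, cv2.COLOR_BGR2GRAY).astype(np.int16)
    b = cv2.cvtColor(frame_with_laser, cv2.COLOR_BGR2GRAY).astype(np.int16)
    diff = np.clip(b - a, 0, 255).astype(np.uint8)              # keep only new bright pixels
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])
```

The disparity between this centroid and the stored reference point is what the ranging analysis then uses to estimate distance to the target.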
Image motion compensation on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/
NASA Technical Reports Server (NTRS)
Tarbell, T. D.; Duncan, D. W.; Finch, M. L.; Spence, G.
1981-01-01
The SOUP experiment on Spacelab 2 includes a 30 cm visible light telescope and focal plane package mounted on the Instrument Pointing System (IPS). Scientific goals of the experiment dictate pointing stability requirements of less than 0.05 arcsecond jitter over periods of 5-20 seconds. Quantitative derivations of these requirements from two different aspects are presented: (1) avoidance of motion blurring of diffraction-limited images; (2) precise coalignment of consecutive frames to allow measurement of small image differences. To achieve this stability, a fine guider system capable of removing residual jitter of the IPS and image motions generated on the IPS cruciform instrument support structure has been constructed. This system uses solar limb detectors in the prime focal plane to derive an error signal. Image motion due to pointing errors is compensated by the agile secondary mirror mounted on piezoelectric transducers, controlled by a closed-loop servo system.
NASA Astrophysics Data System (ADS)
Haigang, Sui; Zhina, Song
2016-06-01
Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and helps focus on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in the images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms remain, such as small waves and ribbon clouds, so simple shape and texture analysis is adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show that the proposed method is insensitive to waves, clouds, illumination and ship size.
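A sketch of the chip-classification stage, LBP texture histograms feeding an SVM, is shown below; the saliency stage is omitted, and the LBP neighbourhood, histogram binning, and SVM hyper-parameters are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1   # LBP neighbourhood parameters (assumed)


def lbp_histogram(chip_gray):
    """Uniform LBP histogram of one image chip, used as its texture signature."""
    lbp = local_binary_pattern(chip_gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist


def train_ship_chip_classifier(chips, labels):
    """SVM over LBP signatures: label 1 = chip contains a ship, 0 = sea/background."""
    X = np.array([lbp_histogram(c) for c in chips])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # hyper-parameters are illustrative
    clf.fit(X, labels)
    return clf
```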
NASA Astrophysics Data System (ADS)
Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim
2014-09-01
Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel using a sheet of light rather than existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from microlitres to nanolitres per minute. Moreover, this opens up in-vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, and simultaneous visualization of the fluorescently-labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine.
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to image quality metrics and related human factors research that are needed to establish baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed, and several critical image quality issues are identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology of volume displays.
Serial grouping of 2D-image regions with object-based attention in humans.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-06-13
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
[Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].
Jin, Yufei; Ma, Meng; Yang, Xin
2016-04-01
Medical image registration is very challenging due to the variety of imaging modalities, image quality, wide inter-patient variability, and intra-patient variability as disease progresses, together with strict requirements for robustness. Inspired by semantic models, especially the recent tremendous progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast, a small dynamic range, and involve only intensities, traditional visual word models do not perform very well. To benefit from the advantages of related work, we proposed a novel visual word model named directional visual words, which performs better on medical images. We then applied this model to medical image registration. In our experiments, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to locate the positions of the key structures accurately. Subsequently, the corresponding images are registered using the areas around these positions. Results of experiments performed on real cardiac images showed that our method can achieve high registration accuracy in specific areas.
Preterm infant thermal responses to caregiving differ by incubator control mode.
Thomas, Karen A
2003-12-01
To determine the influence of caregiving on preterm infant and incubator temperature and to investigate incubator control mode in thermal responses to caregiving. The intensive within-subject design involved continuous recording of infant and incubator temperature and videotaping throughout a 24-hour period in 40 hospitalized preterm infants. Temperature at care onset was compared with care offset, and 5, 10, 15, and 20 minutes following care offset using ANOVA-RM. Following caregiving, infant and incubator temperature differed significantly over time by incubator control mode. In air servo-control, infant temperature tended to decrease after caregiving, while in skin servo-control infant temperature remained relatively stable. With caregiving, incubator temperature remained consistent in air servo-control and increased in skin servo-control. The temperature effects of caregiving should be considered relative to maintenance of thermoneutrality and unintentional thermal stimulation.
ERIC Educational Resources Information Center
Price, Norman T.
2013-01-01
The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active…
The visual communication in the optometric scales.
Dantas, Rosane Arruda; Pagliuca, Lorita Marlena Freitag
2006-01-01
Communication through vision involves visual learning, which demands ocular integrity; hence the importance of evaluating visual acuity. The scale of images, formed by optotypes, is a method for verifying visual acuity in kindergarten children. To identify an optotype, the child needs to know the image being shown. Given the importance of visual communication during the construction of image scales, this bibliographic, analytical study reflects on the principles for constructing such charts. The drawing used as an optotype is considered a non-verbal symbolic expression of the body and/or of the environment, constructed from the experiences captured by the individual. The indiscriminate use of images is contested, since prior knowledge of the image is required. Despite the subjectivity of the optotypes, the scales remain valid if the images are adapted to the universe of the children being examined.
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Jung, Minju; Hwang, Jungsik; Tani, Jun
2015-01-01
It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887
Zhu, Xinxin; Jin, Hui; Gao, Cuili; Gui, Rijun; Wang, Zonghua
2017-01-01
In this article, a facile aqueous synthesis of carbon dots (CDs) was developed using natural kelp as a new carbon source. Through hydrothermal carbonization of kelp juice, fluorescent CDs were prepared and their surface was modified with polyethylenimine (PEI). The PEI-modified CDs were conjugated with fluorescein isothiocyanate (FITC) to fabricate CDs-FITC composites. To exploit broad applications, the CDs-FITC composites were developed as fluorescent sensing and imaging platforms for pH and Cu2+. The analytical performance of the composite-based fluorescence (FL) sensors was evaluated, including visual FL imaging of pH in a glass bottle, ratiometric FL sensing of pH in yogurt samples, visual FL latent-fingerprint and leaf imaging detection of [Cu2+], and dual-signal FL sensing of [Cu2+] in yogurt and human serum samples. Experimental results from the ratiometric, visual, and dual-signal FL sensing and imaging applications confirmed the feasibility, accuracy, stability and simplicity of the CDs-FITC composite-based FL sensors for the detection of pH and Cu2+ ions in real samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Qiong; Gluch, Jürgen; Krüger, Peter
A laboratory-based X-ray microscope is used to investigate the 3D structure of unstained whole pollen grains. For the first time, high-resolution laboratory-based hard X-ray microscopy (HXRM) is applied to study pollen grains. Based on the efficient acquisition of statistically relevant, information-rich images using Zernike phase contrast, both the surface and internal structures of pine pollen - including exine, intine and cellular structures - are clearly visualized. The specific volumes of these structures are calculated from the tomographic data. The systematic three-dimensional study of pollen grains provides morphological and structural information about taxonomic characters that are essential in palynology. Such studies have a direct impact on disciplines such as forestry, agriculture, horticulture, plant breeding and biodiversity. Highlights: unstained whole pine pollen was visualized by high-resolution laboratory-based HXRM for the first time; pollen grains were compared across LM, SEM and high-resolution laboratory-based HXRM; phase contrast imaging provides significantly higher contrast in the raw images than absorption contrast imaging; the surface and internal structure of the pine pollen, including exine, intine and cellular structures, are clearly visualized; 3D volume data of unstained whole pollen grains were acquired and the specific volumes of the different layers were calculated.
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in compressor complexity. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Content-based multiple bitstream image transmission over noisy channels.
Cao, Lei; Chen, Chang Wen
2002-01-01
In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
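A minimal Python sketch (not the authors' implementation) of the content-based grouping idea described above: coefficients from all detail subbands that map to the same spatial cell of the image are pooled into a wavelet block, blocks are ranked by energy as a simple stand-in for the content classification, and each third of the ranking is assigned a hypothetical RCPC code rate. The grid size, wavelet, and rate values are illustrative assumptions; in the actual scheme the classification is content-driven and the protection also accounts for bit error sensitivity within the SPIHT bitstream.

    import numpy as np
    import pywt

    def wavelet_block_energy(image, wavelet="db2", levels=3, grid=8):
        """Energy of each spatial wavelet block (coefficients from all detail
        subbands that correspond to the same grid cell of the image)."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
        energy = np.zeros((grid, grid))
        for detail in coeffs[1:]:                  # skip the approximation band
            for band in detail:                    # LH, HL, HH subbands
                h, w = band.shape
                for gy in range(grid):
                    for gx in range(grid):
                        ys = slice(gy * h // grid, (gy + 1) * h // grid)
                        xs = slice(gx * w // grid, (gx + 1) * w // grid)
                        energy[gy, gx] += np.sum(band[ys, xs] ** 2)
        return energy

    def assign_rcpc_rates(energy, rates=(1/2, 2/3, 8/9)):
        """Stronger (lower-rate) hypothetical RCPC codes for high-energy blocks."""
        order = np.argsort(energy, axis=None)[::-1]       # high energy first
        out = np.empty(energy.size)
        for rate, idx in zip(rates, np.array_split(order, len(rates))):
            out[idx] = rate
        return out.reshape(energy.shape)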
Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori
2017-10-01
We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.
Computer-aided light sheet flow visualization using photogrammetry
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1994-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
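The core geometric step described above, projecting a two-dimensional light-sheet video image into three-dimensional space, amounts to intersecting each pixel's viewing ray with the known light-sheet plane. The Python sketch below assumes an ideal pinhole camera with intrinsics K, rotation R and center C, and a plane given by a point and a normal; it illustrates the principle rather than reproducing the NASA photogrammetric model.

    import numpy as np

    def pixel_to_lightsheet_point(u, v, K, R, C, plane_point, plane_normal):
        """Intersect the viewing ray of pixel (u, v) with the light-sheet plane."""
        # Ray direction in world coordinates for a pinhole camera.
        d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
        n = plane_normal / np.linalg.norm(plane_normal)
        denom = n @ d
        if abs(denom) < 1e-9:
            return None                      # ray is parallel to the light sheet
        t = n @ (plane_point - C) / denom
        return C + t * d                     # 3-D point on the light sheet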
Computer-Aided Light Sheet Flow Visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
ERIC Educational Resources Information Center
Humphreys, Glyn W.; Wulff, Melanie; Yoon, Eun Young; Riddoch, M. Jane
2010-01-01
Two experiments are reported that use patients with visual extinction to examine how visual attention is influenced by action information in images. In Experiment 1 patients saw images of objects that were either correctly or incorrectly colocated for action, with the objects held by hands that were congruent or incongruent with those used…
Mafrica, Stefano; Servel, Alain; Ruffier, Franck
2016-11-10
Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to various visual patterns encountered thanks to its M 2 APIX auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackerman steering model. Indoor and outdoor experiments were carried out in which the robot was driven in the closed-loop mode based on the velocity and steering angle estimates. The experimental results obtained show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and in a 7-decade high-dynamic light level range. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with a relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor's speed sensor.
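A compact Python sketch of the kind of prediction step such an EKF could use, assuming a kinematic Ackermann (bicycle) model with state (x, y, heading) driven by the velocity and steering-angle estimates; the wheelbase and noise values are placeholders, not the BioCarBot's parameters.

    import numpy as np

    def ekf_predict(x, P, v, delta, dt, L=0.25, q=1e-3):
        """EKF prediction with an Ackermann/bicycle kinematic model.

        x = [X, Y, theta]; v is the velocity and delta the steering angle
        (both estimated from the optic-flow sensors in the actual system)."""
        X, Y, theta = x
        x_pred = np.array([X + v * np.cos(theta) * dt,
                           Y + v * np.sin(theta) * dt,
                           theta + (v / L) * np.tan(delta) * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                      [0.0, 1.0,  v * np.cos(theta) * dt],
                      [0.0, 0.0,  1.0]])                  # motion Jacobian
        Q = q * np.eye(3)                                 # placeholder process noise
        return x_pred, F @ P @ F.T + Q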
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful to capture discriminative visual patterns in specific computer vision tasks. In order to overcome this problem we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal in the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not been explored yet within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
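A minimal Python sketch of how a visual n-gram histogram could be built once each local patch has been assigned a visual-word index. Here neighbouring word indices are concatenated into n-grams by scanning each row of a patch grid left to right, which is an assumed spatial ordering used for illustration only.

    from collections import Counter

    def visual_ngram_histogram(word_grid, n=2, codebook=None):
        """word_grid: 2-D list of visual-word ids (one per patch position)."""
        grams = []
        for row in word_grid:
            for i in range(len(row) - n + 1):
                grams.append(tuple(row[i:i + n]))     # horizontal n-gram
        counts = Counter(grams)
        if codebook is None:                          # build the codebook on the fly
            codebook = sorted(counts)
        return [counts.get(g, 0) for g in codebook], codebook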
[Spatial domain display for interference image dataset].
Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia
2011-11-01
The need to visualize imaging interferometer data is pressing for users engaged in image interpretation and information extraction. However, conventional research on visualization has focused only on the spectral image dataset in the spectral domain. Hence, quick display of the interference spectral image dataset is one of the key steps in interference image processing. Conventional visualization of an interference dataset applies a classical spectral image display method after a Fourier transformation. In the present paper, the problem of quickly viewing interferometer imagery in the spatial domain is addressed and an algorithm that simplifies the matter is proposed. The Fourier transformation is an obstacle since its computation time is large, and the situation deteriorates further as the dataset grows. The proposed algorithm, named interference weighted envelopes, frees the dataset from the transformation. The authors construct three interference weighted envelopes based, respectively, on the Fourier transformation, the features of the interference data, and the human visual system. Comparison of the proposed and conventional methods shows a large difference in display time.
Visual Images of Subjective Perception of Time in a Literary Text
ERIC Educational Resources Information Center
Nesterik, Ella V.; Issina, Gaukhar I.; Pecherskikh, Taliya F.; Belikova, Oxana V.
2016-01-01
The article is devoted to the subjective perception of time, or psychological time, as a text category and a literary image. It focuses on the visual images that are characteristic of different types of literary time--accelerated, decelerated and frozen (vanished). The research is based on the assumption that the category of subjective perception…
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focused plane with objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low resolution and limited dynamic range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
1990-01-01
Miniature Crystal Oscillator Second, manufacturability improvements in order to (TMXO), is a very small, very low power , high stability, reduce...gives no velocity selection or dispersion power changes of only 1 dB. results in a very broad velocity spread in the atomic beam and a comparatively high ...vacuum envelope. The to servo such things as microwave power and C-field, which imaging nature of this system provides high selectivity have always been
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and figure-ground separation, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze by higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an Image/Video Understanding system can convert images into knowledge models, and resolve uncertainty and ambiguity. This makes it possible to create intelligent computer vision systems for design and manufacturing.
Art for reward's sake: visual art recruits the ventral striatum.
Lacey, Simon; Hagtvedt, Henrik; Patrick, Vanessa M; Anderson, Amy; Stilla, Randall; Deshpande, Gopikrishna; Hu, Xiaoping; Sato, João R; Reddy, Srinivas; Sathian, K
2011-03-01
A recent study showed that people evaluate products more positively when they are physically associated with art images than similar non-art images. Neuroimaging studies of visual art have investigated artistic style and esthetic preference but not brain responses attributable specifically to the artistic status of images. Here we tested the hypothesis that the artistic status of images engages reward circuitry, using event-related functional magnetic resonance imaging (fMRI) during viewing of art and non-art images matched for content. Subjects made animacy judgments in response to each image. Relative to non-art images, art images activated, on both subject- and item-wise analyses, reward-related regions: the ventral striatum, hypothalamus and orbitofrontal cortex. Neither response times nor ratings of familiarity or esthetic preference for art images correlated significantly with activity that was selective for art images, suggesting that these variables were not responsible for the art-selective activations. Investigation of effective connectivity, using time-varying, wavelet-based, correlation-purged Granger causality analyses, further showed that the ventral striatum was driven by visual cortical regions when viewing art images but not non-art images, and was not driven by regions that correlated with esthetic preference for either art or non-art images. These findings are consistent with our hypothesis, leading us to propose that the appeal of visual art involves activation of reward circuitry based on artistic status alone and independently of its hedonic value. Copyright © 2010 Elsevier Inc. All rights reserved.
Visual salience metrics for image inpainting
NASA Astrophysics Data System (ADS)
Ardis, Paul A.; Singhal, Amit
2009-01-01
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.
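One way to realize a saliency-based noticeability score is sketched below in Python: a spectral-residual saliency map is computed for the inpainted image and averaged inside the filled region, so that lower values suggest a less noticeable fill. Both the saliency model and the averaging rule are illustrative stand-ins for the computational attention model the authors use.

    import numpy as np
    from scipy.ndimage import convolve

    def spectral_residual_saliency(gray):
        """Simple spectral-residual saliency map (Hou & Zhang style)."""
        f = np.fft.fft2(gray.astype(float))
        log_amp = np.log1p(np.abs(f))
        phase = np.angle(f)
        kernel = np.ones((3, 3)) / 9.0
        residual = log_amp - convolve(log_amp, kernel, mode="nearest")
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return sal / sal.max()

    def noticeability(inpainted_gray, mask):
        """Mean saliency inside the inpainted region (mask == True)."""
        return float(spectral_residual_saliency(inpainted_gray)[mask].mean())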
Wang, Chengwen; Quan, Long; Zhang, Shijie; Meng, Hongjun; Lan, Yuan
2017-03-01
Hydraulic servomechanisms are typical mechanical/hydraulic double-dynamics coupling systems with high-stiffness control and mismatched-uncertainty input problems, which hinder direct application of many advanced control approaches in the hydraulic servo field. In this paper, by introducing singular value perturbation theory, the original double-dynamics coupling model of the hydraulic servomechanism was reduced to an integral chain system, so that the popular ADRC (active disturbance rejection control) technology could be directly applied to the reduced system. In addition, the high-stiffness control and mismatched-uncertainty input problems are avoided. The validity of the simplified model is analyzed and proven theoretically. The standard linear ADRC algorithm is then developed based on the obtained reduced-order model. Extensive comparative co-simulations and experiments are carried out to illustrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
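A minimal sketch of a standard linear ADRC step of the kind referred to above, written in Python for a generic second-order integral-chain plant y'' = f + b0*u: a linear extended state observer estimates the states and the total disturbance f, which is then cancelled by a PD-like law. The bandwidth-based gains and the plant form are illustrative, not the paper's hydraulic model.

    import numpy as np

    def linear_adrc_step(z, y, u, b0, wo, wc, r, dt):
        """One step of linear ADRC for y'' = f + b0*u.

        z = [z1, z2, z3] is the extended state observer state
        (position, velocity, total disturbance f); wo and wc are the
        observer and controller bandwidths; r is the reference."""
        l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3       # ESO gains (bandwidth tuning)
        e = y - z[0]
        z1 = z[0] + dt * (z[1] + l1 * e)
        z2 = z[1] + dt * (z[2] + l2 * e + b0 * u)
        z3 = z[2] + dt * (l3 * e)
        kp, kd = wc**2, 2 * wc
        u_new = (kp * (r - z1) - kd * z2 - z3) / b0  # disturbance rejection law
        return np.array([z1, z2, z3]), u_new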
Visual Sensing for Urban Flood Monitoring
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes: the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused and thus finding stereo correspondences is enhanced. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities than in the monocular case can be expected. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
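The Kalman-like update of a virtual depth hypothesis described above can be pictured as an inverse-variance weighted fusion of independent estimates. The Python sketch below assumes Gaussian, uncorrelated estimates and only illustrates the fusion rule, not the paper's full pipeline.

    def fuse_depth(d1, var1, d2, var2):
        """Fuse two virtual-depth estimates with variances (Kalman-like update)."""
        k = var1 / (var1 + var2)          # gain toward the new observation
        d = d1 + k * (d2 - d1)            # fused depth
        var = (1.0 - k) * var1            # fused variance never exceeds var1
        return d, var

Repeatedly applying this rule as further micro-image observations arrive yields a depth pixel whose variance shrinks with each consistent estimate.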
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.
Infrared and visible image fusion scheme based on NSCT and low-level visual features
NASA Astrophysics Data System (ADS)
Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei
2016-05-01
Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance, we designed two new activity measures for fusion of the lowpass subbands and the highpass subbands. These measures are developed based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
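Since no standard NSCT implementation is assumed here, the Python sketch below illustrates the same fusion logic on an ordinary wavelet decomposition: each source image is decomposed, a local-energy activity measure (a stand-in for the HVS-motivated measures designed in the paper) is computed per subband, and the coefficient with the higher activity is selected before reconstruction.

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def fuse_images(img_a, img_b, wavelet="db2", level=3, win=5):
        ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
        cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                  # simple mean for the lowpass band
        for sa, sb in zip(ca[1:], cb[1:]):               # highpass subband triples
            merged = []
            for a, b in zip(sa, sb):
                act_a = uniform_filter(a * a, win)       # local-energy activity measure
                act_b = uniform_filter(b * b, win)
                merged.append(np.where(act_a >= act_b, a, b))
            fused.append(tuple(merged))
        return pywt.waverec2(fused, wavelet)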
NASA Astrophysics Data System (ADS)
Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit
2016-03-01
We explore the combination of text metadata, such as patients' age and gender, with image-based features, for X-ray chest pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which is performed by comparing the corresponding SVM classification probabilities. We show that retrieval results improve once the age and gender information is combined with the features extracted from the last layers of the network, with the best results obtained using the classification-based scheme. Visualization of the X-ray data is presented by embedding the high-dimensional deep learning features in a 2-D space while preserving the pairwise distances using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar to the query image and among themselves, which is a characteristic we do not see in a 1-D traditional ranking.
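A small Python sketch of the descriptor-based variant described above: deep features are L2-normalized, the age and gender metadata are appended with a weighting factor, neighbours are ranked by Euclidean distance, and the descriptors can be embedded in 2-D with t-SNE. The metadata scaling, weighting and embedding settings are illustrative choices, not the authors' exact configuration.

    import numpy as np
    from sklearn.manifold import TSNE

    def build_descriptors(deep_feats, ages, genders, w_meta=0.1):
        """Concatenate L2-normalized deep features with scaled metadata.

        genders is assumed to be coded numerically (e.g. 0/1)."""
        f = deep_feats / (np.linalg.norm(deep_feats, axis=1, keepdims=True) + 1e-9)
        meta = np.stack([ages / 100.0, genders.astype(float)], axis=1)
        return np.hstack([f, w_meta * meta])

    def retrieve(query, database, k=5):
        d = np.linalg.norm(database - query, axis=1)
        return np.argsort(d)[:k]                 # indices of the k nearest images

    def embed_2d(descriptors):
        return TSNE(n_components=2, init="pca").fit_transform(descriptors)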
Investigation of the low flux servo-controlled limit of a co-phased interferometer
NASA Astrophysics Data System (ADS)
Damé, Luc; Derrien, Marc; Kozlowski, Mathias; Merdjane, Mohamed
2018-04-01
This paper, "Investigation of the low flux servo-controlled limit of a co-phased interferometer," was presented as part of International Conference on Space Optics—ICSO 1997, held in Toulouse, France.
Simulation of proportional control of hydraulic actuator using digital hydraulic valves
NASA Astrophysics Data System (ADS)
Raghuraman, D. R. S.; Senthil Kumar, S.; Kalaiarasan, G.
2017-11-01
Fluid power systems using oil hydraulics in earth-moving and construction equipment have long used proportional and servo control valves to achieve precise and accurate position control backed by good system performance. Such valves have built-in feedback control and exhibit good response, sensitivity and fine control of the actuators. Servo valves and proportional valves possess less hysteresis than on-off type valves, but when a servo valve spool gets stuck in one position, a high-frequency signal known as jitter is applied to bring the spool back, whereas on-off type valves require less sophisticated measures to retract the spool. Hence on-off type valves are used in an approach known as digital valve technology, which caters to precise control of slow-moving loads with fast switching times and good flow and pressure control, mimicking the performance of an equivalent "proportional valve" or "servo valve".
Servo control booster system for minimizing following error
Wise, William L.
1985-01-01
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
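A toy Python sketch of the by-exception behaviour described in the abstract: a conventional position loop runs continuously, and a secondary correction term is engaged only while the command-to-response error reaches the feedback resolution increment ΔS_R. The gains and the continuous error signal are placeholders for illustration, not the clocked loop of the patent.

    def booster_step(command, position, delta_sr, kp=1.0, kb=0.5):
        """Return the actuator drive; engage the booster loop only by exception."""
        error = command - position
        drive = kp * error                # conventional servo loop, always active
        if abs(error) >= delta_sr:        # error reached the resolution increment ΔS_R
            drive += kb * error           # secondary correction loop engaged
        return drive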
The study on servo-control system in the large aperture telescope
NASA Astrophysics Data System (ADS)
Hu, Wei; Zhenchao, Zhang; Daxing, Wang
2008-08-01
Servo tracking for large or extremely large astronomical telescopes is one of the crucial technologies that must be addressed in their development and manufacture. To meet the control requirements of such telescopes, this paper designs a servo tracking control system for a large astronomical telescope. The system is organized as a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's operating mode, while the slave computer implements the control algorithm and executes real-time control. The servo control uses a direct-drive machine and adopts DSP technology to implement a direct torque control algorithm. Such a design not only improves control system performance, but also greatly reduces the volume and cost of the control system, which is of practical significance. The design scheme is shown to be reasonable by calculation and simulation, and the system can be applied to large astronomical telescopes.
Content-Based Medical Image Retrieval
NASA Astrophysics Data System (ADS)
Müller, Henning; Deserno, Thomas M.
This chapter details the necessity for alternative access concepts to the currently mainly text-based methods in medical information retrieval. This need is partly due to the large amount of visual data produced, the increasing variety of medical imaging data and changing user patterns. The stored visual data contain large amounts of unused information that, if well exploited, can help diagnosis, teaching and research. The chapter briefly reviews the history of image retrieval and its general methods before technologies that have been developed in the medical domain are focussed. We also discuss evaluation of medical content-based image retrieval (CBIR) systems and conclude with pointing out their strengths, gaps, and further developments. As examples, the MedGIFT project and the Image Retrieval in Medical Applications (IRMA) framework are presented.
Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune
2015-01-01
When observers perceive several objects in a space, they should, at the same time, effectively perceive their own position as a viewpoint. However, little is known about observers' percepts of their own spatial location based on the visual scene information viewed from them. Previous studies indicate that two distinct visual spatial processes exist in the locomotion situation: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments containing only static lane edge information (i.e., limited information). We investigated the visual factors associated with static lane edge information that may affect these perceptions. In particular, we examined the effects of two factors on egocentric direction and position perceptions. One is the "uprightness factor": "far" visual information is seen at a higher location than "near" visual information. The other is the "central vision factor": observers usually look at "far" visual information using central vision (i.e., foveal vision), whereas they view 'near' visual information using peripheral vision. Experiment 1 examined the effect of the "uprightness factor" using normal and inverted road images. Experiment 2 examined the effect of the "central vision factor" using normal and transposed road images where the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is impaired by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both "uprightness" and "central vision" factors are important for egocentric direction perception, but not for egocentric position perception. Therefore, the two visual spatial perceptions about observers' own viewpoints are fundamentally dissociable. PMID:26648895
Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system
NASA Astrophysics Data System (ADS)
Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu
2018-09-01
A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), the distortion originating from lens aberration warps elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and to improve the image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II and the designed freeform optical element, the floating aerial 3D image is presented.
Nonlinear control for a class of hydraulic servo system.
Yu, Hong; Feng, Zheng-jin; Wang, Xu-yong
2004-11-01
The dynamics of hydraulic systems are highly nonlinear and the system may be subjected to non-smooth and discontinuous nonlinearities due to directional change of valve opening, friction, etc. Aside from the nonlinear nature of hydraulic dynamics, hydraulic servo systems also have a large degree of model uncertainty. To address these challenging issues, a robust state-feedback controller is designed by employing the backstepping design technique such that the system output tracks a given signal arbitrarily well, and all signals in the closed-loop system remain bounded. Moreover, a relevant disturbance attenuation inequality is satisfied by the closed-loop signals. Compared with previously proposed robust controllers, this paper's robust controller, based on the backstepping recursive design method, is easier to design and is more suitable for implementation.
Goldovsky, David; Jouravsky, Valery; Pe'er, Avi
2016-12-12
We present an approach to locking of optical cavities with piezoelectrically actuated mirrors based on a simple and effective mechanical decoupling of the mirror and actuator from the surrounding mount. Using simple elastic materials (e.g. rubber or soft silicone gel pads) as mechanical dampers between the piezo-mirror compound and the surrounding mount, a firm and stable mounting of a relatively large mirror (8 mm diameter) can be maintained that is isolated from external mechanical resonances, and is limited only by the internal piezo-mirror resonance of > 330 kHz. Our piezo lock showed positive servo gain up to 208 kHz, and a temporal response to a step interference within < 3 μs.
Adaptive identification of vessel's added moments of inertia with program motion
NASA Astrophysics Data System (ADS)
Alyshev, A. S.; Melnikov, V. G.
2018-05-01
In this paper, we propose a new experimental method for determining the moments of inertia of a ship model. The paper gives a brief review of existing methods, a description of the proposed method and the experimental stand, the test procedures and calculation formulas, and the experimental results. The proposed method is based on the energy approach with special program motions. The ship model is fixed in a special rack consisting of a torsion element and a set of additional servo drives with flywheels (reactive wheels), which correct the motion. The servo drives with an adaptive controller provide the symmetry of the motion, which is necessary for the proposed identification procedure. The effectiveness of the proposed approach is confirmed by experimental results.
Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A
2012-09-01
Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor
Delbruck, Tobi; Lang, Manuel
2013-01-01
Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g., 20 ms for a 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per-pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most "threatening" ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided. PMID:24311999
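A schematic Python sketch of the event-driven tracking-and-blocking idea: DVS events are assigned to the nearest ball cluster, each cluster's position and velocity are updated incrementally, and the arm target is the predicted x-position at the goal line of the fastest-approaching ball. The cluster radius, smoothing factor and threat rule are simplifications for illustration, not the published tracker.

    import numpy as np

    def update_clusters(events, clusters, radius=10.0, alpha=0.2):
        """events: iterable of (t, x, y); clusters: list of dicts with pos/vel/t."""
        for t, x, y in events:
            p = np.array([x, y], dtype=float)
            best = min(clusters, key=lambda c: np.linalg.norm(c["pos"] - p),
                       default=None)
            if best is not None and np.linalg.norm(best["pos"] - p) < radius:
                dt = max(t - best["t"], 1e-6)
                vel = (p - best["pos"]) / dt
                best["vel"] = (1 - alpha) * best["vel"] + alpha * vel
                best["pos"] = (1 - alpha) * best["pos"] + alpha * p
                best["t"] = t
            else:
                clusters.append({"pos": p, "vel": np.zeros(2), "t": t})
        return clusters

    def arm_target(clusters, goal_y):
        """Predicted x where the fastest ball moving toward the goal crosses it.

        Assumes increasing y points toward the goal line."""
        threats = [c for c in clusters if c["vel"][1] > 0]
        if not threats:
            return None
        c = max(threats, key=lambda c: c["vel"][1])
        t_hit = (goal_y - c["pos"][1]) / c["vel"][1]
        return c["pos"][0] + c["vel"][0] * t_hit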
Amini, Amin; Banitsas, Konstantinos; Young, William R
2018-05-23
Parkinson's is a neurodegenerative condition associated with several motor symptoms including tremors and slowness of movement. Freezing of gait (FOG), the sensation of one's feet being "glued" to the floor, is one of the most debilitating symptoms associated with advanced Parkinson's. FOG not only contributes to falls and related injuries, but also compromises quality of life as people often avoid engaging in functional daily activities both inside and outside the home. In the current study, we describe a novel system designed to detect FOG and falling in people with Parkinson's (PwP) as well as to monitor and improve their mobility using laser-based visual cues cast by an automated laser system. The system utilizes an RGB-D sensor based on Microsoft Kinect v2 and a laser casting system consisting of two servo motors and an Arduino microcontroller. The system was evaluated by 15 PwP with FOG. Here, we present details of the system along with a summary of feedback provided by PwP. Despite limitations regarding its outdoor use, feedback was very positive in terms of domestic usability and convenience, with 12/15 PwP showing interest in installing and using the system at their homes. Implications for Rehabilitation: Providing an automatic and remotely manageable monitoring system for PwP gait analysis and fall detection. Providing an automatic, unobtrusive and dynamic visual cue system for PwP based on laser line projection. Gathering feedback from PwP about the practical usage of the implemented system through focus group events.
Olseng, Margareth W; Olsen, Brita F; Hetland, Arild; Fagermoen, May S; Jacobsen, Morten
2017-05-01
The aim of this study was to investigate whether quality of life improved in chronic heart failure patients with Cheyne-Stokes respiration treated with adaptive servo-ventilation in a nurse-led heart failure clinic. Cheyne-Stokes respiration is associated with decreased quality of life in patients with chronic heart failure. Adaptive servo-ventilation has been introduced to treat this sleep-disordered breathing. Randomised, controlled design. Fifty-one patients (aged 53-84 years), New York Heart Association III-IV and/or left ventricular ejection fraction ≤40% and Cheyne-Stokes respiration were randomised to an intervention group who received adaptive servo-ventilation or a control group. The Minnesota Living with Heart Failure Questionnaire was used to assess quality of life at randomisation and after three months. Both groups were followed in the nurse-led heart failure clinic. Adaptive servo-ventilation improved quality-of-life scores both in a per-protocol analysis and in an intention-to-treat analysis. Twenty-one patients dropped out of the study, nine in the control and 12 in the intervention group. Use of adaptive servo-ventilation improved quality of life in chronic heart failure patients with Cheyne-Stokes respiration. However, the drop-out rate was high. Chronic heart failure patients come regularly to the nurse-led heart failure clinic. The heart failure nurses' competency has to include knowledge of the equipment in order to provide support and continuity of care to the patients. © 2016 John Wiley & Sons Ltd.
The stellar and solar tracking system of the Geneva Observatory gondola
NASA Technical Reports Server (NTRS)
Huguenin, D.
1974-01-01
Sun and star trackers have been added to the latest version of the Geneva Observatory gondola. They perform an image motion compensation with an accuracy of plus or minus 1 minute of arc. The structure is held in the vertical position by gravity; the azimuth is controlled by a torque motor in the suspension bearing using solar or geomagnetic references. The image motion compensation is performed by a flat mirror, located in front of the telescope, controlled by pitch and yaw servo-loops. Offset pointing is possible within the solar disc and in a 3 degree by 3 degree stellar field. A T.V. camera facilitates the star identification and acquisition.
Fuzzy self-learning control for magnetic servo system
NASA Technical Reports Server (NTRS)
Tarn, J. H.; Kuo, L. T.; Juang, K. Y.; Lin, C. E.
1994-01-01
It is known that an effective control system is the key condition for successful implementation of high-performance magnetic servo systems. Major issues in designing such control systems are nonlinearity; unmodeled dynamics, such as secondary effects of copper resistance, stray fields, and saturation; and disturbance rejection, since the load effect acts directly on the servo system without transmission elements. One typical approach to designing control systems under these conditions is a special type of nonlinear feedback called gain scheduling. It accommodates linear regulators whose parameters are changed as a function of operating conditions in a preprogrammed way. In this paper, an on-line learning fuzzy control strategy is proposed. To inherit the wealth of linear control design, the relations between linear feedback and fuzzy logic controllers have been established. The exercise of engineering axioms of linear control design is thus transformed into tuning of appropriate fuzzy parameters. Furthermore, fuzzy logic control brings the domain of candidate control laws from linear into nonlinear, and brings new prospects into the design of the local controllers. On the other hand, a self-learning scheme is utilized to automatically tune the fuzzy rule base. It is based on a network learning infrastructure; statistical approximation to assign credit; an animal learning method to update the reinforcement map with a fast learning rate; and a temporal difference predictive scheme to optimize the control laws. Different from supervised and statistical unsupervised learning schemes, the proposed method learns on-line from past experience and information from the process and forms a rule base of an FLC system from randomly assigned initial control rules.
Content dependent selection of image enhancement parameters for mobile displays
NASA Astrophysics Data System (ADS)
Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo
2011-01-01
Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) contents have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments are performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are determined based on the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.
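A sketch of the lookup-table idea in Python: a simple objective measure is computed per frame (mean gradient magnitude as a sharpness proxy, an assumption of this sketch), and the corresponding control-parameter value is interpolated from a predetermined table such as those derived from the visual experiments. The table entries below are invented placeholders, not values from the paper.

    import numpy as np

    # Hypothetical LUT: objective sharpness measure -> sharpening gain.
    SHARPNESS_LUT = {0.00: 1.8, 0.05: 1.4, 0.10: 1.1, 0.20: 1.0}

    def sharpness_measure(gray):
        """Mean gradient magnitude of a grayscale frame scaled to [0, 1]."""
        gy, gx = np.gradient(gray.astype(float) / 255.0)
        return float(np.mean(np.hypot(gx, gy)))

    def sharpening_gain(gray):
        """Interpolate the content-dependent control parameter from the LUT."""
        xs = np.array(sorted(SHARPNESS_LUT))
        ys = np.array([SHARPNESS_LUT[x] for x in xs])
        return float(np.interp(sharpness_measure(gray), xs, ys))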
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can only generate low-resolution visual percepts constituted of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. With this level of visual perception, prosthetic recipients can complete only simple visual tasks, while more complex tasks such as face identification or object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density array. Copyright © 2017 Elsevier B.V. All rights reserved.
Direct visualization of gastrointestinal tract with lanthanide-doped BaYbF5 upconversion nanoprobes.
Liu, Zhen; Ju, Enguo; Liu, Jianhua; Du, Yingda; Li, Zhengqiang; Yuan, Qinghai; Ren, Jinsong; Qu, Xiaogang
2013-10-01
Nanoparticulate contrast agents have attracted a great deal of attention along with the rapid development of modern medicine. Here, a binary contrast agent based on PAA-modified BaYbF5:Tm nanoparticles for direct visualization of the gastrointestinal (GI) tract has been designed and developed via a one-pot solvothermal route. By taking advantage of the excellent colloidal stability, low cytotoxicity, and negligible hemolysis of these well-designed nanoparticles, their feasibility as a multi-modal contrast agent for the GI tract was intensively investigated. Significant enhancement of contrast efficacy relative to a clinical barium meal and an iodine-based contrast agent was evaluated via X-ray imaging and CT imaging in vivo. By doping Tm(3+) ions into these nanoprobes, in vivo NIR-NIR imaging was then demonstrated. Unlike some invasive imaging modalities, a non-invasive imaging strategy including X-ray imaging, CT imaging, and UCL imaging of the GI tract could greatly reduce patient discomfort, effectively facilitate the imaging procedure, and economize diagnostic time. Critical to clinical applications, the long-term toxicity of our contrast agent was additionally investigated in detail, indicating its overall safety. Based on our results, PAA-BaYbF5:Tm nanoparticles are an excellent multi-modal contrast agent integrating X-ray imaging, CT imaging, and UCL imaging for direct visualization of the GI tract with low systemic toxicity. Copyright © 2013 Elsevier Ltd. All rights reserved.
Analysis of Hydraulic Servo Equations for WRDRF Prototype Control System : Volume I
DOT National Transportation Integrated Search
1971-10-01
A set of dynamic performance equations derived by Wylie Labs., Huntsville, Alabama, were independently rederived and checked. These equations describe the performance of the prototype electrohydraulic servo actuator system selected by Wylie as repre...
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.
2010-01-01
It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
Flight simulator with spaced visuals
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)
1980-01-01
A flight simulator arrangement wherein a conventional, movable-base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer, while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifts among the visual regions-such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast to published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
Image pattern recognition supporting interactive analysis and graphical visualization
NASA Technical Reports Server (NTRS)
Coggins, James M.
1992-01-01
Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image are of different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
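As a small illustration of how an objective metric can be evaluated globally versus inside a region of interest, the sketch below (Python, scikit-image) computes SSIM over a whole grayscale image and averages the SSIM map inside a ROI mask; the ROI-restricted averaging is an assumption for illustration, not the authors' evaluation protocol.

import numpy as np
from skimage.metrics import structural_similarity

def global_and_roi_ssim(reference, reconstructed, roi_mask):
    # Full-image SSIM plus the mean of the SSIM map restricted to the ROI
    # (both images are assumed to be 2-D grayscale arrays in [0, 255]).
    score, ssim_map = structural_similarity(reference, reconstructed,
                                            full=True, data_range=255)
    roi_score = float(ssim_map[roi_mask > 0].mean())
    return score, roi_score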
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations: one is based on a single-stage network over hand-crafted features and the other on a multistage network, which can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.
DOT National Transportation Integrated Search
2012-03-01
Continuous monitoring of subsurface ground movements is accomplished with in-place instruments utilizing automated data acquisition methods. These typically include TDR (Time Domain Reflectometry) or assemblies of several servo-accelerometer-based, e...
Learning visual balance from large-scale datasets of aesthetically highly rated images
NASA Astrophysics Data System (ADS)
Jahanian, Ali; Vishwanathan, S. V. N.; Allebach, Jan P.
2015-03-01
The concept of visual balance is innate for humans and influences how we perceive visual aesthetics and cognize harmony. Although visual balance is a vital principle of design and is taught in schools of design, it has barely been quantified. On the other hand, with the emergence of automatic and semi-automatic visual design for self-publishing, learning visual balance and modeling it computationally may elevate the aesthetics of such designs. In this paper, we present how the quest for understanding visual balance inspired us to revisit one of the well-known theories in visual arts, the so-called theory of "visual rightness" elucidated by Arnheim. We cast Arnheim's hypothesis as a design mining problem with the goal of learning visual balance from the work of professionals. We collected a dataset of 120K aesthetically highly rated images from a professional photography website. We then computed factors that contribute to visual balance based on the notion of visual saliency. We fitted a mixture of Gaussians to the saliency maps of the images and obtained the hotspots of the images. Our inferred Gaussians align with Arnheim's hotspots and confirm his theory. Moreover, the results support the viability of the center of mass, symmetry, as well as the Rule of Thirds in our dataset.
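A minimal sketch of the hotspot-fitting step, assuming the saliency maps are given as 2-D arrays: pixel coordinates are sampled in proportion to saliency and a Gaussian mixture is fitted to them (Python, scikit-learn). The number of components and the sampling scheme are illustrative assumptions, not the authors' settings.

import numpy as np
from sklearn.mixture import GaussianMixture

def saliency_hotspots(saliency_map, n_components=3, n_samples=20000, seed=0):
    # Sample pixel locations with probability proportional to saliency,
    # then fit a Gaussian mixture; the means act as hotspot estimates.
    h, w = saliency_map.shape
    p = saliency_map.ravel().astype(np.float64)
    p /= p.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(h * w, size=n_samples, p=p)
    ys, xs = np.unravel_index(idx, (h, w))
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(np.column_stack([xs, ys]))
    return gmm.means_, gmm.covariances_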
Lee, Kang-Hoon; Shin, Kyung-Seop; Lim, Debora; Kim, Woo-Chan; Chung, Byung Chang; Han, Gyu-Bum; Roh, Jeongkyu; Cho, Dong-Ho; Cho, Kiho
2015-07-01
The genomes of living organisms are populated with pleomorphic repetitive elements (REs) of varying densities. Our hypothesis that genomic RE landscapes are species/strain/individual-specific was implemented into the Genome Signature Imaging system to visualize and compute the RE-based signatures of any genome. Following the occurrence profiling of 5-nucleotide REs/words, the information from top-50 frequency words was transformed into a genome-specific signature and visualized as Genome Signature Images (GSIs), using a CMYK scheme. An algorithm for computing distances among GSIs was formulated using the GSIs' variables (word identity, frequency, and frequency order). The utility of the GSI-distance computation system was demonstrated with control genomes. GSI-based computation of genome-relatedness among 1766 microbes (117 archaea and 1649 bacteria) identified their clustering patterns; although the majority paralleled the established classification, some did not. The Genome Signature Imaging system, with its visualization and distance computation functions, enables genome-scale evolutionary studies involving numerous genomes with varying sizes. Copyright © 2015 Elsevier Inc. All rights reserved.
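A minimal sketch of the word-profiling step, assuming plain ACGT sequences: overlapping 5-nucleotide words are counted and the top-50 kept as a signature, with a simple L1 distance between signatures standing in for the paper's GSI-based distance (which additionally uses word identity, frequency, and frequency order in a CMYK image).

from collections import Counter

def top_word_signature(genome_seq, k=5, top_n=50):
    # Count overlapping k-nucleotide words, ignoring windows with ambiguity
    # codes, and keep the top_n most frequent words as the signature.
    seq = genome_seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1)
                     if set(seq[i:i + k]) <= set("ACGT"))
    return dict(counts.most_common(top_n))

def signature_distance(sig_a, sig_b):
    # Illustrative distance: L1 difference of relative word frequencies
    # over the union of the two top-word sets (not the paper's GSI metric).
    total_a, total_b = sum(sig_a.values()), sum(sig_b.values())
    words = set(sig_a) | set(sig_b)
    return sum(abs(sig_a.get(w, 0) / total_a - sig_b.get(w, 0) / total_b)
               for w in words)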
Physically-based in silico light sheet microscopy for visualizing fluorescent brain models
2015-01-01
Background We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods producing visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimen. Results We demonstrate first results of our visualization pipeline to a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model were quantitatively validated against the fluorescence brightness equation and characteristic emission spectra of different fluorescent dyes. AMS subject classification Modelling and simulation PMID:26329404
Denoising imaging polarimetry by adapted BM3D method.
Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R
2018-04-01
In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
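For context, the sketch below shows the classic single-band Perona-Malik anisotropic diffusion that the AS method builds on (Python, NumPy); it is not the authors' multiband, color-segmenting modification, and the iteration count, conductance function, and step size are illustrative.

import numpy as np

def perona_malik(image, n_iter=20, kappa=30.0, step=0.2):
    # Smooth uniform regions while preserving contrast edges; periodic
    # borders via np.roll are used here purely for brevity.
    img = image.astype(np.float64).copy()
    for _ in range(n_iter):
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # Exponential edge-stopping conductance for each direction.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        img += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return img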
NASA Astrophysics Data System (ADS)
Kleinmann, Johanna; Wueller, Dietmar
2007-01-01
Since the signal-to-noise measuring method as standardized in the normative part of ISO 15739:2002(E) does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated which may be appropriate for quantifying noise perception in a physiological manner: the model of visual noise measurement proposed by Hung et al. (as described in the informative annex of ISO 15739:2002), which tries to simulate the process of human vision by using the opponent space and contrast sensitivity functions and uses the CIE L*u*v* 1976 colour space for the determination of a so-called visual noise value; and the S-CIELab model and CIEDE2000 colour difference proposed by Fairchild et al., which simulates human vision in approximately the same way as Hung et al. but applies an image comparison afterwards based on CIEDE2000. With a psychophysical experiment based on the just noticeable difference (JND), threshold images were defined, with which the two approaches mentioned above were tested. The assumption is that if a method is valid, the different threshold images should receive the same 'noise value'. The visual noise measurement model results in similar visual noise values for all the threshold images; the method is therefore reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can also be used on images with spatial content. The S-CIELab model also results in similar colour difference values for the set of threshold images, but with some limitations: for images which contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
Applications of magnetic resonance image segmentation in neurology
NASA Astrophysics Data System (ADS)
Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu
1999-05-01
After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, hence making segmentation essential in modern image analysis. In this research project several PC-based software packages were developed in order to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.
A method for real-time visual stimulus selection in the study of cortical object perception.
Leeds, Daniel D; Tarr, Michael J
2016-06-01
The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm(3) brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; and 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for the continuing study of localized neural selectivity, both for visual object representation and beyond. Copyright © 2016 Elsevier Inc. All rights reserved.
Active stabilization of a rapidly chirped laser by an optoelectronic digital servo-loop control.
Gorju, G; Jucha, A; Jain, A; Crozatier, V; Lorgeré, I; Le Gouët, J-L; Bretenaker, F; Colice, M
2007-03-01
We propose and demonstrate a novel active stabilization scheme for wide and fast frequency chirps. The system measures the laser instantaneous frequency deviation from a perfectly linear chirp, thanks to a digital phase detection process, and provides an error signal that is used to servo-loop control the chirped laser. This way, the frequency errors affecting a laser scan over 10 GHz on the millisecond timescale are drastically reduced below 100 kHz. This active optoelectronic digital servo-loop control opens new and interesting perspectives in fields where rapidly chirped lasers are crucial.
Niimi, Yoshinari; Murata, Seiichiro; Mitou, Yumi; Ohno, Yusuke
2018-03-01
We developed a novel open cardiopulmonary bypass (CPB) system, a drainage flow servo-controlled CPB system (DS-CPB), in which rotational speed of the main roller pump is servo-controlled to generate the same amount of flow as the systemic venous drainage. It was designed to safely decrease the priming volume while maintaining a constant reservoir level, even during fluctuations of the drainage flow. We report a successful use of a novel DS-CPB system in an elderly Jehovah's Witness patient with dehydration who underwent mitral valve replacement.
Velocity servo for continuous scan Fourier interference spectrometer
NASA Technical Reports Server (NTRS)
Schindler, R. A. (Inventor)
1980-01-01
A velocity servo for continuous scan Fourier interference spectrometer of the double pass retroreflector type having two cat's eye retroreflectors is described. The servo uses an open loop, lead screw drive system for one retroreflector with compensation for any variations in speed of drive of the lead screw provided by sensing any variation in the rate of reference laser fringes, and producing an error signal from such variation used to compensate by energizing a moving coil actuator for the other retroreflector optical path, and energizing (through a highpass filter) piezoelectric actuators for the secondary mirrors of the retroreflectors.
NASA Astrophysics Data System (ADS)
Ma, Zhichao; Hu, Leilei; Zhao, Hongwei; Wu, Boda; Peng, Zhenxing; Zhou, Xiaoqin; Zhang, Hongguo; Zhu, Shuai; Xing, Lifeng; Hu, Huang
2010-08-01
Theories and techniques for improving machining accuracy via position control of the diamond tool tip and raising the resolution of cutting depth on precision CNC lathes have received considerable attention. A new piezo-driven ultra-precision machine tool servo system is designed and tested to improve the manufacturing accuracy of the workpiece. The mathematical model of the machine tool servo system is established and finite element analysis is carried out on the parallel-plate flexure hinges. The output position of the diamond tool tip driven by the machine tool servo system is measured via a contact capacitive displacement sensor. Proportional-integral-derivative (PID) feedback is also implemented to accommodate and compensate for dynamic changes owing to cutting forces as well as the inherent non-linearity of the piezoelectric stack during the cutting process. With the closed-loop feedback control strategy, the tracking error is limited to 0.8 μm. Experimental results have shown that the proposed machine tool servo system can provide a tool positioning resolution of 12 nm, which is far finer than the inherent CNC resolution. A stepped shaft of an aluminum specimen with a step increment of cutting depth of 1 μm was machined, and the obtained contour illustrates that the displacement command output from the controller is accurately reflected in real time on the machined part.
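The abstract reports PID feedback around the piezo-driven tool tip; a generic discrete PID loop of the kind described is sketched below in Python. The class name, units, and gains are assumptions, not the tuned controller of the paper.

class PID:
    # Minimal discrete PID controller for tool-tip position tracking.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint_um, measured_um):
        # Error between commanded and sensed tool-tip position (micrometres).
        error = setpoint_um - measured_um
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The returned value would drive the piezo stack amplifier.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

In use, such a loop would read the capacitive displacement sensor once per control period and apply the returned command to the piezo amplifier.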
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
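A rough sketch of the per-frame flow step, assuming OpenCV, a calibrated downward-pointing camera, and a known pixels-per-metre scale: Shi-Tomasi features are tracked with pyramidal Lucas-Kanade flow and the median displacement is taken as the frame-to-frame translation. The sensor-fusion stage with MEMS inertial sensors is not shown.

import cv2
import numpy as np

def frame_translation(prev_gray, curr_gray, px_per_meter):
    # Detect features in the previous frame and track them into the
    # current frame; both inputs are 8-bit grayscale images.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_old = pts[status.ravel() == 1].reshape(-1, 2)
    good_new = nxt[status.ravel() == 1].reshape(-1, 2)
    if len(good_new) == 0:
        return np.zeros(2)
    # Median flow is robust to a few bad tracks; convert pixels to metres.
    return np.median(good_new - good_old, axis=0) / px_per_meter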
Serial grouping of 2D-image regions with object-based attention in humans
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-01-01
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188
Real-time distortion correction for visual inspection systems based on FPGA
NASA Astrophysics Data System (ADS)
Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin
2008-03-01
Visual inspection is a new technology based on research in computer vision, which focuses on the measurement of an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection and distortion correction, can be completed in a programmable device (FPGA). As a wide-field-angle lens is adopted in the system, the output images have serious distortion. Limited by the calculating speed of the computer, software can only correct the distortion of static images, not of dynamic images. To meet the real-time requirement, we design a distortion correction system based on FPGA. In this hardware distortion correction method, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which data are read out to correct the gray levels. The major benefit of using FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without modification.
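A software analogue of the hardware look-up-table idea is sketched below, assuming a single-coefficient radial distortion model (an assumption; the actual lens model is not given in the abstract): the per-pixel source coordinates are computed once offline, and each frame is then corrected by a pure table lookup (Python with OpenCV).

import cv2
import numpy as np

def build_undistort_lut(h, w, k1):
    # Precompute, once, where each corrected pixel should sample from
    # under a simple radial model x_d = x * (1 + k1 * r^2).
    ys, xs = np.indices((h, w), dtype=np.float32)
    cx, cy = w / 2.0, h / 2.0
    x, y = (xs - cx) / cx, (ys - cy) / cy
    r2 = x * x + y * y
    map_x = (x * (1 + k1 * r2)) * cx + cx
    map_y = (y * (1 + k1 * r2)) * cy + cy
    return map_x.astype(np.float32), map_y.astype(np.float32)

def correct_frame(frame, map_x, map_y):
    # Per-frame work is only a table lookup, mirroring the FPGA LUT.
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)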
Region of interest extraction based on multiscale visual saliency analysis for remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan
2015-01-01
Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually based on prior knowledge and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: a visual attention mechanism with a difference-of-Gaussians template is used to extract the intensity feature; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that addresses the different contributions of each feature map by calculating the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
A novel false color mapping model-based fusion method of visual and infrared images
NASA Astrophysics Data System (ADS)
Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu
2013-12-01
A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after the color mapping operation, differences between the infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. A novel nonlinear color mapping model is then given by introducing the geometric average of the gray levels of the input visual and infrared images and a weighted average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image and enhances color contrast while highlighting bright infrared objects, compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to implement for real-time processing, making it well suited to nighttime imaging apparatus.
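The exact nonlinear mapping and its boundary conditions are not given in the abstract; purely as an illustration of the kind of model described, the sketch below combines the geometric average of the visual and infrared gray levels with weighted averages into RGB channels. The channel assignment and the weight alpha are assumptions, not the authors' mapping.

import numpy as np

def fuse_false_color(vis, ir, alpha=0.6):
    # Toy visible/IR false-color fusion; inputs are 8-bit grayscale images.
    vis = vis.astype(np.float64) / 255.0
    ir = ir.astype(np.float64) / 255.0
    gm = np.sqrt(vis * ir)                      # geometric average term
    r = alpha * ir + (1 - alpha) * gm           # emphasise hot objects
    g = alpha * vis + (1 - alpha) * gm          # keep scene context
    b = gm
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(rgb * 255, 0, 255).astype(np.uint8)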
Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, S.T.C.
The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.
Cognitive approaches for patterns analysis and security applications
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Ogiela, Lidia
2017-08-01
In this paper new opportunities will be presented for developing innovative solutions for semantic pattern classification and visual cryptography based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow such meaning to be incorporated into the classification task or the encryption process. They also allow crypto-biometric solutions to be used to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems for semantic analysis of different patterns will be presented, and a novel application of such systems for visual secret sharing will be described. Visual shares for divided information can be created using a threshold procedure, which may depend on personal abilities to recognize image details visible in the divided images.
A new computerized moving stage for optical microscopes
NASA Astrophysics Data System (ADS)
Hatiboglu, Can Ulas; Akin, Serhat
2004-06-01
Measurements of microscope stage movements in the x and y directions are important for some stereological methods. Traditionally, the length of stage movements is measured with differing precision and accuracy using a suitable motorized stage, a microscope and software. Such equipment is generally expensive and not readily available in many laboratories. Another challenging problem is adaptability: much of the available equipment cannot be used with an arbitrary light microscope. This paper describes a simple and inexpensive programmable moving stage that can be used with microscopes already on the market. The movements of the stage are controlled by two servo motors and a controller chip via Java-based image processing software. With the developed motorized stage and a microscope equipped with a CCD camera, the software allows complete coverage of the specimens with minimum overlap, eliminating the optical strain associated with counting hundreds of images through an eyepiece, in a quick and precise fashion. The uses and the accuracy of the developed stage are demonstrated using thin sections obtained from a limestone core plug.
Mobile Visual Search Based on Histogram Matching and Zone Weight Learning
NASA Astrophysics Data System (ADS)
Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong
2018-01-01
In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated from the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging tf-idf weighted histogram matching with the weighting strategy of compact descriptors for visual search (CDVS). Finally, the global descriptor matching score and the local descriptor similarity score are summed to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
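As a simplified stand-in for the tf-idf weighted histogram matching described (the CDVS-specific weighting and the zone-weight reranking are omitted), the sketch below scores a database image against a query by idf-weighted histogram intersection over a short visual codebook; the exact scoring form is an assumption.

import numpy as np

def tfidf_histogram_score(query_hist, db_hist, doc_freq, n_images):
    # Inverse document frequency of each visual word over the database.
    idf = np.log((n_images + 1.0) / (doc_freq + 1.0))
    # Term frequencies of the query and the database image.
    q = query_hist / max(query_hist.sum(), 1)
    d = db_hist / max(db_hist.sum(), 1)
    # Weighted histogram intersection as the local similarity score.
    return float(np.sum(idf * np.minimum(q, d)))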
Evaluating the Performance of the NASA LaRC CMF Motion Base Safety Devices
NASA Technical Reports Server (NTRS)
Gupton, Lawrence E.; Bryant, Richard B., Jr.; Carrelli, David J.
2006-01-01
This paper describes the initial measured performance results of the previously documented NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base hardware safety devices. These safety systems are required to prevent excessive accelerations that could injure personnel and damage simulator cockpits or the motion base structure. Excessive accelerations may be caused by erroneous commands or hardware failures driving an actuator to the end of its travel at high velocity, stepping a servo valve, or instantly reversing servo direction. Such commands may result from single order failures of electrical or hydraulic components within the control system itself, or from aggressive or improper cueing commands from the host simulation computer. The safety systems must mitigate these high acceleration events while minimizing the negative performance impacts. The system accomplishes this by controlling the rate of change of valve signals to limit excessive commanded accelerations. It also aids hydraulic cushion performance by limiting valve command authority as the actuator approaches its end of travel. The design takes advantage of inherent motion base hydraulic characteristics to implement all safety features using hardware only solutions.
A visual grading study for different administered activity levels in bone scintigraphy.
Gustafsson, Agnetha; Karlsson, Henrik; Nilsson, Kerstin A; Geijer, Håkan; Olsson, Anna
2015-05-01
The aim of the study is to assess administered activity levels versus visually based image quality using visual grading regression (VGR), including an assessment of the newly stated image criteria for whole-body bone scintigraphy. A total of 90 patients were included and grouped into three levels of administered activity: 400, 500 and 600 MBq. Six clinical image criteria regarding image quality were formulated by experienced nuclear medicine physicians. Visual grading was performed on all images, where three physicians rated the fulfilment of the image criteria on a four-step ordinal scale. The results were analysed using VGR. A count analysis was also made in which the total number of counts in both views was registered. The administered activity of 600 MBq gives significantly better image quality than 400 MBq in five of six criteria (P<0·05). Comparing the administered activity of 600 MBq with 500 MBq, four of six criteria show significantly better image quality (P<0·05). The administered activity of 500 MBq gives no significantly better image quality than 400 MBq (P<0·05). The count analysis shows that none of the three levels of administered activity fulfils the recommendations of the EANM. There was a significant improvement in perceived image quality using an activity level of 600 MBq compared with lower activity levels in whole-body bone scintigraphy for the gamma camera equipment and set-up used in this study. This type of visually based grading study seems to be a valuable tool and easy to implement in the clinical environment. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Cross-Modal Retrieval With CNN Visual Features: A New Baseline.
Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng
2017-02-01
Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from the CNN model, which is pretrained on ImageNet with more than one million images from 1000 object categories, as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of CNN visual features, based on the pretrained CNN model on ImageNet, a fine-tuning step is performed by using the open source Caffe CNN library for each target data set. Besides, we propose a deep semantic matching method to address the cross-modal retrieval problem with respect to samples which are annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets well demonstrate the superiority of CNN visual features for cross-modal retrieval.
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, that otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.
Hoffmann, M B; Kaule, F; Grzeschik, R; Behrens-Baumann, W; Wolynski, B
2011-07-01
Since its initial introduction in the mid-1990s, retinotopic mapping of the human visual cortex, based on functional magnetic resonance imaging (fMRI), has contributed greatly to our understanding of the human visual system. Multiple cortical visual field representations have been demonstrated and thus numerous visual areas identified. The organisation of specific areas has been detailed and the impact of pathophysiologies of the visual system on cortical organisation uncovered. These results are based on investigations at a magnetic field strength of 3 Tesla or less. In a field-strength comparison between 3 and 7 Tesla, it was demonstrated that retinotopic mapping benefits from a magnetic field strength of 7 Tesla. Specifically, the visual areas can be mapped with high spatial resolution for a detailed analysis of the visual field maps. Applications of fMRI-based retinotopic mapping in ophthalmological research hold promise for furthering our understanding of plasticity in the human visual cortex. This is highlighted by pioneering studies in patients with macular dysfunction or misrouted optic nerves. © Georg Thieme Verlag KG Stuttgart · New York.
Methods of and system for swing damping movement of suspended objects
Jones, J.F.; Petterson, B.J.; Strip, D.R.
1991-03-05
A payload suspended from a gantry is swing damped in accordance with a control algorithm based on the periodic motion of the suspended mass or by servoing on the forces induced by the suspended mass. 13 figures.
Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco
2015-10-15
Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of the different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves good fusion results and is more efficient than other conventional image fusion methods.
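A much-simplified sketch of the decomposition-and-selection idea, assuming PyWavelets and SciPy are available: a one-level stationary wavelet transform is applied to both sources, detail bands are fused by a local-spatial-frequency "choose-max" rule, and approximation bands are averaged. The paper's DCT stage and its exact LSF definition are omitted; this is not the authors' implementation.

import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_spatial_frequency(band, size=7):
    # Local RMS of row and column first differences as a rough LSF measure.
    rf = np.zeros_like(band)
    cf = np.zeros_like(band)
    rf[:, 1:] = np.diff(band, axis=1)
    cf[1:, :] = np.diff(band, axis=0)
    return np.sqrt(uniform_filter(rf ** 2 + cf ** 2, size=size))

def fuse_swt_lsf(vis, ir, wavelet="db2"):
    # Image sides must be even for a one-level stationary wavelet transform.
    (cA1, d1), = pywt.swt2(vis.astype(np.float64), wavelet, level=1)
    (cA2, d2), = pywt.swt2(ir.astype(np.float64), wavelet, level=1)
    cA = 0.5 * (cA1 + cA2)                      # average the approximations
    fused_details = []
    for b1, b2 in zip(d1, d2):
        # Keep, per pixel, the detail coefficient from the source with the
        # larger local spatial frequency.
        mask = local_spatial_frequency(b1) >= local_spatial_frequency(b2)
        fused_details.append(np.where(mask, b1, b2))
    return pywt.iswt2([(cA, tuple(fused_details))], wavelet)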
Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.
Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta
2014-07-01
We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field, and its potential benefits to large-scale geospatial visualization in general.
A ganglion-cell-based primary image representation method and its contribution to object recognition
NASA Astrophysics Data System (ADS)
Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song
2016-10-01
A visual stimulus is represented by the biological visual system at several levels, from low to high: photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells and visual cortical neurons. Retinal GCs at the early level need to represent the raw data only once, yet must serve a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings attribute this universal adaptability to the GCs' receptive field (RF) mechanisms. For the purpose of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be upgraded remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
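For orientation only, the sketch below computes the classical center-surround (difference-of-Gaussians) response that underlies GC receptive fields (Python, SciPy); the paper's non-classical RF model is richer, so the sigmas and the ON/OFF split here are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_response(image, sigma_center=1.0, surround_ratio=3.0):
    # Difference of a narrow (center) and a wide (surround) Gaussian blur.
    img = image.astype(np.float64)
    center = gaussian_filter(img, sigma_center)
    surround = gaussian_filter(img, sigma_center * surround_ratio)
    on_response = np.maximum(center - surround, 0)    # ON-center cells
    off_response = np.maximum(surround - center, 0)   # OFF-center cells
    return on_response, off_response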
Data augmentation-assisted deep learning of hand-drawn partially colored sketches for visual search
Muhammad, Khan; Baik, Sung Wook
2017-01-01
In recent years, image databases have been growing at exponential rates, making their management, indexing, and retrieval very challenging. Typical image retrieval systems rely on sample images as queries. However, in the absence of sample query images, hand-drawn sketches are also used. The recent adoption of touch screen input devices makes it very convenient to quickly draw shaded sketches of objects to be used for querying image databases. This paper presents a mechanism to provide access to visual information based on users' hand-drawn, partially colored sketches using touch screen devices. A key challenge for sketch-based image retrieval systems is to cope with the inherent ambiguity of sketches due to the lack of colors, textures, shading, and drawing imperfections. To cope with these issues, we propose to fine-tune a deep convolutional neural network (CNN) using an augmented dataset to extract features from partially colored hand-drawn sketches for query specification in a sketch-based image retrieval framework. The large augmented dataset contains natural images, edge maps, hand-drawn sketches, de-colorized, and de-texturized images, which allows the CNN to effectively model visual contents presented to it in a variety of forms. The deep features extracted from the CNN allow retrieval of images using both sketches and full-color images as queries. We also evaluated the role of partial coloring or shading in sketches in improving retrieval performance. The proposed method is tested on two large datasets for sketch recognition and sketch-based image retrieval and achieved better classification and retrieval performance than many existing methods. PMID:28859140
Chen, Yang; Ren, Xiaofeng; Zhang, Guo-Qiang; Xu, Rong
2013-01-01
Visual information is a crucial aspect of medical knowledge. Building a comprehensive medical image base, in the spirit of the Unified Medical Language System (UMLS), would greatly benefit patient education and self-care. However, collection and annotation of such a large-scale image base is challenging. Our objective was to combine visual object detection techniques with a medical ontology to automatically mine web photos and retrieve a large number of disease manifestation images with minimal manual labeling effort. As a proof of concept, we first learnt five organ detectors on three detection scales for eyes, ears, lips, hands, and feet. Given a disease, we used information from the UMLS to select affected body parts, ran the pretrained organ detectors on web images, and combined the detection outputs to retrieve disease images. Compared with a supervised image retrieval approach that requires training images for every disease, our ontology-guided approach exploits shared visual information of body parts across diseases. In retrieving 2220 web images of 32 diseases, we reduced manual labeling effort to 15.6% while improving the average precision by 3.9% from 77.7% to 81.6%. For 40.6% of the diseases, we improved the precision by 10%. The results confirm the concept that the web is a feasible source for automatic disease image retrieval for health image database construction. Our approach requires only a small amount of manual effort to collect complex disease images and to annotate them with standard medical ontology terms.
Location-Driven Image Retrieval for Images Collected by a Mobile Robot
NASA Astrophysics Data System (ADS)
Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji
Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well the images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of this visualization, an image retrieval system over such a robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that many relevant images exist owing to the variety of viewing conditions. The main contribution of this paper is to propose an efficient retrieval approach, named the location-driven approach, which utilizes the correlation between visual features and the real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this purpose.
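The feature-location pairing can be sketched as follows, assuming a concatenated descriptor fed to an SVM and an uncertainty-based query step standing in for the paper's extended active-learning scheme; the feature dimensions and relevance labels are toy placeholders.

```python
# Toy sketch of relevance classification over feature-location pairs.
# Feature dimensions, locations and labels are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
visual_feats = rng.normal(size=(200, 64))          # e.g. colour/texture descriptors
locations = rng.uniform(0, 10, size=(200, 2))      # (x, y) where each image was taken
labels = (locations[:, 0] > 5).astype(int)         # toy relevance labels

X = np.hstack([visual_feats, locations])           # feature-location pair
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, labels)

# Active-learning-style query: show the user the most uncertain pairs next.
proba = clf.predict_proba(X)[:, 1]
query_idx = np.argsort(np.abs(proba - 0.5))[:5]
```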
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Background Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. Methods A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Results Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. Conclusions The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures. PMID:28046076
NASA Astrophysics Data System (ADS)
Yao, Xiuya; Chaganti, Shikha; Nabar, Kunal P.; Nelson, Katrina; Plassard, Andrew; Harrigan, Rob L.; Mawn, Louise A.; Landman, Bennett A.
2017-02-01
Eye diseases and visual impairment affect millions of Americans and induce billions of dollars in annual economic burdens. Expounding upon existing knowledge of eye diseases could lead to improved treatment and disease prevention. This research investigated the relationship between structural metrics of the eye orbit and visual function measurements in a cohort of 470 patients from a retrospective study of ophthalmology records for patients (with thyroid eye disease, orbital inflammation, optic nerve edema, glaucoma, intrinsic optic nerve disease), clinical imaging, and visual function assessments. Orbital magnetic resonance imaging (MRI) and computed tomography (CT) images were retrieved and labeled in 3D using multi-atlas label fusion. Based on the 3D structures, both traditional radiology measures (e.g., Barrett index, volumetric crowding index, optic nerve length) and novel volumetric metrics were computed. Using stepwise regression, the associations between structural metrics and visual field scores (visual acuity, functional acuity, visual field, functional field, and functional vision) were assessed. Across all models, the explained variance was reasonable (R2 0.1-0.2) but highly significant (p < 0.001). Instead of analyzing a specific pathology, this study aimed to analyze data across a variety of pathologies. This approach yielded a general model for the connection between orbital structural imaging biomarkers and visual function.
New method to improve dynamic stiffness of electro-hydraulic servo systems
NASA Astrophysics Data System (ADS)
Bai, Yanhong; Quan, Long
2013-09-01
Most current research on improving stiffness focuses on the application of control theories. However, the controller in a closed-loop hydraulic control system takes effect only after the controlled position has deviated, so the control action lags. Thus, dynamic performance against force disturbances and dynamic load stiffness cannot be noticeably improved by advanced control algorithms alone. In this paper, the elementary principle of keeping the piston position unchanged under a sudden external load change by charging additional oil is analyzed. On this basis, the concept of raising the dynamic stiffness of an electro-hydraulic position servo system by flow feedforward compensation is put forward, and a scheme using double servo valves to realize flow feedforward compensation is presented, in which a second fast-response servo valve is added to the regular electro-hydraulic servo system and used specifically to compensate, in time, the oil volume compressed by the load impact. The two valves are arranged in parallel to control the cylinder jointly. Furthermore, the model of flow compensation is derived, by which the product of the amplitude and width of the valve's pulse command signal can be calculated, and determination rules for the amplitude and width of the pulse signal are obtained by analysis and simulations. Using the proposed scheme, simulations and experiments at different positions with different force changes are conducted. The simulation and experimental results show that the system dynamic performance against load force impact is largely improved, with decreased maximal dynamic position deviation and shortened settling time. That is, the system's dynamic load stiffness is evidently raised. This paper thus proposes a new method which can effectively improve the dynamic stiffness of electro-hydraulic servo systems.
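The flow-compensation idea reduces to a small volume balance: the pressure step caused by the load compresses the trapped oil, and the auxiliary valve's pulse must inject that same volume. A back-of-the-envelope sketch follows, with all symbols (piston area A, trapped volume V0, bulk modulus beta_e, valve flow gain Kv) and numbers chosen illustratively rather than taken from the paper.

```python
# Volume-balance sketch of the flow feedforward compensation. All symbols and
# numbers are illustrative assumptions, not the paper's parameters.
def compensation_pulse(delta_F, A, V0, beta_e, Kv):
    """Return the required (amplitude x width) product of the valve pulse."""
    delta_p = delta_F / A            # pressure step caused by the load change [Pa]
    delta_V = delta_p * V0 / beta_e  # oil volume compressed by that step [m^3]
    return delta_V / Kv              # u * t such that Kv * u * t == delta_V

# Example: 10 kN load step, 2e-3 m^2 piston, 0.5 L trapped oil, beta_e = 7e8 Pa.
print(compensation_pulse(1e4, A=2e-3, V0=0.5e-3, beta_e=7e8, Kv=1e-4))
```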
Blind image quality assessment via probabilistic latent semantic analysis.
Yang, Xichen; Sun, Quansen; Wang, Tianshu
2016-01-01
We propose a blind image quality assessment that is highly unsupervised and training free. The new method is based on the hypothesis that the effect caused by distortion can be expressed by certain latent characteristics. Combined with probabilistic latent semantic analysis, the latent characteristics can be discovered by applying a topic model over a visual word dictionary. Four distortion-affected features are extracted to form the visual words in the dictionary: (1) the block-based local histogram; (2) the block-based local mean value; (3) the mean value of contrast within a block; (4) the variance of contrast within a block. Based on the dictionary, the latent topics in the images can be discovered. The discrepancy between the frequency of the topics in an unfamiliar image and a large number of pristine images is applied to measure the image quality. Experimental results for four open databases show that the newly proposed method correlates well with human subjective judgments of diversely distorted images.
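The four block-level features listed above can be computed as in the following sketch (8x8 blocks and 8-bit grey levels are assumptions; the pLSA/topic-model step over the resulting visual words is not reproduced here).

```python
# Block-level features named above, computed on 8x8 blocks of an 8-bit
# grey-level image (both choices are assumptions).
import numpy as np

def block_features(gray, b=8, bins=8):
    feats = []
    h, w = gray.shape
    for i in range(0, h - b + 1, b):
        for j in range(0, w - b + 1, b):
            blk = gray[i:i + b, j:j + b].astype(float)
            hist, _ = np.histogram(blk, bins=bins, range=(0, 255), density=True)
            mean = blk.mean()                       # block-based local mean
            contrast = np.abs(blk - mean)           # simple per-pixel contrast proxy
            feats.append(np.concatenate([hist, [mean, contrast.mean(), contrast.var()]]))
    return np.asarray(feats)

# Clustering these vectors (e.g. with k-means) gives the visual-word dictionary
# over which the topic model is fitted.
```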
Image Analysis Based on Soft Computing and Applied on Space Shuttle During the Liftoff Process
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve J.
2007-01-01
Imaging techniques based on Soft Computing (SC) and developed at Kennedy Space Center (KSC) have been implemented on a variety of prototype applications related to the safe operation of the Space Shuttle during the liftoff process. These SC-based prototype applications include detection and tracking of moving Foreign Object Debris (FOD) during the Space Shuttle liftoff, visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad, and visual detection of distant birds approaching the Space Shuttle launch pad. This SC-based image analysis capability developed at KSC was also used to analyze images acquired during the accident of the Space Shuttle Columbia and to estimate the trajectory and velocity of the foam that caused the accident.
Method of simulation and visualization of FDG metabolism based on VHP image
NASA Astrophysics Data System (ADS)
Cui, Yunfeng; Bai, Jing
2005-04-01
FDG ([18F] 2-fluoro-2-deoxy-D-glucose) is the typical tracer used in clinical PET (positron emission tomography) studies. FDG-PET is an important imaging tool for early diagnosis and treatment of malignant tumors and functional diseases. The main purpose of this work is to propose a method that represents FDG metabolism in the human body through dynamic simulation and visualization of the 18F distribution process, based on the segmented VHP (Visible Human Project) image dataset. First, the plasma time-activity curve (PTAC) and the tissue time-activity curves (TTACs) are obtained from previous studies and the literature. According to the obtained PTAC and TTACs, a set of corresponding values is assigned to the segmented VHP image. Thus, a set of dynamic images is derived to show the 18F distribution in the concerned tissues for the predetermined sampling schedule. Finally, the simulated FDG distribution images are visualized in 3D and 2D formats, respectively, incorporating principal interaction functions. Compared with the original PET image, our visualization result presents higher resolution, because of the high resolution of the VHP image data, and shows the distribution process of 18F dynamically. The results of this work can be used in education and related research, as well as a tool for PET operators to design their PET experiment programs.
Vergara, Gaston R; Vijayakumar, Sathya; Kholmovski, Eugene G; Blauer, Joshua J E; Guttman, Mike A; Gloschat, Christopher; Payne, Gene; Vij, Kamal; Akoum, Nazem W; Daccarett, Marcos; McGann, Christopher J; Macleod, Rob S; Marrouche, Nassir F
2011-02-01
Magnetic resonance imaging (MRI) allows visualization of the location and extent of radiofrequency (RF) ablation lesions, myocardial scar formation, and real-time (RT) assessment of lesion formation. In this study, we report a novel 3-Tesla RT MRI-based porcine RF ablation model and visualization of lesion formation in the atrium during RF energy delivery. The purpose of this study was to develop a 3-Tesla RT MRI-based catheter ablation and lesion visualization system. RF energy was delivered to six pigs under RT MRI guidance. A novel MRI-compatible mapping and ablation catheter was used. Under RT MRI, this catheter was safely guided and positioned within either the left or right atrium. Unipolar and bipolar electrograms were recorded. The catheter tip-tissue interface was visualized with a T1-weighted gradient echo sequence. RF energy was then delivered in a power-controlled fashion. Myocardial changes and lesion formation were visualized with a T2-weighted (T2W) half Fourier acquisition with single-shot turbo spin echo (HASTE) sequence during ablation. RT visualization of lesion formation was achieved in 30% of the ablations performed. In the other cases, either the lesion was formed outside the imaged region (25%) or the lesion was not created (45%), presumably due to poor tissue-catheter tip contact. The presence of lesions was confirmed by late gadolinium enhancement MRI and macroscopic tissue examination. MRI-compatible catheters can be navigated and RF energy safely delivered under 3-Tesla RT MRI guidance. Recording electrograms during RT imaging is also feasible. RT visualization of a lesion as it forms during RF energy delivery is possible and was demonstrated using T2W HASTE imaging. Copyright © 2011 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Modeling Image Patches with a Generic Dictionary of Mini-Epitomes
Papandreou, George; Chen, Liang-Chieh; Yuille, Alan L.
2015-01-01
The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. Key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion on a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an “epitomic footprint” encoding which is easy to visualize and highlights the generative nature of our model. We discuss in detail computational aspects and develop efficient algorithms to make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark. PMID:26321859
Visual System Involvement in Patients with Newly Diagnosed Parkinson Disease.
Arrigo, Alessandro; Calamuneri, Alessandro; Milardi, Demetrio; Mormina, Enricomaria; Rania, Laura; Postorino, Elisa; Marino, Silvia; Di Lorenzo, Giuseppe; Anastasi, Giuseppe Pio; Ghilardi, Maria Felice; Aragona, Pasquale; Quartarone, Angelo; Gaeta, Michele
2017-12-01
Purpose To assess intracranial visual system changes of newly diagnosed Parkinson disease in drug-naïve patients. Materials and Methods Twenty patients with newly diagnosed Parkinson disease and 20 age-matched control subjects were recruited. Magnetic resonance (MR) imaging (T1-weighted and diffusion-weighted imaging) was performed with a 3-T MR imager. White matter changes were assessed by exploring a white matter diffusion profile by means of diffusion-tensor imaging-based parameters and constrained spherical deconvolution-based connectivity analysis and by means of white matter voxel-based morphometry (VBM). Alterations in occipital gray matter were investigated by means of gray matter VBM. Morphologic analysis of the optic chiasm was based on manual measurement of regions of interest. Statistical testing included analysis of variance, t tests, and permutation tests. Results In the patients with Parkinson disease, significant alterations were found in optic radiation connectivity distribution, with decreased lateral geniculate nucleus V2 density (F, -8.28; P < .05), a significant increase in optic radiation mean diffusivity (F, 7.5; P = .014), and a significant reduction in white matter concentration. VBM analysis also showed a significant reduction in visual cortical volumes (P < .05). Moreover, the chiasmatic area and volume were significantly reduced (P < .05). Conclusion The findings show that visual system alterations can be detected in early stages of Parkinson disease and that the entire intracranial visual system can be involved. © RSNA, 2017 Online supplemental material is available for this article.
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, the computer vision approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has had no effective solution within either biologically inspired or conventional frameworks. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for processing real-world images is the search for new low-level image processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity.
Napoletano, Paolo; Piccoli, Flavio; Schettini, Raimondo
2018-01-12
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
Provision of servo-controlled cooling during neonatal transport.
Johnston, Ewen D; Becher, Julie-Clare; Mitchell, Anne P; Stenson, Benjamin J
2012-09-01
Therapeutic hypothermia is a time critical intervention for infants who have experienced a hypoxic-ischaemic event. Previously reported methods of cooling during transport do not demonstrate the same stability achieved in the neonatal unit. The authors developed a system which allowed provision of servo-controlled cooling throughout transport, and present their first year's experience. Retrospective review of routinely collected patient data. 14 out-born infants were referred for cooling during a 12-month period. Nine infants were managed with the servo-controlled system during transport. Cooling was commenced in all infants before 6 h of life. Median time from team arrival to the infant having a temperature in the target range (33-34°C) was 45 min. Median temperature during transfer was 33.5°C (range 33-34°C). Temperature on arrival at the cooling centre ranged from 33.4°C to 33.8°C. Servo-controlled cooling during transport is feasible and provides an optimal level of thermal control.
Improving dynamic performances of PWM-driven servo-pneumatic systems via a novel pneumatic circuit.
Taghizadeh, Mostafa; Ghaffari, Ali; Najafi, Farid
2009-10-01
In this paper, the effect of pneumatic circuit design on the input-output behavior of PWM-driven servo-pneumatic systems is investigated and their control performances are improved using linear controllers instead of complex and costly nonlinear ones. Generally, servo-pneumatic systems are well known for their nonlinear behavior. However, PWM-driven servo-pneumatic systems have the advantage of flexibility in the design of pneumatic circuits which affects the input-output linearity of the whole system. A simple pneumatic circuit with only one fast switching valve is designed which leads to a quasi-linear input-output relation. The quasi-linear behavior of the proposed circuit is verified both experimentally and by simulations. Closed loop position control experiments are then carried out using linear P- and PD-controllers. Since the output position is noisy and cannot be directly differentiated, a Kalman filter is designed to estimate the velocity of the cylinder. Highly improved tracking performances are obtained using these linear controllers, compared to previous works with nonlinear controllers.
Self-Contained Avionics Sensing and Flight Control System for Small Unmanned Aerial Vehicle
NASA Technical Reports Server (NTRS)
Ingham, John C. (Inventor); Shams, Qamar A. (Inventor); Logan, Michael J. (Inventor); Fox, Robert L. (Inventor); Fox, legal representative, Melanie L. (Inventor); Kuhn, III, Theodore R. (Inventor); Babel, III, Walter C. (Inventor); Fox, legal representative, Christopher L. (Inventor); Adams, James K. (Inventor); Laughter, Sean A. (Inventor)
2011-01-01
A self-contained avionics sensing and flight control system is provided for an unmanned aerial vehicle (UAV). The system includes sensors for sensing flight control parameters and surveillance parameters, and a Global Positioning System (GPS) receiver. Flight control parameters and location signals are processed to generate flight control signals. A Field Programmable Gate Array (FPGA) is configured to provide a look-up table storing sets of values with each set being associated with a servo mechanism mounted on the UAV and with each value in each set indicating a unique duty cycle for the servo mechanism associated therewith. Each value in each set is further indexed to a bit position indicative of a unique percentage of a maximum duty cycle for the servo mechanism associated therewith. The FPGA is further configured to provide a plurality of pulse width modulation (PWM) generators coupled to the look-up table. Each PWM generator is associated with and adapted to be coupled to one of the servo mechanisms.
Servo control booster system for minimizing following error
Wise, W.L.
1979-07-26
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
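Conceptually, the by-exception logic is a conditional branch in the control update; a minimal sketch, with illustrative gains and a hypothetical `booster_step` helper, might look like this:

```python
# Conceptual sketch of the "by exception" booster: the auxiliary loop engages
# only when the command-to-response error reaches the feedback resolution
# increment delta_R. Gains are illustrative placeholders.
def booster_step(command, position, delta_R, k_conv=0.5, k_boost=2.0):
    error = command - position
    u = k_conv * error                          # conventional servo action
    if abs(error) >= delta_R:                   # exception: engage second loop
        excess = error - delta_R if error > 0 else error + delta_R
        u += k_boost * excess                   # precise position correction term
    return u
```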
Velocity control of servo systems using an integral retarded algorithm.
Ramírez, Adrián; Garrido, Rubén; Mondié, Sabine
2015-09-01
This paper presents a design technique for the delay-based controller called Integral Retarded (IR), and its application to velocity control of servo systems. Using spectral analysis, the technique yields a tuning strategy for the IR by assigning a triple real dominant root to the closed-loop system. This result ultimately guarantees a desired exponential decay rate σ(d) while achieving the IR tuning as an explicit function of σ(d) and the system parameters. The intentional introduction of delay allows the use of noisy velocity measurements without additional filtering. The structure of the controller also makes it possible to avoid velocity measurements by using position information instead. The IR controller is compared to a classical PI controller, both tested on a laboratory prototype. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With its ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has limited applications, and one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement techniques based on Retinex theory, a process that automatically restores visual realism to images, are studied. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm and the multi-scale Retinex algorithm with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results showed that the algorithms based on Retinex theory have the ability to enhance infrared images. Out of the algorithms compared, MSRCR demonstrated the best performance.
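For reference, the single-scale Retinex that the comparison starts from is just a log-domain difference between the image and a Gaussian illumination estimate; a minimal sketch for a grey-level infrared frame (sigma chosen illustratively) follows.

```python
# Single-scale Retinex on a grey-level infrared frame: log-domain difference
# between the image and a Gaussian illumination estimate. Sigma is illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(ir_image, sigma=30.0):
    img = ir_image.astype(float) + 1.0                  # avoid log(0)
    illumination = gaussian_filter(img, sigma)
    r = np.log(img) - np.log(illumination)
    return (r - r.min()) / (r.max() - r.min() + 1e-12)  # stretch to [0, 1]

# The multi-scale variants average this output over several sigmas
# (e.g. 15, 80, 250); MSRCR adds a colour-restoration step on top.
```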
Combined photoacoustic and magneto-acoustic imaging.
Qu, Min; Mallidi, Srivalleesha; Mehrmohammadi, Mohammad; Ma, Li Leo; Johnston, Keith P; Sokolov, Konstantin; Emelianov, Stanislav
2009-01-01
Ultrasound is a widely used modality with excellent spatial resolution, low cost, portability, reliability and safety. In clinical practice and in the biomedical field, molecular ultrasound-based imaging techniques are desired to visualize tissue pathologies, such as cancer. In this paper, we present an advanced imaging technique - combined photoacoustic and magneto-acoustic imaging - capable of visualizing the anatomical, functional and biomechanical properties of tissues or organs. The experiments to test the combined imaging technique were performed using dual, nanoparticle-based contrast agents that exhibit the desired optical and magnetic properties. The results of our study demonstrate the feasibility of combined photoacoustic and magneto-acoustic imaging, which takes advantage of each imaging technique and provides high sensitivity, reliable contrast and good penetration depth. Therefore, the developed imaging technique can be used in a wide range of biomedical and clinical applications.
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematic model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous papers.
A Regression-Based Family of Measures for Full-Reference Image Quality Assessment
NASA Astrophysics Data System (ADS)
Oszust, Mariusz
2016-12-01
Advances in the development of imaging devices have resulted in the need for automatic quality evaluation of displayed visual content in a way that is consistent with human visual perception. In this paper, an approach to full-reference image quality assessment (IQA) is proposed, in which several IQA measures, representing different approaches to modelling human visual perception, are efficiently combined in order to produce an objective quality evaluation of examined images that is highly correlated with the evaluation provided by human subjects. In the paper, the optimisation problem of selecting several IQA measures for creating a regression-based IQA hybrid measure, or multimeasure, is defined and solved using a genetic algorithm. Experimental evaluation on the four largest IQA benchmarks reveals that the multimeasures obtained using the proposed approach outperform state-of-the-art full-reference IQA techniques, including other recently developed fusion approaches.
Advanced Image Processing for Defect Visualization in Infrared Thermography
NASA Technical Reports Server (NTRS)
Plotnikov, Yuri A.; Winfree, William P.
1997-01-01
Results of a defect visualization process based on pulse infrared thermography are presented. Algorithms have been developed to reduce the amount of operator participation required in the process of interpreting thermographic images. The algorithms determine the defect's depth and size from the temporal and spatial thermal distributions that exist on the surface of the investigated object following thermal excitation. A comparison of the results from thermal contrast, time derivative, and phase analysis methods for defect visualization are presented. These comparisons are based on three dimensional simulations of a test case representing a plate with multiple delaminations. Comparisons are also based on experimental data obtained from a specimen with flat bottom holes and a composite panel with delaminations.
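Two of the compared analyses are straightforward to express on a stack of cooling-phase frames: a thermal contrast against a user-chosen sound (defect-free) region, and the phase image obtained from an FFT along the time axis, as in pulsed phase thermography. The sketch below assumes frames stacked along axis 0 and is not the paper's exact processing chain.

```python
# Thermal contrast and pulsed-phase analysis on frames of shape (T, H, W).
import numpy as np

def thermal_contrast(frames, sound_region):
    """sound_region: boolean (H, W) mask over a defect-free reference area."""
    t_sound = frames[:, sound_region].mean(axis=1)   # reference cooling curve
    return frames - t_sound[:, None, None]           # contrast per pixel and frame

def phase_image(frames, harmonic=1):
    spectrum = np.fft.fft(frames, axis=0)            # FFT of each pixel's cooling curve
    return np.angle(spectrum[harmonic])              # phase of the chosen harmonic
```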
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparse coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experience-based knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of an image is obtained with the model; then, the mapping between sparse codes and subjective quality scores is learned with the regression technique of the least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the distortion types present in the database are 227 JPEG2000 images, 233 JPEG images, 174 white noise images, 174 Gaussian blur images, and 174 fast fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
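A compressed sketch of that pipeline is given below; KernelRidge is used as a readily available stand-in for the LS-SVM regressor named in the abstract, average pooling of the sparse codes is an assumption, and the patch/dictionary sizes are illustrative.

```python
# Sparse-code image patches, pool the codes per image, regress onto DMOS.
# KernelRidge stands in for the LS-SVM regressor; pooling and sizes are assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.kernel_ridge import KernelRidge

def pooled_sparse_code(patches, dico):
    codes = dico.transform(patches)                 # sparse code per patch
    return np.abs(codes).mean(axis=0)               # average pooling into one descriptor

def train_iqa(patches_per_image, dmos):
    """patches_per_image: list of (n_patches, patch_dim) arrays; dmos: scores."""
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0)
    dico.fit(np.vstack(patches_per_image))
    X = np.array([pooled_sparse_code(p, dico) for p in patches_per_image])
    regressor = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, dmos)
    return dico, regressor
```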
Chang, Yongjun; Paul, Anjan Kumar; Kim, Namkug; Baek, Jung Hwan; Choi, Young Jun; Ha, Eun Ju; Lee, Kang Dae; Lee, Hyoung Shin; Shin, DaeSeock; Kim, Nakyoung
2016-01-01
To develop a semiautomated computer-aided diagnosis (CAD) system for thyroid cancer using two-dimensional ultrasound images that can be used to yield a second opinion in the clinic to differentiate malignant and benign lesions. A total of 118 ultrasound images that included axial and longitudinal images from patients with biopsy-confirmed malignant (n = 30) and benign (n = 29) nodules were collected. Thyroid CAD software was developed to extract quantitative features from these images based on thyroid nodule segmentation in which adaptive diffusion flow for active contours was used. Various features, including histogram, intensity differences, elliptical fit, gray-level co-occurrence matrixes, and gray-level run-length matrixes, were evaluated for each region imaged. Based on these imaging features, a support vector machine (SVM) classifier was used to differentiate benign and malignant nodules. Leave-one-out cross-validation with sequential forward feature selection was performed to evaluate the overall accuracy of this method. Additionally, analyses with contingency tables and receiver operating characteristic (ROC) curves were performed to compare the performance of CAD with visual inspection by expert radiologists based on established gold standards. Most univariate features for this proposed CAD system attained accuracies that ranged from 78.0% to 83.1%. When optimal SVM parameters that were established using a grid search method with features that radiologists use for visual inspection were employed, the authors could attain rates of accuracy that ranged from 72.9% to 84.7%. Using leave-one-out cross-validation results in a multivariate analysis of various features, the highest accuracy achieved using the proposed CAD system was 98.3%, whereas visual inspection by radiologists reached 94.9% accuracy. To obtain the highest accuracies, "axial ratio" and "max probability" in axial images were most frequently included in the optimal feature sets for the authors' proposed CAD system, while "shape" and "calcification" in longitudinal images were most frequently included in the optimal feature sets for visual inspection by radiologists. The computed areas under curves in the ROC analysis were 0.986 and 0.979 for the proposed CAD system and visual inspection by radiologists, respectively; no significant difference was detected between these groups. The use of thyroid CAD to differentiate malignant from benign lesions shows accuracy similar to that obtained via visual inspection by radiologists. Thyroid CAD might be considered a viable way to generate a second opinion for radiologists in clinical practice.
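The evaluation protocol (an SVM with a parameter grid search, assessed by leave-one-out cross-validation) can be sketched as follows; the feature matrix and labels here are random placeholders, not the study's data.

```python
# SVM with grid-searched parameters, scored by leave-one-out cross-validation.
# The 59 x 10 feature matrix and labels are random placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(59, 10)           # 59 nodules x 10 selected features (toy data)
y = np.random.randint(0, 2, 59)      # 0 = benign, 1 = malignant

pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10, 100],
                           "svc__gamma": ["scale", 0.01, 0.1]}, cv=5)
grid.fit(X, y)
acc = cross_val_score(grid.best_estimator_, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.3f}")
```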
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still-image coding techniques such as JPEG have always been applied to intra-plane images. Coding fidelity is always utilized in measuring the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed, which uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlations among planes based on the human visual system. A high degree of compactness in the data representation and compression can be achieved when the power of the scheme is taken into account.
Simple piezoelectric-actuated mirror with 180 kHz servo bandwidth.
Briles, Travis C; Yost, Dylan C; Cingöz, Arman; Ye, Jun; Schibli, Thomas R
2010-05-10
We present a high bandwidth piezoelectric-actuated mirror for length stabilization of an optical cavity. The actuator displays a transfer function with a flat amplitude response and greater than 135° phase margin up to 200 kHz, allowing a 180 kHz unity gain frequency to be achieved in a closed servo loop. To the best of our knowledge, this actuator has achieved the largest servo bandwidth for a piezoelectric transducer (PZT). The actuator should be very useful in a wide variety of applications requiring precision control of optical lengths, including laser frequency stabilization, optical interferometers, and optical communications. (c) 2010 Optical Society of America.
Kuzmina, Margarita; Manykin, Eduard; Surina, Irina
2004-01-01
An oscillatory network with columnar architecture located in a 3D spatial lattice was recently designed by the authors as an oscillatory model of the brain's visual cortex. A single network oscillator is a relaxational neural oscillator with internal dynamics tunable by visual image characteristics - local brightness and elementary bar orientation. It is able to demonstrate either an active state (stable undamped oscillations) or "silence" (quickly damped oscillations). Self-organized nonlocal dynamical connections of oscillators depend on oscillator activity levels and the orientations of cortical receptive fields. Network performance consists of a transition into a state of clusterized synchronization. At the current stage, grey-level image segmentation tasks are carried out by a 2D oscillatory network obtained as a limit version of the source model. Owing to the added control of network coupling strength, the reduced 2D network provides synchronization-based image segmentation. New results on the segmentation of brightness and texture images presented in the paper demonstrate accurate network performance and informative visualization of segmentation results, both inherent in the model.
Universal and adapted vocabularies for generic visual categorization.
Perronnin, Florent
2008-07-01
Generic Visual Categorization (GVC) is the pattern classification problem which consists in assigning labels to an image based on its semantic content. This is a challenging task as one has to deal with inherent object/scene variations as well as changes in viewpoint, lighting and occlusion. Several state-of-the-art GVC systems use a vocabulary of visual terms to characterize images with a histogram of visual word counts. We propose a novel practical approach to GVC based on a universal vocabulary, which describes the content of all the considered classes of images, and class vocabularies obtained through the adaptation of the universal vocabulary using class-specific data. The main novelty is that an image is characterized by a set of histograms - one per class - where each histogram describes whether the image content is best modeled by the universal vocabulary or the corresponding class vocabulary. This framework is applied to two types of local image features: low-level descriptors such as the popular SIFT and high-level histograms of word co-occurrences in a spatial neighborhood. It is shown experimentally on two challenging datasets (an in-house database of 19 categories and the PASCAL VOC 2006 dataset) that the proposed approach exhibits state-of-the-art performance at a modest computational cost.
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that allows surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
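For orientation, a generic point-to-point ICP step is sketched below; a "warm start" simply means seeding R and t with the previous frame's estimate instead of the identity. This is a generic sketch, not the paper's point-cloud-selection variant.

```python
# Generic point-to-point ICP; pass the previous frame's (R, t) as a warm start.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, R=np.eye(3), t=np.zeros(3), iters=30):
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)               # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:            # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step   # compose with running transform
    return R, t
```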
NASA Astrophysics Data System (ADS)
Kurian, Priya C.; Gopinath, Anish; Shinoy, K. S.; Santhi, P.; Sundaramoorthy, K.; Sebastian, Baby; Jaya, B.; Namboodiripad, M. N.; Mookiah, T.
2017-12-01
The Reusable Launch Vehicle-Technology Demonstrator (RLV-TD) is a system that has the ability to carry a payload from the Earth's surface to outer space more than once. The control actuation system forms the major component of the control system; it actuates the control surfaces of the RLV-TD based on the control commands. Eight electro-hydraulic actuators were used in the RLV-TD for vectoring the control surfaces about their axes. A centralised Hydraulic Power Generating Unit (HPU) was used for powering the eight actuators located in two stages. The actuation system had to work for about 850 s, the longest ever duration for an Indian launch vehicle. The high bandwidth requirement from the autopilot was met by the servo design using the nonlinear mathematical model. A single Control Electronics unit, which drives four electro-hydraulic actuators, was developed for each stage. High-power electronics with a soft-start scheme were realized for driving the BLDC motor, which is the prime mover for the hydraulic pump. Many challenges arose due to the single HPU for two stages, uncertainty of the aero load, higher bandwidth requirements, etc., and provisions were incorporated in the design to successfully overcome them. This paper describes the servo design and the control electronics architecture of the control actuation system.
Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan
2017-06-01
Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases. Novel, digital image analysis algorithms can be utilized to automate sample analysis. The objectives were to evaluate the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and to train a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole slide-scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm in the stool samples. Results were compared between the digital and visual analysis of the images showing helminth eggs. Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide-scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3-100%) in the test set (n = 217) of manually labeled helminth eggs. In this proof-of-concept study, the imaging performance of a mobile, digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images.
Light-controlled resistors provide quadrature signal rejection for high-gain servo systems
NASA Technical Reports Server (NTRS)
Mc Cauley, D. D.
1967-01-01
A servo amplifier feedback system, in which the phase-sensitive detection, low-pass filtering, and multiplication functions required for quadrature rejection are performed by light-controlled photoresistors, eliminates complex circuitry. The system increases gain, improves the signal-to-noise ratio, and eliminates the necessity for compensation.
The Role of Visualization in Learning from Computer-Based Images. Research Report
ERIC Educational Resources Information Center
Piburn, Michael D.; Reynolds, Stephen J.; McAuliffe, Carla; Leedy, Debra E.; Birk, James P.; Johnson, Julia K.
2005-01-01
Among the sciences, the practice of geology is especially visual. To assess the role of spatial ability in learning geology, we designed an experiment using: (1) web-based versions of spatial visualization tests, (2) a geospatial test, and (3) multimedia instructional modules built around QuickTime Virtual Reality movies. Students in control and…
Watershed identification of polygonal patterns in noisy SAR images.
Moreels, Pierre; Smrekar, Suzanne E
2003-01-01
This paper describes a new approach to pattern recognition in synthetic aperture radar (SAR) images. A visual analysis of the images provided by NASA's Magellan mission to Venus has revealed a number of zones showing polygonal-shaped faults on the surface of the planet. The goal of the paper is to provide a method to automate the identification of such zones. The high level of noise in SAR images and its multiplicative nature make automated image analysis difficult and conventional edge detectors, like those based on gradient images, inefficient. We present a scheme based on an improved watershed algorithm and a two-scale analysis. The method extracts potential edges in the SAR image, analyzes the patterns obtained, and decides whether or not the image contains a "polygon area". This scheme can also be applied to other SAR or visual images, for instance in observation of Mars and Jupiter's satellite Europa.
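A bare-bones version of the watershed stage could look like the sketch below (the despeckling filter size, the marker threshold, and the omission of the two-scale analysis and the polygon-area decision are all simplifications of the method described above).

```python
# Despeckle, compute a gradient image, seed markers in flat areas, run watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_regions(sar_tile, marker_quantile=0.2):
    smoothed = ndi.median_filter(sar_tile.astype(float), size=3)  # reduce speckle
    gradient = sobel(smoothed)
    markers, _ = ndi.label(gradient < np.quantile(gradient, marker_quantile))
    return watershed(gradient, markers)          # label image; boundaries ~ edges
```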
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing an objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color-image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics are in good agreement with subjective perception.
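Illustrative versions of the two simplest metrics, ICM and CCM, are sketched below; the opponent-channel formulation, the weights, and the saturation thresholds are assumptions, not the paper's calibrated definitions.

```python
# Illustrative ICM (colourfulness from mean/std of opponent channels) and
# CCM (comfort from the saturation distribution); weights are assumptions.
import numpy as np
from skimage.color import rgb2hsv

def icm(fused_rgb):
    r, g, b = [fused_rgb[..., i].astype(float) for i in range(3)]
    rg, yb = r - g, 0.5 * (r + g) - b               # opponent colour channels
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu                         # illustrative weighting

def ccm(fused_rgb, low=0.2, high=0.8):
    s = rgb2hsv(fused_rgb)[..., 1]                  # saturation channel
    ratio = (s > high).mean() / ((s < low).mean() + 1e-6)
    return s.mean() - 0.1 * ratio                   # penalise over-saturation
```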
The Ecological Approach to Text Visualization.
ERIC Educational Resources Information Center
Wise, James A.
1999-01-01
Presents both theoretical and technical bases on which to build a "science of text visualization." The Spatial Paradigm for Information Retrieval and Exploration (SPIRE) text-visualization system, which images information from free-text documents as natural terrains, serves as an example of the "ecological approach" in its visual metaphor, its…
Wang, Chen; Brancusi, Flavia; Valivullah, Zaheer M; Anderson, Michael G; Cunningham, Denise; Hedberg-Buenz, Adam; Power, Bradley; Simeonov, Dimitre; Gahl, William A; Zein, Wadih M; Adams, David R; Brooks, Brian
2018-01-01
To develop a sensitive scale of iris transillumination suitable for clinical and research use, with the capability of either quantitative analysis or visual matching of images. Iris transillumination photographic images were used from 70 study subjects with ocular or oculocutaneous albinism. Subjects represented a broad range of ocular pigmentation. A subset of images was subjected to image analysis and ranking by both expert and nonexpert reviewers. Quantitative ordering of images was compared with ordering by visual inspection. Images were binned to establish an 8-point scale. Ranking consistency was evaluated using the Kendall rank correlation coefficient (Kendall's tau). Visual ranking results were assessed using Kendall's coefficient of concordance (Kendall's W) analysis. There was a high degree of correlation among the image analysis, expert-based and non-expert-based image rankings. Pairwise comparisons of the quantitative ranking with each reviewer generated an average Kendall's tau of 0.83 ± 0.04 (SD). Inter-rater correlation was also high with Kendall's W of 0.96, 0.95, and 0.95 for nonexpert, expert, and all reviewers, respectively. The current standard for assessing iris transillumination is expert assessment of clinical exam findings. We adapted an image-analysis technique to generate quantitative transillumination values. Quantitative ranking was shown to be highly similar to a ranking produced by both expert and nonexpert reviewers. This finding suggests that the image characteristics used to quantify iris transillumination do not require expert interpretation. Inter-rater rankings were also highly similar, suggesting that varied methods of transillumination ranking are robust in terms of producing reproducible results.
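A minimal sketch of the agreement statistics reported above is shown below: Kendall's tau for pairwise comparison of two orderings (via scipy) and Kendall's W for overall inter-rater concordance; the tie-free W formula is a standard convention and is not taken from the paper.

```python
# Hedged sketch: rank-agreement statistics of the kind reported above.
import numpy as np
from scipy.stats import kendalltau

def kendalls_w(rankings):
    """rankings: m x n array, one row of ranks (1..n) per rater (no ties assumed)."""
    rankings = np.asarray(rankings, dtype=float)
    m, n = rankings.shape
    rank_sums = rankings.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Pairwise agreement between a quantitative ordering and one reviewer:
# tau, p = kendalltau(quantitative_ranks, reviewer_ranks)
```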
Panoramic-image-based rendering solutions for visualizing remote locations via the web
NASA Astrophysics Data System (ADS)
Obeysekare, Upul R.; Egts, David; Bethmann, John
2000-05-01
With advances in panoramic image-based rendering techniques and the rapid expansion of web advertising, new techniques are emerging for visualizing remote locations on the WWW. The success of these techniques depends on how easy and inexpensive it is to develop a new type of web content that provides pseudo-3D visualization at home, 24 hours a day. Furthermore, the acceptance of this new visualization medium depends on the effectiveness of the familiarization tools for a segment of the population that was never exposed to this type of visualization. This paper addresses various hardware and software solutions available to collect, produce, and view panoramic content. While the cost and effectiveness of building the content are addressed using a few commercial hardware solutions, the effectiveness of familiarization tools is evaluated using a few sample data sets.
An Integrated Tone Mapping for High Dynamic Range Image Visualization
NASA Astrophysics Data System (ADS)
Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun
2018-01-01
There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of realism but lose local details. Empirical operators can maximize the local detail information of an HDR image, but their realism is not strong. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework that can achieve conversion between empirical and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the visual attention mechanism of the human eye in natural scenes. The results of objective evaluation prove the effectiveness of the proposed solution.
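One plausible (assumed) reading of the integration step is a pixel-wise, saliency-weighted blend of an empirical and a perceptual rendering; the sketch below illustrates only that blend and does not reproduce the paper's operators or its improved saliency model.

```python
# Hedged sketch: saliency-weighted blend of two tone-mapped renderings.
import numpy as np

def blend_tone_mapped(empirical_ldr, perceptual_ldr, saliency):
    """All inputs HxW(x3) floats in [0, 1]; saliency in [0, 1], high where detail matters."""
    w = saliency[..., None] if empirical_ldr.ndim == 3 else saliency
    return w * empirical_ldr + (1.0 - w) * perceptual_ldr
```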
Enhance wound healing monitoring through a thermal imaging based smartphone app
NASA Astrophysics Data System (ADS)
Yi, Steven; Lu, Minta; Yee, Adam; Harmon, John; Meng, Frank; Hinduja, Saurabh
2018-03-01
In this paper, we present a thermal-imaging-based app to augment traditional appearance-based wound monitoring. Accurate diagnosis and tracking of wound healing enables physicians to effectively assess, document, and individualize the treatment plan given to each wound patient. Currently, wounds are primarily examined by physicians through visual appearance and wound area. However, visual information alone cannot present a complete picture of a wound's condition. In this paper, we use a smartphone-attached thermal imager and evaluate its effectiveness in augmenting visual-appearance-based wound diagnosis. Instead of only monitoring temperature changes on a wound, our app presents physicians with comprehensive measurements including relative temperature, a wound healing thermal index, and wound blood flow. Through rat wound experiments, and by monitoring the integrated thermal measurements over a 3-week time frame, our app is able to show the underlying healing process through blood flow. The implied significance of our app design and experiments includes: (a) it is possible to use a low-cost smartphone-attached thermal imager for added value in wound assessment, tracking, and treatment; and (b) a thermal mobile app can be used for remote wound healing assessment in a mobile-health-based solution.
NASA Astrophysics Data System (ADS)
Du, Hongbo; Al-Jubouri, Hanan; Sellahewa, Harin
2014-05-01
Content-based image retrieval is an automatic process of retrieving images according to their visual contents instead of textual annotations. It has many areas of application, from automatic image annotation and archiving, image classification and categorization, to homeland security and law enforcement. The key issues affecting the performance of such retrieval systems include sensible image features that can effectively capture the right amount of visual content and suitable similarity measures to find similar and relevant images ranked in a meaningful order. Many different approaches, methods and techniques have been developed as a result of very intensive research over the past two decades. Among the many existing approaches is a cluster-based approach, where clustering methods are used to group local feature descriptors into homogeneous regions, and search is conducted by comparing the regions of the query image against those of the stored images. This paper serves as a review of work in this area. It first summarizes the existing work reported in the literature and then presents the authors' own investigations in this field. The paper intends to highlight not only achievements made by recent research but also challenges and difficulties still remaining in this area.
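A minimal sketch of such a cluster-based retrieval pipeline is given below; the choice of ORB descriptors, the k-means vocabulary size, and the L1 histogram distance are illustrative assumptions rather than any specific surveyed method.

```python
# Hedged sketch: visual-vocabulary (cluster-based) image retrieval.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def orb_descriptors(gray):
    """Local descriptors of a grayscale image (ORB chosen for illustration)."""
    orb = cv2.ORB_create()
    _, desc = orb.detectAndCompute(gray, None)
    return np.float32(desc) if desc is not None else np.empty((0, 32), np.float32)

def build_vocabulary(all_descriptors, k=64):
    """Cluster pooled descriptors from the database into k visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(all_descriptors))

def bow_histogram(desc, vocab):
    """Normalized word-occupancy histogram for one image."""
    if len(desc) == 0:
        return np.zeros(vocab.n_clusters)
    words = vocab.predict(desc)
    hist, _ = np.histogram(words, bins=vocab.n_clusters, range=(0, vocab.n_clusters))
    return hist / max(hist.sum(), 1)

def rank_images(query_hist, db_hists):
    """Indices of database images, most similar first (L1 distance)."""
    dists = [np.linalg.norm(query_hist - h, ord=1) for h in db_hists]
    return np.argsort(dists)
```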
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain data base to facilitate visually guided flight and ground operations in low visibility conditions. An approach to fusion based on multiresolution image representation and processing is described which facilitates fusion of images differing in resolution within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
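A minimal sketch of multiresolution fusion in this spirit is given below, using Laplacian pyramids and a max-magnitude selection rule; the number of levels and the selection rule are assumptions, not the simulator's actual algorithm.

```python
# Hedged sketch: Laplacian-pyramid fusion of two co-registered images.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1]) for i in range(levels)]
    lp.append(gp[-1])                       # coarsest Gaussian level as the base
    return lp

def fuse(img_a, img_b, levels=4):
    la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    # Keep the coefficient with the larger magnitude at every level and pixel.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la, lb)]
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
    return out
```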
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still-image metrics into the time domain. Like the still-image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
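A minimal sketch of a DCT-domain error measure of this general kind is shown below; the blockwise transform and weighted pooling follow the description, but the weighting matrix is a crude placeholder rather than the calibrated spatio-temporal sensitivities of the metric.

```python
# Hedged sketch: blockwise DCT error with frequency weighting, pooled over a frame.
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_quality(ref, test, block=8):
    """ref, test: grayscale frames of equal size; lower return value is better."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    h, w = (ref.shape[0] // block) * block, (ref.shape[1] // block) * block
    # Crude low-pass weighting standing in for a contrast-sensitivity model.
    weights = 1.0 / (1.0 + np.add.outer(np.arange(block), np.arange(block)))
    err = 0.0
    for y in range(0, h, block):
        for x in range(0, w, block):
            d = block_dct2(ref[y:y+block, x:x+block] - test[y:y+block, x:x+block])
            err += np.sum((weights * d) ** 2)
    return np.sqrt(err / (h * w))
```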
Visual attention to food cues in obesity: an eye-tracking study.
Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M
2014-12-01
Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and a fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-weight-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features
Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin
2017-01-01
Purpose To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists, in order to build a machine learning classifier and distinguish between malignant and benign breast masses detected in digital mammograms. Methods An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features in the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show lesion segmentation, computed feature values and the classification score. Results Areas under ROC curves (AUC) were 0.786±0.026 and 0.758±0.027 when classifying mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, the AUC increased to 0.806±0.025. Conclusion This study demonstrated a new approach to developing a CAD scheme based on five visually sensitive image features. Combined with a "visual aid" interface, CAD results may be much more easily explainable to observers and may increase their confidence in considering CAD-generated classification results, compared with other conventional CAD approaches that involve many complicated and visually insensitive texture features. PMID:27911353
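A minimal sketch of the per-view classification and score-level fusion step is given below; the view names, the simple averaging rule, and the omission of the feature-selection stage are assumptions.

```python
# Hedged sketch: one logistic regression per view, fused by score averaging.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_view(features, labels):
    """Train a per-view classifier on the selected feature columns."""
    return LogisticRegression(max_iter=1000).fit(features, labels)

def fused_auc(clf_view1, clf_view2, x_view1, x_view2, labels):
    """AUC of the averaged malignancy scores from the two views."""
    score1 = clf_view1.predict_proba(x_view1)[:, 1]
    score2 = clf_view2.predict_proba(x_view2)[:, 1]
    fused = 0.5 * (score1 + score2)          # simple score-level fusion
    return roc_auc_score(labels, fused)
```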
Computer-based analysis of microvascular alterations in a mouse model for Alzheimer's disease
NASA Astrophysics Data System (ADS)
Heinzer, Stefan; Müller, Ralph; Stampanoni, Marco; Abela, Rafael; Meyer, Eric P.; Ulmann-Schuler, Alexandra; Krucker, Thomas
2007-03-01
Vascular factors associated with Alzheimer's disease (AD) have recently gained increased attention. To investigate changes in vascular, particularly microvascular architecture, we developed a hierarchical imaging framework to obtain large-volume, high-resolution 3D images from brains of transgenic mice modeling AD. In this paper, we present imaging and data analysis methods which allow compiling unique characteristics from several hundred gigabytes of image data. Image acquisition is based on desktop micro-computed tomography (µCT) and local synchrotron-radiation µCT (SRµCT) scanning with a nominal voxel size of 16 µm and 1.4 µm, respectively. Two visualization approaches were implemented: stacks of Z-buffer projections for fast data browsing, and progressive-mesh based surface rendering for detailed 3D visualization of the large datasets. In a first step, image data was assessed visually via a Java client connected to a central database. Identified characteristics of interest were subsequently quantified using global morphometry software. To obtain even deeper insight into microvascular alterations, tree analysis software was developed providing local morphometric parameters such as number of vessel segments or vessel tortuosity. In the context of ever increasing image resolution and large datasets, computer-aided analysis has proven both powerful and indispensable. The hierarchical approach maintains the context of local phenomena, while proper visualization and morphometry provide the basis for detailed analysis of the pathology related to structure. Beyond analysis of microvascular changes in AD this framework will have significant impact considering that vascular changes are involved in other neurodegenerative diseases as well as in cancer, cardiovascular disease, asthma, and arthritis.
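As an illustration of the kind of local morphometric parameters mentioned above, the sketch below computes segment length and tortuosity (path length over chord length) for a vessel centerline; these definitions follow a common convention and are not taken from the paper's tree-analysis software.

```python
# Hedged sketch: local morphometry of a vessel segment given as ordered 3D centerline points.
import numpy as np

def segment_length(points):
    pts = np.asarray(points, dtype=float)
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

def tortuosity(points):
    """Path length divided by straight-line (chord) distance between endpoints."""
    pts = np.asarray(points, dtype=float)
    chord = np.linalg.norm(pts[-1] - pts[0])
    return segment_length(pts) / chord if chord > 0 else np.inf
```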
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons onto a 2D image creates the illusion of intersecting structural parts and creates challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that utilizes an interesting connection of the optimization problem regarding USIV to the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148
Seeing is believing: on the use of image databases for visually exploring plant organelle dynamics.
Mano, Shoji; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nishimura, Mikio
2009-12-01
Organelle dynamics vary dramatically depending on cell type, developmental stage and environmental stimuli, so that various parameters, such as size, number and behavior, are required for the description of the dynamics of each organelle. Imaging techniques are superior to other techniques for describing organelle dynamics because these parameters are visually exhibited. Therefore, as the results can be seen immediately, investigators can more easily grasp organelle dynamics. At present, imaging techniques are emerging as fundamental tools in plant organelle research, and the development of new methodologies to visualize organelles and the improvement of analytical tools and equipment have allowed the large-scale generation of image and movie data. Accordingly, image databases that accumulate information on organelle dynamics are an increasingly indispensable part of modern plant organelle research. In addition, image databases are potentially rich data sources for computational analyses, as image and movie data reposited in the databases contain valuable and significant information, such as size, number, length and velocity. Computational analytical tools support image-based data mining, such as segmentation, quantification and statistical analyses, to extract biologically meaningful information from each database and combine them to construct models. In this review, we outline the image databases that are dedicated to plant organelle research and present their potential as resources for image-based computational analyses.
Sensor, signal, and image informatics - state of the art and current topics.
Lehmann, T M; Aach, T; Witte, H
2006-01-01
The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis, an application to cochlear implants, where time-frequency analysis is applied for controlling the replacement system, recent trends in the fusion of different modalities, and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are selected. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration in routine applications are future challenges in the field of sensor, signal and image informatics.
NASA Astrophysics Data System (ADS)
Price, Norman T.
The availability and sophistication of visual display images, such as simulations, for use in science classrooms have increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active thinking. This mixed-methods study analyzes teacher behavior in lessons using visual media about the particulate model of matter that were taught by three experienced middle school teachers. Each teacher taught one half of their students with lessons using static overheads and the other half with lessons using a projected dynamic simulation. The quantitative analysis of pre-post data found significant gain differences between the two image-mode conditions, suggesting that the students who were assigned to the simulation condition learned more than students who were assigned to the overhead condition. Open coding was used to identify a set of eight image-based teaching strategies that teachers were using with visual displays. Fixed codes for this set of image-based discussion strategies were then developed and used to analyze video and transcripts of whole-class discussions from 12 lessons. The image-based discussion strategies were refined over time in a set of three in-depth 2x2 comparative case studies of two teachers teaching one lesson topic with two image display modes. The comparative case study data suggest that the simulation mode may have offered greater affordances than the overhead mode for planning and enacting discussions. The 12 discussions were also coded for overall teacher-student interaction patterns, such as presentation, IRE, and IRF. When teachers moved during a lesson from using no image to using either image mode, some teachers were observed asking more questions when the image was displayed, while others asked many fewer questions. The changes in teacher-student interaction patterns suggest that teachers vary in whether they consider the displayed image a "tool-for-telling" or a "tool-for-asking." The study attempts to provide new descriptions of strategies teachers use to orchestrate image-based discussions designed to promote student engagement and reasoning in lessons with conceptual goals.
Priou, P; d'Ortho, M-P; Damy, T; Davy, J-M; Gagnadoux, F; Gentina, T; Meurice, J-C; Pepin, J-L; Tamisier, R; Philippe, C
2015-12-01
The preliminary results of the SERVE-HF study have led to the release of safety information with a subsequent contraindication to the use of adaptive servo-ventilation (ASV) for the treatment of central sleep apnoea in patients with chronic symptomatic systolic heart failure with left ventricular ejection fraction (LVEF) ≤ 45%. The aim of this article is to review these results and to provide more detailed arguments, based on data from the literature, advocating the continued use of ASV in different indications, including heart failure with preserved LVEF, complex sleep apnoea syndrome, opioid-induced central sleep apnoea syndrome, idiopathic central SAS, and central SAS due to stroke. Based on these findings, we propose setting up registries dedicated to patients in whom ASV has been stopped and to future initiations of ASV in these specific indications, to ensure patient safety and allow reasoned decisions on the use of ASV. Copyright © 2015 SPLF. Published by Elsevier Masson SAS. All rights reserved.
Comprehensive model for predicting perceptual image quality of smart mobile devices.
Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng
2015-01-01
An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via the categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted from its two constituent attributes using multiple linear regression functions for the different types of images, and mathematical expressions were then built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms are applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and their performance was verified against the visual data.
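A minimal sketch of the regression step is shown below: overall quality predicted from two constituent attributes by multiple linear regression; which two attributes apply to each image type is not reproduced here.

```python
# Hedged sketch: overall image quality regressed on two constituent attributes.
from sklearn.linear_model import LinearRegression

def fit_quality_model(attribute_scores, overall_scores):
    """attribute_scores: n x 2 array of visual scores for the two attributes;
    overall_scores: n visual overall-quality scores. Returns a fitted model
    so that overall ~ b0 + b1*attr1 + b2*attr2."""
    return LinearRegression().fit(attribute_scores, overall_scores)

# Example: predicted = fit_quality_model(X_train, y_train).predict(X_test)
```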
Research on metallic material defect detection based on bionic sensing of human visual properties
NASA Astrophysics Data System (ADS)
Zhang, Pei Jiang; Cheng, Tao
2018-05-01
Since the human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them, this paper proposes a bionic-sensing visual inspection model method, based on the human visual attention mechanism and simulating human visual imaging features, to detect defects of metallic materials in the mechanical field. First, starting from biologically salient low-level features, empirically marked defect annotations are used as the intermediate features of simulated visual perception. An SVM is then trained on the high-level features of visual defects in metallic materials. Finally, by weighting each component, a bionic defect-detection model for metallic materials that simulates human visual characteristics is obtained.
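A minimal sketch of a pipeline in this spirit is given below: crude low-level descriptors per image patch, an SVM trained on labelled patches, and a weighted combination with a saliency score; the feature definitions and fusion weights are illustrative assumptions.

```python
# Hedged sketch: saliency-plus-SVM scoring of image patches for defect detection.
import numpy as np
from sklearn.svm import SVC

def patch_features(patch):
    # Crude intensity/contrast descriptors standing in for low-level saliency features.
    grad_y = np.gradient(patch.astype(float))[0]
    return np.array([patch.mean(), patch.std(), np.abs(grad_y).mean()])

def train_defect_svm(patches, labels):
    """patches: list of 2D arrays; labels: 1 for defect, 0 for defect-free."""
    X = np.vstack([patch_features(p) for p in patches])
    return SVC(probability=True).fit(X, labels)

def defect_score(svm, patch, saliency_value, w_svm=0.7, w_sal=0.3):
    """Weighted fusion of the SVM probability and a precomputed saliency value."""
    p_defect = svm.predict_proba(patch_features(patch).reshape(1, -1))[0, 1]
    return w_svm * p_defect + w_sal * saliency_value
```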
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M Pauline
2007-06-30
The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products, typically images or movies, are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML, and introduces the notion of visualization informatics: the formalization and specialization of information related to the process and products of visualization.
Enhancing security of fingerprints through contextual biometric watermarking.
Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M
2007-07-04
This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images, and the extracted images from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
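A minimal sketch of additive DWT-domain embedding is shown below; it operates on one detail band of a single-level Haar decomposition and omits the texture-region selection, the multiple face/text watermarks, and the minutiae checks described above, with the embedding strength alpha being an assumption.

```python
# Hedged sketch: additive watermark embedding/extraction in a wavelet detail band.
import numpy as np
import pywt

def embed_watermark(host, watermark, alpha=0.05):
    """host: 2D grayscale image; watermark: 2D array (resized to the diagonal band)."""
    ll, (lh, hl, hh) = pywt.dwt2(host.astype(float), 'haar')
    wm = np.resize(watermark.astype(float), hh.shape)
    hh_marked = hh + alpha * wm                      # additive embedding in a high-frequency band
    return pywt.idwt2((ll, (lh, hl, hh_marked)), 'haar')

def extract_watermark(marked, original, alpha=0.05):
    """Non-blind extraction using the original image as reference."""
    _, (_, _, hh_m) = pywt.dwt2(marked.astype(float), 'haar')
    _, (_, _, hh_o) = pywt.dwt2(original.astype(float), 'haar')
    return (hh_m - hh_o) / alpha
```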
NASA Astrophysics Data System (ADS)
Utomo, Edy Setiyo; Juniati, Dwi; Siswono, Tatag Yuli Eko
2017-08-01
The aim of this research was to describe the mathematical visualization process of Junior High School students in solving contextual problems based on cognitive style. Mathematical visualization process in this research was seen from aspects of image generation, image inspection, image scanning, and image transformation. The research subject was the students in the eighth grade based on GEFT test (Group Embedded Figures Test) adopted from Within to determining the category of cognitive style owned by the students namely field independent or field dependent and communicative. The data collection was through visualization test in contextual problem and interview. The validity was seen through time triangulation. The data analysis referred to the aspect of mathematical visualization through steps of categorization, reduction, discussion, and conclusion. The results showed that field-independent and field-dependent subjects were difference in responding to contextual problems. The field-independent subject presented in the form of 2D and 3D, while the field-dependent subject presented in the form of 3D. Both of the subjects had different perception to see the swimming pool. The field-independent subject saw from the top, while the field-dependent subject from the side. The field-independent subject chose to use partition-object strategy, while the field-dependent subject chose to use general-object strategy. Both the subjects did transformation in an object rotation to get the solution. This research is reference to mathematical curriculum developers of Junior High School in Indonesia. Besides, teacher could develop the students' mathematical visualization by using technology media or software, such as geogebra, portable cabri in learning.
NASA Astrophysics Data System (ADS)
Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu
2018-04-01
Objective. Retinal prosthesis devices have shown great value in restoring some sight for individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients' visual experience. In this paper, we employ computer vision approaches to expand the perceptible visual field in patients potentially implanted with a high-density retinal prosthesis while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method, introducing salient object detection based on color and intensity-difference contrast, aiming to remap the important information of a scene into a small visual field while preserving its original scale as much as possible. This may improve prosthetic recipients' perceived visual field and aid in performing some visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments, detecting the number of objects and recognizing objects, were conducted under simulated prosthetic vision. As controls, we used three other image retargeting techniques: Cropping, Scaling, and seam-assisted shrinkability. Main results. The results show that our method outperforms the other three image retargeting methods in preserving key features and achieves significantly higher recognition accuracy under the conditions of a small visual field and low resolution. Significance. The proposed method is beneficial for expanding the perceived visual field of prosthesis recipients and improving their object detection and recognition performance. It suggests that our method may provide an effective option for the image processing module in future high-density retinal implants.
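For comparison with the retargeting techniques discussed above, the sketch below implements only a saliency-guided cropping baseline (closest to the Cropping control condition), choosing the target-size window with the largest saliency mass; it is not the authors' optimized content-aware method.

```python
# Hedged sketch: pick the crop window that captures the most saliency mass.
import numpy as np

def saliency_crop(image, saliency, out_h, out_w):
    """image: HxW(xC); saliency: HxW non-negative map; returns the best crop."""
    integral = saliency.cumsum(axis=0).cumsum(axis=1)      # integral image of saliency
    best, best_yx = -1.0, (0, 0)
    for y in range(saliency.shape[0] - out_h + 1):
        for x in range(saliency.shape[1] - out_w + 1):
            total = (integral[y + out_h - 1, x + out_w - 1]
                     - (integral[y - 1, x + out_w - 1] if y else 0)
                     - (integral[y + out_h - 1, x - 1] if x else 0)
                     + (integral[y - 1, x - 1] if y and x else 0))
            if total > best:
                best, best_yx = total, (y, x)
    y, x = best_yx
    return image[y:y + out_h, x:x + out_w]
```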
NASA Astrophysics Data System (ADS)
Wang, Shuangyi; Housden, James; Singh, Davinder; Rhode, Kawal
2017-12-01
3D trans-oesophageal echocardiography (TOE) has become a powerful tool for monitoring intra-operative catheters used during cardiac procedures in recent years. However, control of the TOE probe remains a manual task, and therefore the operator has to hold the probe for long periods of time, sometimes in a radiation environment. To solve this problem, an add-on robotic system has been developed for holding and manipulating a commercial TOE probe. This paper focuses on the application of making automatic adjustments to the probe pose in order to accurately monitor moving catheters. The positioning strategy is divided into an initialization step based on a pre-planning method and a localized-adjustments step based on the robotic differential kinematics and related image servoing techniques. Both steps are described in the paper along with simulation experiments performed to validate the concept. The results indicate an error of less than 0.5 mm for the initialization step and an error of less than 2 mm for the localized-adjustments step. Compared to the much bigger live 3D image volume, it is concluded that the methods are promising. Future work will focus on evaluating the method in a real TOE scanning scenario.
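As a generic illustration of a localized image-servoing adjustment of the kind referred to above, the sketch below computes a proportional velocity command from the image feature error through the pseudo-inverse of a supplied image Jacobian; the Jacobian model and gain are assumptions, not the paper's formulation.

```python
# Hedged sketch: classical proportional image-based servoing update.
import numpy as np

def servo_velocity(features, desired_features, image_jacobian, gain=0.5):
    """Return a velocity command driving the image features towards their desired values.
    features, desired_features: length-2n vectors of image feature coordinates;
    image_jacobian: 2n x m matrix mapping probe velocity to feature rates (supplied by the caller)."""
    error = np.asarray(features, float) - np.asarray(desired_features, float)
    return -gain * np.linalg.pinv(image_jacobian) @ error

# Example: v = servo_velocity(s, s_star, J)   # apply v to the probe holder for one control period
```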