Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system that improves the viewing angle and viewing zone. In a tracking integral imaging system, the pickup angle of each elemental lens in the lens array is determined by the viewers' positions, so the elemental images can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light-emitting diodes, which track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, the relationship between the multiple viewers' positions and the elemental images must be formulated. We analyzed this relationship and the conditions for multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.
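The dependence of the elemental images on viewer position can be illustrated with a simple pinhole-model sketch. This is not from the paper itself: the lens positions, lens-to-image gap, and viewing distance below are hypothetical values, and a real system would render full elemental images rather than just their centres. Each elemental-image centre is placed on the line from the viewer through its lens centre.

```python
def elemental_image_centers(lens_positions, viewer_x, viewer_dist, gap):
    """Pinhole-model sketch: place each elemental-image centre on the
    line from the viewer through the lens centre. The image-plane point
    is offset from the lens by (gap / viewer_dist) times the lateral
    lens-to-viewer offset."""
    return [x + gap * (x - viewer_x) / viewer_dist for x in lens_positions]

# Hypothetical geometry: three lens centres (mm), image plane 5 mm
# behind the lens array, viewer 500 mm in front of it.
lenses = [-10.0, 0.0, 10.0]
centers_on_axis = elemental_image_centers(lenses, viewer_x=0.0,
                                          viewer_dist=500.0, gap=5.0)
centers_off_axis = elemental_image_centers(lenses, viewer_x=100.0,
                                           viewer_dist=500.0, gap=5.0)
```

Moving the viewer off-axis shifts every elemental-image centre away from the viewer, which is the geometric reason the elemental images must be regenerated as the tracked viewer moves.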
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiinoki, T; Shibuya, K; Sawada, A
Purpose: A new real-time tumor-tracking radiotherapy (RTRT) system was installed in our institution. This system consists of two x-ray tubes and color image intensifiers (I.I.s). A fiducial marker implanted near the tumor was tracked using color fluoroscopic images. However, implantation of the fiducial marker is very invasive. Color fluoroscopic images improve recognition of the tumor, but on their own they are not suitable for tracking the tumor without a fiducial marker. The purpose of this study was to investigate the feasibility of markerless tracking using dual energy colored fluoroscopic images for a real-time tumor-tracking radiotherapy system. Methods: Colored fluoroscopic images of static and moving phantoms containing a simulated tumor (30 mm diameter sphere) were experimentally acquired using the RTRT system. The programmable respiratory motion phantom was driven with a sinusoidal pattern in the cranio-caudal direction (amplitude: 20 mm, period: 4 s). The x-ray conditions were set to 55 kV, 50 mA and 105 kV, 50 mA for low energy and high energy, respectively. Dual energy images were calculated by weighted logarithmic subtraction of the high and low energy RGB images. The usefulness of dual energy imaging for real-time tracking with an automated template image matching algorithm was investigated. Results: The proposed dual energy subtraction improved the contrast between tumor and background by suppressing bone structure. For the static phantom, our results showed high tracking accuracy using dual energy subtraction images; for the moving phantom, tracking accuracy was also good. However, tracking accuracy depended on tumor position, tumor size and x-ray conditions. Conclusion: We demonstrated the feasibility of markerless tracking using dual energy fluoroscopic images for a real-time tumor-tracking radiotherapy system.
Furthermore, the tracking accuracy of the proposed dual energy subtraction images needs to be investigated for clinical cases.
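The weighted logarithmic subtraction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attenuation coefficients and Beer-Lambert intensities are made-up numbers. With the weight set to the ratio of bone attenuation at the two energies, bone contrast cancels while soft-tissue contrast survives.

```python
import math

def dual_energy_subtract(high, low, w):
    """Weighted logarithmic subtraction: DE = ln(I_high) - w * ln(I_low).
    Choosing w to match the bone attenuation ratio cancels bone while
    soft tissue, with a different energy dependence, remains visible."""
    return [[math.log(h) - w * math.log(l) for h, l in zip(hr, lr)]
            for hr, lr in zip(high, low)]

# Synthetic 1x3 "images": background, bone, and tumour pixels, with
# intensities following Beer-Lambert attenuation I = I0 * exp(-mu * t).
# All coefficients are hypothetical.
I0 = 1000.0
mu_bone = {"high": 0.5, "low": 1.0}   # bone attenuates low energy more
mu_soft = {"high": 0.2, "low": 0.25}  # soft tissue: weaker energy dependence
high = [[I0, I0 * math.exp(-mu_bone["high"]), I0 * math.exp(-mu_soft["high"])]]
low = [[I0, I0 * math.exp(-mu_bone["low"]), I0 * math.exp(-mu_soft["low"])]]

w = mu_bone["high"] / mu_bone["low"]  # weight that nulls bone
de = dual_energy_subtract(high, low, w)
bone_contrast = abs(de[0][1] - de[0][0])  # bone vs background: ~0
soft_contrast = abs(de[0][2] - de[0][0])  # tumour vs background: nonzero
```

The surviving soft-tissue contrast is what makes the tumour trackable by template matching once the overlapping rib structure has been suppressed.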
Along-Track Reef Imaging System (ATRIS)
Brock, John; Zawada, Dave
2006-01-01
"Along-Track Reef Imaging System (ATRIS)" describes the U.S. Geological Survey's Along-Track Reef Imaging System, a boat-based sensor package for rapidly mapping shallow water benthic environments. ATRIS acquires high resolution, color digital images that are accurately geo-located in real-time.
Track analysis of laser-illuminated etched track detectors using an opto-digital imaging system
NASA Astrophysics Data System (ADS)
Eghan, Moses J.; Buah-Bassuah, Paul K.; Oppon, Osborne C.
2007-11-01
An opto-digital imaging system for counting and analysing tracks on LR-115 detectors is described. One batch of LR-115 track detectors was irradiated with Am-241 for a determined period and at a determined distance for a linearity test, and another batch was exposed to radon gas. The laser-illuminated etched track detector area was imaged, digitized and analysed by the system. Tracks counted on the opto-digital system with the aid of Media Cybernetics software, as well as with a spark gap counter, showed comparable track density results, ranging between 1500 and 2750 tracks cm^-2 and 65 tracks cm^-2 in the two different batch detector samples, with 0.5% and 1% track counts, respectively. Track sizes of the incident alpha particles from the radon gas on the LR-115 detector, reflecting different track energies, are represented statistically and graphically. The opto-digital imaging system counts tracks and measures other track parameters with an average processing time of 3-5 s.
Along-track calibration of SWIR push-broom hyperspectral imaging system
NASA Astrophysics Data System (ADS)
Jemec, Jurij; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran
2016-05-01
Push-broom hyperspectral imaging systems are increasingly used for various medical, agricultural and military purposes. The acquired images contain spectral information in every pixel of the imaged scene, providing additional information compared to classical RGB color imaging. Due to misalignment and imperfections in the optical components of a push-broom hyperspectral imaging system, variable spectral and spatial misalignments and blur are present in the acquired images. To capture these distortions, a spatially and spectrally variant response function must be identified at each spatial and spectral position. In this study, we propose a procedure to characterize the variant response function of Short-Wavelength Infrared (SWIR) push-broom hyperspectral imaging systems in the across-track and along-track directions and to remove its effect from the acquired images. Custom laser-machined spatial calibration targets are used for the characterization. The spatial and spectral variability of the response function in the across-track and along-track directions is modeled by a parametrized basis function. Finally, the characterization results are used to restore the distorted hyperspectral images in the across-track and along-track directions by a Richardson-Lucy deconvolution-based algorithm. The proposed calibration method is thoroughly evaluated on images of targets with well-defined geometric properties. The results suggest that the proposed procedure is well suited for fast and accurate spatial calibration of push-broom hyperspectral imaging systems.
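The Richardson-Lucy restoration step can be illustrated in one dimension. This is a generic textbook sketch, not the authors' spatially variant implementation: the point-spread function and signal below are toy values. The estimate is iteratively multiplied by the back-projected ratio of the observation to the re-blurred estimate.

```python
def convolve(signal, kernel):
    """'Same'-size 1-D convolution with a symmetric kernel, zero padding."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += signal[idx] * kv
        out.append(s)
    return out

def richardson_lucy(observed, psf, iters=50):
    """Richardson-Lucy deconvolution (1-D sketch): refine the estimate
    so that estimate convolved with the PSF approaches the observation."""
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)
    for _ in range(iters):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]                  # hypothetical blur kernel
sharp = [0.0, 0.0, 10.0, 0.0, 0.0]       # true point-like feature
observed = convolve(sharp, psf)          # blurred measurement
restored = richardson_lucy(observed, psf, iters=50)
```

In the paper's setting the same multiplicative update runs per spatial-spectral position with the locally characterized response function in place of a single global PSF.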
Tracking scanning laser ophthalmoscope (TSLO)
NASA Astrophysics Data System (ADS)
Hammer, Daniel X.; Ferguson, R. Daniel; Magill, John C.; White, Michael A.; Elsner, Ann E.; Webb, Robert H.
2003-07-01
The effectiveness of image stabilization with a retinal tracker in a multi-function, compact scanning laser ophthalmoscope (TSLO) was demonstrated in initial human subject tests. The retinal tracking system uses a confocal reflectometer with a closed loop optical servo system to lock onto features in the fundus. The system is modular to allow configuration for many research and clinical applications, including hyperspectral imaging, multifocal electroretinography (MFERG), perimetry, quantification of macular and photo-pigmentation, imaging of neovascularization and other subretinal structures (drusen, hyper-, and hypo-pigmentation), and endogenous fluorescence imaging. Optical hardware features include dual wavelength imaging and detection, integrated monochromator, higher-order motion control, and a stimulus source. The system software consists of a real-time feedback control algorithm and a user interface. Software enhancements include automatic bias correction, asymmetric feature tracking, image averaging, automatic track re-lock, and acquisition and logging of uncompressed images and video files. Normal adult subjects were tested without mydriasis to optimize the tracking instrumentation and to characterize imaging performance. The retinal tracking system achieves a bandwidth of greater than 1 kHz, which permits tracking at rates that greatly exceed the maximum rate of motion of the human eye. The TSLO stabilized images in all test subjects during ordinary saccades up to 500 deg/sec with an inter-frame accuracy better than 0.05 deg. Feature lock was maintained for minutes despite subject eye blinking. Successful frame averaging allowed image acquisition with decreased noise in low-light applications. The retinal tracking system significantly enhances the imaging capabilities of the scanning laser ophthalmoscope.
Textural and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition
NASA Astrophysics Data System (ADS)
Khayat, Omid; Afarideh, Hossein
2013-04-01
Track counting, one of the fundamental techniques of nuclear science, has received increasing attention in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Track counting systems commonly comprise a hardware system for imaging and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textural and shape-based features and a neuro-fuzzy classifier is proposed. Features are defined so as to discern the tracks from the background and from small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. The features and the classifier are validated on 100 alpha track images with 40 training samples. It is shown that the principal textural and shape-based features together yield a high rate of track detection compared with single-feature methods.
NASA Astrophysics Data System (ADS)
Manwell, Spencer; Chamberland, Marc J. P.; Klein, Ran; Xu, Tong; deKemp, Robert
2017-03-01
Respiratory gating is a common technique used to compensate for patient breathing motion and decrease the prevalence of image artifacts that can impact diagnoses. In this study a new data-driven respiratory gating method (PeTrack) was compared with a conventional optical tracking system. The performance of respiratory gating of the two systems was evaluated by comparing the number of respiratory triggers, patient breathing intervals and gross heart motion as measured in the respiratory-gated image reconstructions of rubidium-82 cardiac PET scans in test and control groups consisting of 15 and 8 scans, respectively. We found evidence suggesting that PeTrack is a robust patient motion tracking system that can be used to retrospectively assess patient motion in the event of failure of the conventional optical tracking system.
Enhancement of tracking performance in electro-optical system based on servo control algorithm
NASA Astrophysics Data System (ADS)
Choi, WooJin; Kim, SungSu; Jung, DaeYoon; Seo, HyoungKyu
2017-10-01
Modern electro-optical surveillance and reconnaissance systems require tracking capability to obtain accurate images of a target and to precisely direct the line of sight toward a target, whether moving or still. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function, aiming to minimize overshoot in the tracking motion so as not to lose the target. The scheme is to limit the acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques in a system model of a DIRCM, simulate the operational environment, and validate the performance on the actual equipment.
Marker-less multi-frame motion tracking and compensation in PET-brain imaging
NASA Astrophysics Data System (ADS)
Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.
2015-03-01
In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This speeds up reconstruction by eliminating unnecessary subdivision of frames.
NASA Astrophysics Data System (ADS)
Krauss, Andreas; Fast, Martin F.; Nill, Simeon; Oelfke, Uwe
2012-04-01
We have previously developed a tumour tracking system, which adapts the aperture of a Siemens 160 MLC to electromagnetically monitored target motion. In this study, we investigate the use of a novel linac-mounted kilovoltage x-ray imaging system for MLC tracking. The unique in-line geometry of the imaging system allows the detection of target motion perpendicular to the treatment beam (i.e. the directions usually featuring steep dose gradients). We utilized the imaging system either alone or in combination with an external surrogate monitoring system. We equipped a Siemens ARTISTE linac with two flat panel detectors, one directly underneath the linac head for motion monitoring and the other underneath the patient couch for geometric tracking accuracy assessments. A programmable phantom with an embedded metal marker reproduced three patient breathing traces. For MLC tracking based on x-ray imaging alone, marker position was detected at a frame rate of 7.1 Hz. For the combined external and internal motion monitoring system, a total of only 85 x-ray images were acquired prior to or in between the delivery of ten segments of an IMRT beam. External motion was monitored with a potentiometer. A correlation model between external and internal motion was established. The real-time component of the MLC tracking procedure then relied solely on the correlation model estimations of internal motion based on the external signal. Geometric tracking accuracies were 0.6 mm (1.1 mm) and 1.8 mm (1.6 mm) in directions perpendicular and parallel to the leaf travel direction for the x-ray-only (the combined external and internal) motion monitoring system, in spite of a total system latency of ~0.62 s (~0.51 s). Dosimetric accuracy for a highly modulated IMRT beam, assessed through radiographic film dosimetry, improved substantially when tracking was applied, but depended strongly on the respective geometric tracking accuracy.
In conclusion, we have for the first time integrated MLC tracking with x-ray imaging in the in-line geometry and demonstrated highly accurate respiratory motion tracking.
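The correlation model between the external potentiometer signal and the internal marker position can be sketched as a linear least-squares fit. This is an assumption for illustration only — the abstract does not state the model form — and all calibration numbers below are hypothetical.

```python
def fit_linear_correlation(external, internal):
    """Least-squares fit internal = a * external + b from sparse x-ray
    detections of the marker paired with external surrogate readings."""
    n = len(external)
    mx = sum(external) / n
    my = sum(internal) / n
    sxx = sum((x - mx) ** 2 for x in external)
    sxy = sum((x - mx) * (y - my) for x, y in zip(external, internal))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration pairs: a few x-ray frames in which the
# marker was detected, with the simultaneous potentiometer reading.
ext = [0.0, 2.0, 4.0, 6.0, 8.0]       # surrogate amplitude (a.u.)
internal = [1.0, 2.1, 2.9, 4.1, 5.0]  # marker position (mm)
a, b = fit_linear_correlation(ext, internal)

# Real-time use between x-ray frames: only the external signal is
# sampled; the model supplies the internal-position estimate.
estimate = a * 5.0 + b
```

Refreshing the fit with each new x-ray detection keeps the model current as the external-internal relationship drifts during treatment.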
Obstacle penetrating dynamic radar imaging system
Romero, Carlos E [Livermore, CA; Zumstein, James E [Livermore, CA; Chang, John T [Danville, CA; Leach, Jr Richard R. [Castro Valley, CA
2006-12-12
An obstacle penetrating dynamic radar imaging system for the detection, tracking, and imaging of an individual, animal, or object comprising a multiplicity of low power ultra wideband radar units that produce a set of return radar signals from the individual, animal, or object, and a processing system for said set of return radar signals for detection, tracking, and imaging of the individual, animal, or object. The system provides a radar video system for detecting and tracking an individual, animal, or object by producing a set of return radar signals from the individual, animal, or object with a multiplicity of low power ultra wideband radar units, and processing said set of return radar signals for detecting and tracking of the individual, animal, or object.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, S; Hristov, D; Phillips, T
Purpose: Transperineal ultrasound imaging is an attractive option for image-guided radiation therapy, as there is no need to implant fiducials, there is no extra imaging dose, and real-time continuous imaging is possible during treatment. The aim of this study is to verify the tracking accuracy of a commercial ultrasound system under treatment conditions with a male pelvic phantom. Methods: CT and ultrasound scans were acquired of the male pelvic phantom. The phantom was then placed in a treatment-mimicking position on a motion platform. The axial and lateral tracking accuracies of the ultrasound system were verified using an independent optical tracking system. The tracking accuracy was evaluated by tracking the phantom position detected by the ultrasound system and comparing it to the optical tracking system under the conditions of beam on (15 MV), beam off, poor image quality with an acoustic shadow introduced, and different phantom motion cycles (10 and 20 second periods). Additionally, the time lag between the ultrasound-detected and actual phantom motion was investigated. Results: Displacement amplitudes reported by the ultrasound system and the optical system were within 0.5 mm of each other for both directions and all conditions. Ultrasound tracking performance in the axial direction was better than in the lateral direction. Radiation did not interfere with ultrasound tracking, while image quality affected tracking accuracy. Tracking accuracy was better for periodic motion with a 20 second period. The time delay between the ultrasound tracking system and the phantom motion was clinically acceptable. Conclusion: Intrafractional prostate motion is a potential source of treatment error, especially in the context of emerging SBRT regimens. It is feasible to use transperineal ultrasound daily to monitor prostate motion during treatment.
Our results verify the tracking accuracy of a commercial ultrasound system to be better than 1 mm under typical external beam treatment conditions.
A mitral annulus tracking approach for navigation of off-pump beating heart mitral valve repair.
Li, Feng P; Rajchl, Martin; Moore, John; Peters, Terry M
2015-01-01
To develop and validate a real-time mitral valve annulus (MVA) tracking approach based on biplane transesophageal echocardiogram (TEE) data and magnetic tracking systems (MTS) to be used in minimally invasive off-pump beating heart mitral valve repair (MVR). The authors' guidance system consists of three major components: TEE, a magnetic tracking system, and an image guidance software platform. TEE provides real-time intraoperative images showing the cardiac motion and intracardiac surgical tools. The magnetic tracking system tracks the TEE probe and the surgical tools. The software platform integrates the TEE image planes, the virtual model of the tools, and the MVA model on the screen. The authors' MVA tracking approach, which aims to update the MVA model in near real-time, comprises three steps: image based gating, predictive reinitialization, and registration based MVA tracking. The image based gating step uses a small patch centered at each MVA point in the TEE images to identify images at optimal cardiac phases for updating the position of the MVA. The predictive reinitialization step uses the position and orientation of the TEE probe provided by the magnetic tracking system to predict the position of the MVA points in the TEE images and uses them for the initialization of the registration component. The registration based MVA tracking step aims to locate the MVA points in the images selected by the image based gating component by performing image based registration. The validation of the MVA tracking approach was performed in a phantom study and a retrospective study on porcine data. In the phantom study, controlled translations were applied to the phantom and the tracked MVA was compared to its "true" position estimated based on a magnetic sensor attached to the phantom. The MVA tracking accuracy was 1.29 ± 0.58 mm when the translation distance was about 1 cm, and increased to 2.85 ± 1.19 mm when the translation distance was about 3 cm.
In the study on porcine data, the authors compared the tracked MVA to a manually segmented MVA. The overall accuracy is 2.37 ± 1.67 mm for single plane images and 2.35 ± 1.55 mm for biplane images. The interoperator variation in manual segmentation was 2.32 ± 1.24 mm for single plane images and 1.73 ± 1.18 mm for biplane images. The computational efficiency of the algorithm on a desktop computer with an Intel® Xeon® CPU @3.47 GHz and an NVIDIA GeForce 690 graphics card is such that the time required for registering four MVA points was about 60 ms. The authors developed a rapid MVA tracking algorithm for use in the guidance of off-pump beating heart transapical mitral valve repair. This approach uses 2D biplane TEE images and was tested on a dynamic heart phantom and interventional porcine image data. Results regarding the accuracy and efficiency of the authors' MVA tracking algorithm are promising, and fulfill the requirements for surgical navigation.
NASA Technical Reports Server (NTRS)
Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)
2010-01-01
A multi-agent autonomous system for exploration of hazardous or inaccessible locations. The multi-agent autonomous system includes simple surface-based agents or craft controlled by an airborne tracking and command system. The airborne tracking and command system includes an instrument suite used to image an operational area and any craft deployed within the operational area. The image data is used to identify the craft, targets for exploration, and obstacles in the operational area. The tracking and command system determines paths for the surface-based craft using the identified targets and obstacles and commands the craft using simple movement commands to move through the operational area to the targets while avoiding the obstacles. Each craft includes its own instrument suite to collect information about the operational area that is transmitted back to the tracking and command system. The tracking and command system may be further coupled to a satellite system to provide additional image information about the operational area and provide operational and location commands to the tracking and command system.
Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography
Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.
2016-01-01
Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
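A pupil-centroid tracker of the kind described can be sketched with simple thresholding and image moments. This is a generic illustration, not the authors' hardware pipeline: the toy frame, intensity values, and threshold are made up.

```python
def pupil_centroid(image, threshold):
    """Locate the pupil as the centroid of pixels darker than a
    threshold (the pupil is the darkest region in an eye image).
    Returns None when no dark pixels are found, e.g. during a blink."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v < threshold:
                m00 += 1
                m10 += x
                m01 += y
    if m00 == 0:
        return None
    return (m10 / m00, m01 / m00)

# Toy 5x5 frame: bright iris/sclera (200) with a dark 2x2 pupil (10).
frame = [[200] * 5 for _ in range(5)]
for y in (2, 3):
    for x in (1, 2):
        frame[y][x] = 10

cx, cy = pupil_centroid(frame, threshold=50)
```

Feeding the frame-to-frame centroid displacement into the OCT scanners as a lateral offset is what compensates the eye motion during volume acquisition.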
High resolution imaging of a subsonic projectile using automated mirrors with large aperture
NASA Astrophysics Data System (ADS)
Tateno, Y.; Ishii, M.; Oku, H.
2017-02-01
Visual tracking of high-speed projectiles is required for studying the aerodynamics around such objects. One solution to this problem is a tracking method based on the so-called 1 ms Auto Pan-Tilt (1ms-APT) system that we proposed in previous work, which consists of rotational mirrors and a high-speed image processing system. However, the images obtained with that system did not have high enough resolution for detailed measurement of the projectiles because of the size of the mirrors. In this study, we propose a new system consisting of enlarged mirrors for tracking high-speed projectiles so as to achieve higher-resolution imaging, and we confirmed the effectiveness of the system via an experiment in which a projectile flying at subsonic speed was tracked.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dongkyu, E-mail: akein@gist.ac.kr; Khalil, Hossam; Jo, Youngjoon
2016-06-28
An image-based tracking system using a laser scanning vibrometer is developed for vibration measurement of a rotating object. Unlike a conventional system, the proposed system can be used where a position or velocity sensor, such as an encoder, cannot be attached to the object. An image processing algorithm is introduced to detect a landmark and the laser beam based on their colors. Then, using a feedback control system, the laser beam tracks the rotating object.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in video is a method used to detect and analyze changes in an observed object. High visual quality and precise localization of the tracked target are desired in modern tracking systems. Because the tracked object does not always appear clearly, the tracking result can be imprecise; causes include low-quality video, system noise, small object size, and other factors. To improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all frames. The second step is tracking on the super-resolved images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique, which has the advantage of fast computation, is used for the tracking approach. The method used for tracking is Camshift, whose advantage is a simple calculation based on an HSV color histogram, which remains effective when the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, with shape changes of the object, and in good lighting conditions.
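The core of the Camshift tracker used here is the mean-shift iteration over the histogram back-projection. The sketch below shows only that step, omitting the HSV histogram computation and Camshift's adaptive window sizing; the weight map is synthetic.

```python
def mean_shift(weights, window, iters=10):
    """Mean-shift step of Camshift: repeatedly move the search window
    to the centroid of the back-projection weights inside it, until
    the window stops moving or the iteration budget runs out."""
    x0, y0, w, h = window
    rows, cols = len(weights), len(weights[0])
    for _ in range(iters):
        m00 = m10 = m01 = 0.0
        for y in range(y0, min(y0 + h, rows)):
            for x in range(x0, min(x0 + w, cols)):
                m00 += weights[y][x]
                m10 += x * weights[y][x]
                m01 += y * weights[y][x]
        if m00 == 0:
            break  # no target response inside the window
        nx = max(int(round(m10 / m00 - (w - 1) / 2)), 0)
        ny = max(int(round(m01 / m00 - (h - 1) / 2)), 0)
        if (nx, ny) == (x0, y0):
            break  # converged
        x0, y0 = nx, ny
    return (x0, y0, w, h)

# Synthetic 10x10 back-projection: a 2x2 target blob at (4, 4)-(5, 5).
weights = [[0.0] * 10 for _ in range(10)]
for y in (4, 5):
    for x in (4, 5):
        weights[y][x] = 1.0

window = mean_shift(weights, (2, 2, 4, 4))  # window slides onto the blob
```

Full Camshift additionally rescales and reorients the window from the zeroth and second moments each frame, which is what lets it follow objects that grow or shrink in the image.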
Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets
Bhikha, Charita; Andreasen, Arne; Christensen, Erik I.; Letts, Robyn F. R.; Pantanowitz, Adam; Rubin, David M.; Thomsen, Jesper S.; Zhai, Xiao-Yue
2015-01-01
An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron. PMID:26170896
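One ingredient of such slice-to-slice tracking can be sketched as greedy nearest-centroid linking between adjacent histological sections. This is an illustrative simplification — the paper's actual graph-based algorithm, validated by a rule base and a machine learning algorithm, is considerably more involved — and the coordinates below are toy values.

```python
def link_sections(slice_a, slice_b, max_dist):
    """Greedily match each nephron cross-section centroid in one
    section to the nearest unclaimed centroid in the next section,
    within a distance gate. Returns (index_a, index_b) pairs."""
    links = []
    used = set()
    for i, (ax, ay) in enumerate(slice_a):
        best, best_d = None, max_dist
        for j, (bx, by) in enumerate(slice_b):
            if j in used:
                continue
            d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            used.add(best)
    return links

# Two cross-sections per slice; the nearer candidate wins each link.
links = link_sections([(0, 0), (10, 10)], [(9, 9), (1, 0)], max_dist=3.0)
```

The distance gate is what keeps densely packed medullary tubules, where centroids sit closer together than the gate allows to disambiguate, flagged for the manual corrections the paper reports.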
NASA Astrophysics Data System (ADS)
Cleary, Kevin R.; Banovac, Filip; Levy, Elliot; Tanaka, Daigo
2002-05-01
We have designed and constructed a liver respiratory motion simulator as a first step in demonstrating the feasibility of using a new magnetic tracking system to follow the movement of internal organs. The simulator consists of a dummy torso, a synthetic liver, a linear motion platform, a graphical user interface for image overlay, and a magnetic tracking system along with magnetically tracked instruments. While optical tracking systems are commonly used in commercial image-guided surgery systems for the brain and spine, they are limited to procedures in which a line of sight can be maintained between the tracking system and the instruments which are being tracked. Magnetic tracking systems have been proposed for image-guided surgery applications, but most currently available magnetically tracked sensors are too small to be embedded in the body. The magnetic tracking system employed here, the AURORA from Northern Digital, can use sensors as small as 0.9 mm in diameter by 8 mm in length. This makes it possible to embed these sensors in catheters and thin needles. The catheters can then be wedged in a vein in an internal organ of interest so that tracking the position of the catheter gives a good estimate of the position of the internal organ. Alternatively, a needle with an embedded sensor could be placed near the area of interest.
Video Guidance Sensors Using Remotely Activated Targets
NASA Technical Reports Server (NTRS)
Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.
2004-01-01
Four updated video guidance sensor (VGS) systems have been proposed. As described in a previous NASA Tech Briefs article, a VGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. The VGS provides relative position and attitude (6-DOF) information between the VGS and its target. In the original intended application, the two vehicles would be spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In the first two of the four VGS systems as now proposed, the tracked vehicle would include active targets that would light up on command from the tracking vehicle, and a video camera on the tracking vehicle would be synchronized with, and would acquire images of, the active targets. The video camera would also acquire background images during the periods between target illuminations. The images would be digitized and the background images would be subtracted from the illuminated-target images. Then the position and orientation of the tracked vehicle relative to the tracking vehicle would be computed from the known geometric relationships among the positions of the targets in the image, the positions of the targets relative to each other and to the rest of the tracked vehicle, and the position and orientation of the video camera relative to the rest of the tracking vehicle. The major difference between the first two proposed systems and prior active-target VGS systems lies in the techniques for synchronizing the flashing of the active targets with the digitization and processing of image data. In the prior active-target VGS systems, synchronization was effected, variously, by use of either a wire connection or the Global Positioning System (GPS). 
In three of the proposed VGS systems, the synchronizing signal would be generated on, and transmitted from, the tracking vehicle. In the first proposed VGS system, the tracking vehicle would transmit a pulse of light. Upon reception of the pulse, circuitry on the tracked vehicle would activate the target lights. During the pulse, the target image acquired by the camera would be digitized. When the pulse was turned off, the target lights would be turned off and the background video image would be digitized. The second proposed system would function similarly to the first proposed system, except that the transmitted synchronizing signal would be a radio pulse instead of a light pulse. In this system, the signal receptor would be a rectifying antenna. If the signal contained sufficient power, the output of the rectifying antenna could be used to activate the target lights, making it unnecessary to include a battery or other power supply for the targets on the tracked vehicle.
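The illuminated-minus-background step common to these proposed systems can be sketched as follows; the threshold value and function names are illustrative assumptions, not part of the proposed VGS designs:

```python
import numpy as np

def isolate_targets(lit_frame, dark_frame, threshold=30):
    """Subtract the background (targets-off) frame from the lit
    (targets-on) frame; thresholding keeps only the target blobs."""
    diff = lit_frame.astype(np.int16) - dark_frame.astype(np.int16)
    return np.where(diff > threshold, diff, 0).astype(np.uint8)

def target_centroid(mask):
    """Intensity-weighted centroid (row, col) of the isolated targets,
    the kind of quantity fed into the pose computation."""
    total = mask.sum()
    if total == 0:
        return None
    rows, cols = np.indices(mask.shape)
    return (rows * mask).sum() / total, (cols * mask).sum() / total
```

In the actual systems this extraction would run per target; the 6-DOF pose then follows from the known geometric relationships among the targets.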
Automatic Contour Tracking in Ultrasound Images
ERIC Educational Resources Information Center
Li, Min; Kambhamettu, Chandra; Stone, Maureen
2005-01-01
In this paper, a new automatic contour tracking system, EdgeTrak, for the ultrasound image sequences of human tongue is presented. The images are produced by a head and transducer support system (HATS). The noise and unrelated high-contrast edges in ultrasound images make it very difficult to automatically detect the correct tongue surfaces. In…
NASA Astrophysics Data System (ADS)
Li, Senhu; Sarment, David
2015-12-01
Minimally invasive neurosurgery requires intraoperative imaging updates and a highly efficient image-guidance system to facilitate the procedure. An automatic image-guided system built around a compact, mobile intraoperative CT imager is introduced in this work. A tracking frame was designed that can be easily attached to a commercially available skull clamp. Because the geometry of the fiducials and tracking sensor arranged on this rigid frame, fabricated by high-precision 3D printing, is known, an accurate, fully automatic registration method was developed in a simple, low-cost approach; the frame also helped in estimating fiducial localization errors in image space through image processing and in patient space through calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348 ± 0.028 mm, compared with a manual registration error of 1.976 ± 0.778 mm. The system in this study provided robust and accurate image-to-patient registration without interrupting the routine surgical workflow or requiring user interaction during neurosurgery.
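Fiducial-based registration of this kind typically solves a least-squares rigid transform between corresponding point sets and reports the fiducial registration error (FRE). A minimal sketch using the standard SVD (Kabsch) method is shown below; this is a generic textbook method, not the paper's specific implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    via the Kabsch/Horn SVD method."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    res = (R @ np.asarray(src, float).T).T + t - np.asarray(dst, float)
    return np.sqrt((res ** 2).sum(1).mean())
```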
NASA Astrophysics Data System (ADS)
Oku, H.; Ogawa, N.; Ishikawa, M.; Hashimoto, K.
2005-03-01
In this article, a micro-organism tracking system using high-speed vision is reported. The system tracks a freely swimming micro-organism in two dimensions within the field of an optical microscope by moving the chamber containing the target micro-organisms under high-speed visual feedback. The system we developed could track a paramecium using various imaging techniques, including bright-field illumination, dark-field illumination, and differential interference contrast, at magnifications of 5× and 20×. A maximum tracking duration of 300 s was demonstrated. The system could also track an object moving at up to 35,000 μm/s (175 diameters/s), which is significantly faster than swimming micro-organisms.
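The visual-feedback loop can be sketched as a proportional controller that re-centres the target's image centroid by moving the stage; the gain and the micrometre-per-pixel scale below are invented illustrative numbers, not the paper's parameters:

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid (row, col) of the image."""
    total = frame.sum()
    r, c = np.indices(frame.shape)
    return np.array([(r * frame).sum() / total, (c * frame).sum() / total])

def stage_command(frame, gain=0.5, um_per_px=2.0):
    """One proportional visual-feedback step: command a stage move (in
    micrometres) that drives the target centroid back to the image center."""
    center = (np.array(frame.shape) - 1) / 2.0
    error_px = centroid(frame) - center
    return -gain * um_per_px * error_px
```

Run at a high enough frame rate, repeated small corrections of this kind keep a fast swimmer centred in the field of view.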
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
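The predict-correlate-update cycle described above can be sketched with a constant-velocity motion model; nearest-neighbour feature matching stands in for the edge correlation, and the blending factor `alpha` is an illustrative choice:

```python
import numpy as np

def predict(pos, vel, dt=1.0):
    """Predict where tracked features should appear in the next frame."""
    return pos + vel * dt

def correlate_and_update(pos, vel, detected, dt=1.0, alpha=0.5):
    """Match each prediction to the nearest detected feature, then blend
    the implied velocity into the running estimate."""
    pred = predict(pos, vel, dt)
    d = np.linalg.norm(pred[:, None, :] - detected[None, :, :], axis=2)
    matched = detected[d.argmin(axis=1)]
    new_vel = (1 - alpha) * vel + alpha * (matched - pos) / dt
    return matched, new_vel
```

Iterating this loop, the velocity estimate converges toward the true motion, so the predictions land closer to the detected edges in each successive frame.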
Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermi Research Alliance; Northern Illinois University
2015-07-15
Proton computed tomography (pCT) offers an alternative to x-ray imaging with potential for three-dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second-generation proton computed tomography system with a goal of demonstrating the feasibility of three-dimensional imaging within clinically realistic imaging times. The second-generation pCT system comprises a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. The schematic layout of the pCT system is shown. The data acquisition sends the proton scattering information to an offline computing farm. Major innovations of the second-generation pCT project involve an increased data acquisition rate (MHz range) and the development of three-dimensional imaging algorithms. The Fermilab Particle Physics Division and the Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics, and detector mounting system.
Evaluation of a video-based head motion tracking system for dedicated brain PET
NASA Astrophysics Data System (ADS)
Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.
2015-03-01
Unintentional head motion during positron emission tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The present work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99 ± 0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to track with close to millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
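The reported mean ± standard deviation of tracking error against a ground-truth trace is simply the statistics of the per-frame Euclidean residuals; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def tracking_error(video_xyz, truth_xyz):
    """Per-frame Euclidean error of video-tracked positions against a
    ground-truth (e.g. magnetic) trace; returns (mean, std) in the
    units of the input coordinates."""
    err = np.linalg.norm(np.asarray(video_xyz) - np.asarray(truth_xyz), axis=1)
    return err.mean(), err.std()
```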
Reducing Delay in Diagnosis: Multistage Recommendation Tracking.
Wandtke, Ben; Gallagher, Sarah
2017-11-01
The purpose of this study was to determine whether a multistage tracking system could improve communication between health care providers, reducing the risk of delay in diagnosis related to inconsistent communication and tracking of radiology follow-up recommendations. Unconditional recommendations for imaging follow-up of all diagnostic imaging modalities excluding mammography (n = 589) were entered into a database and tracked through a multistage tracking system for 13 months. Tracking interventions were performed for patients for whom completion of recommended follow-up imaging could not be identified 1 month after the recommendation due date. Postintervention compliance with the follow-up recommendation required examination completion or clinical closure (i.e., biopsy, limited life expectancy or death, or subspecialist referral). Baseline radiology information system checks performed 1 month after the recommendation due date revealed timely completion of 43.1% of recommended imaging studies at our institution before intervention. Three separate tracking interventions were studied, showing effectiveness between 29.0% and 57.8%. The multistage tracking system increased the examination completion rate to 70.5% (a 52% increase) and reduced the rate of unknown follow-up compliance and the associated risk of delay in diagnosis to 13.9% (a 74% decrease). Examinations completed after tracking intervention generated revenue 4.1 times greater than the labor cost. Performing sequential radiology recommendation tracking interventions can substantially reduce the rate of unknown follow-up compliance and add value to the health system. Unknown follow-up compliance is a risk factor for delay in diagnosis, a form of preventable medical error commonly identified in malpractice claims involving radiologists and office-based practitioners.
A restraint-free small animal SPECT imaging system with motion tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisenberger, A.G.; Gleason, S.S.; Goddard, J.
2005-06-01
We report on an approach toward the development of a high-resolution single photon emission computed tomography (SPECT) system to image the biodistribution of radiolabeled tracers such as Tc-99m and I-125 in unrestrained/unanesthetized mice. An infrared (IR)-based position tracking apparatus has been developed and integrated into a SPECT gantry. The tracking system is designed to measure the spatial position of a mouse's head at a rate of 10-15 frames per second with submillimeter accuracy. The high-resolution gamma imaging detectors are based on pixellated NaI(Tl) crystal scintillator arrays, position-sensitive photomultiplier tubes, and novel readout circuitry requiring fewer analog-digital converter (ADC) channels while retaining high spatial resolution. Two SPECT gamma camera detector heads based upon position-sensitive photomultiplier tubes have been built and installed onto the gantry. The IR landmark-based pose measurement and tracking system is under development to provide animal position data during a SPECT scan. The animal position and orientation data acquired by the tracking system will be used for motion correction during the tomographic image reconstruction.
Research on application of several tracking detectors in APT system
NASA Astrophysics Data System (ADS)
Liu, Zhi
2005-01-01
The APT (acquisition, pointing, and tracking) system is a key technology in free-space optical communication, and the acquisition and tracking detector is its key component. Several candidate detectors can be used in an APT system, such as the CCD, QAPD, and CMOS imager. The characteristics of these detectors, i.e., their structures and working schemes, are quite different. This paper gives a thorough comparison of the usage and working principles of the CCD and the CMOS imager, and discusses key parameters such as tracking error, noise, and power consumption. The paper concludes that the CMOS imager is a good candidate detector for the APT system in free-space optical communication.
Color Image Processing and Object Tracking System
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.
1996-01-01
This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
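The repeated grab-process-locate-store cycle can be sketched as a minimum-SSD search in a small neighbourhood around the last known position; the function name and search radius are illustrative assumptions, and the report's system supports several other tracking methods besides this one:

```python
import numpy as np

def track_sequence(frames, template, start, search=5):
    """Frame-by-frame tracking loop: search a small neighbourhood around
    the last known position for the best (minimum sum-of-squared-
    differences) template match, recording coordinates per frame."""
    h, w = template.shape
    coords = [start]
    for frame in frames:
        r0, c0 = coords[-1]
        best, best_pos = np.inf, (r0, c0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r, c = r0 + dr, c0 + dc
                patch = frame[r:r + h, c:c + w]
                if patch.shape != template.shape:
                    continue            # window fell off the frame edge
                ssd = ((patch - template) ** 2).sum()
                if ssd < best:
                    best, best_pos = ssd, (r, c)
        coords.append(best_pos)
    return coords[1:]                   # per-frame positions, seed dropped
```

In the described system the analogous coordinates would then be written to a file for later analysis.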
Active eye-tracking for an adaptive optics scanning laser ophthalmoscope
Sheehy, Christy K.; Tiruveedhula, Pavan; Sabesan, Ramkumar; Roorda, Austin
2015-01-01
We demonstrate a system that combines a tracking scanning laser ophthalmoscope (TSLO) and an adaptive optics scanning laser ophthalmoscope (AOSLO) system resulting in both optical (hardware) and digital (software) eye-tracking capabilities. The hybrid system employs the TSLO for active eye-tracking at a rate up to 960 Hz for real-time stabilization of the AOSLO system. AOSLO videos with active eye-tracking signals showed, at most, an amplitude of motion of 0.20 arcminutes for horizontal motion and 0.14 arcminutes for vertical motion. Subsequent real-time digital stabilization limited residual motion to an average of only 0.06 arcminutes (a 95% reduction). By correcting for high amplitude, low frequency drifts of the eye, the active TSLO eye-tracking system enabled the AOSLO system to capture high-resolution retinal images over a larger range of motion than previously possible with just the AOSLO imaging system alone. PMID:26203370
Adaptation of reference volumes for correlation-based digital holographic particle tracking
NASA Astrophysics Data System (ADS)
Hesseling, Christina; Peinke, Joachim; Gülker, Gerd
2018-04-01
Numerically reconstructed reference volumes tailored to particle images are used for particle position detection by means of three-dimensional correlation. After a first tracking of these positions, the experimentally recorded particle images are retrieved as a posteriori knowledge about the particle images in the system. This knowledge is then used to further refine the detected positions. A transparent description of the individual algorithm steps, including results obtained with experimental data, completes the paper. The work employs extraordinarily small particles, smaller than the pixel pitch of the camera sensor. It is the first approach known to the authors that combines numerical knowledge about particle images with particle images retrieved from the experimental system into an iterative particle tracking approach for digital holographic particle tracking velocimetry.
A hemispherical imaging and tracking (HIT) system
NASA Astrophysics Data System (ADS)
Gilbert, John A.; Fair, Sara B.; Caldwell, Scott E.; Gronner, Sally J.
1992-05-01
A hemispherical imaging and tracking (HIT) system is described which is used for an interceptor designed to acquire, select, home, and hit-to-kill reentry vehicle targets from intercontinental ballistic missiles. The system provides a sizable field of view, over which a target may be tracked and yields a unique and distinctive optical signal when the system is 'on target'. The system has an infinite depth of focus and no moving parts are required for imaging within a hemisphere. Critical alignment of the HIT system is based on the comparison of signals captured through different points on an annular window. Assuming that the perturbations are radially symmetric, errors may be eliminated during the subtraction.
25 CFR 542.13 - What are the minimum internal control standards for gaming machines?
Code of Federal Regulations, 2014 CFR
2014-04-01
.... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...
Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.
Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu
2015-05-18
We present an image quality improvement in a parallax-barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved three-dimensional image quality for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the ratio between two parameters, the pixel size and the aperture of a parallax-barrier slit, to improve the uniformity of image brightness in the viewing zone. The eye tracking, which monitors the positions of the viewer's eyes, enables the pixel-data control software to turn on only the pixels for view images near the viewer's eyes (with the other pixels turned off), thus reducing point crosstalk. Combined with eye tracking, the software delivers the correct images to the respective eyes, producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (without eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk at the viewing zone, to a level comparable to that of a commercial eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly mitigate the point-crosstalk problem, one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
NASA Astrophysics Data System (ADS)
Coffer, Amy Beth
Radiation imagers are important tools in the modern world for a wide range of applications, spanning the fundamental sciences, astrophysics, and medical imaging all the way to national security, nuclear safeguards, and non-proliferation verification. The radiation imagers studied in this thesis were gamma-ray imagers that detect emissions from radioactive materials. A gamma-ray imager's goal is to localize and map the distribution of radiation within its field of view despite complicating background radiation that can be terrestrial, astronomical, or temporally varying. Compton imaging systems are one type of gamma-ray imager that can map the radiation around the system without the use of collimation. The lack of collimation enables the imaging system to detect radiation from all directions while also increasing detection efficiency, since incident radiation is not absorbed in non-sensing materials. Each Compton-scatter event within an imaging system generates a possible cone surface in space from which the radiation could have originated. Compton imaging is limited in its reconstructed-image signal-to-background ratio because source Compton cones overlap with background-radiation Compton cones; these overlapping cones limit Compton imaging's detection sensitivity in image space. Electron-tracking Compton imaging (ETCI) can improve the detection sensitivity by measuring the Compton-scattered electron's initial trajectory. With an estimate of the scattered electron's trajectory, one can reduce the back-projected Compton cone to a cone arc, enabling faster radiation-source detection and localization. However, the ability to measure Compton-scattered electron trajectories adds another layer of complexity to an already complex methodology. For real-world imaging applications, improvements are needed in electron-track detection efficiency and in electron-track reconstruction.
One way of measuring Compton-scattered electron trajectories is with high-resolution charge-coupled devices (CCDs). The proof-of-principle CCD-based ETCI experiment demonstrated the CCDs' ability to measure Compton-scattered electron tracks as two-dimensional images. Electron-track-imaging algorithms using these images are able to determine the three-dimensional electron-track trajectory to within +/- 20 degrees. The work presented here comprises the physics simulations developed alongside the experimental proof of principle. The development of accurate physics modeling for multiple-layer CCD-based ETCI systems allows accurate prediction of future ETCI system performance. The simulations also enable quick design insights and guide the development of electron-track reconstruction methods. The physics simulation effort for this project looked closely at the accuracy of the Geant4 Monte Carlo methods for medium-energy electron transport. In older versions of Geant4 there were discrepancies between the electron-tracking experimental measurements and the simulation results. It was determined that when comparing electron dynamics at very high resolution, Geant4 simulations must be fine-tuned with careful choices of physics production cuts and electron-stepping sizes. One result of this work is a CCD Monte Carlo model that has been benchmarked against experimental findings and fully characterized for both photon and electron transport. The CCD physics model now matches experimental results to within 1 percent for scattered-electron energies below 500 keV. Following the improvements to the CCD simulations, the performance of a realistic two-layer CCD-stack system was characterized, examining the effect of thin passive layers on the CCDs' front face and back contact.
The photon interaction efficiency was calculated for the two-layer CCD stack, and we found that scattered electrons from a 662 keV source have a 90 percent probability of staying within a single active layer. This demonstrates the improved detection efficiency that is one of the strengths of implementing CCDs in an ETCI system. The CCD-stack simulations also established that electron tracks scattering from one CCD layer to another can be reconstructed. The passive regions of the CCD stack mean that these inter-layer scattered electron tracks always lose both angular and energy information. Examining the angular changes of electrons scattering between the CCD layers showed that the angular changes due to the passive regions do not depend strongly on energy; they are, for the most part, a function of the thickness of the thin back layer of the CCDs. Lastly, an approach using CCD-stack simulations was developed to reconstruct the energy lost across dead layers, and its feasibility was demonstrated. Adding back this lost energy limits the loss of energy resolution of the scatter interactions; energy-resolution losses would negatively impact the image resolution achievable by image reconstruction algorithms. Returning some of the energy to the reconstructed electron track helps retain the expected performance of the electron-track trajectory determination algorithm.
A low-cost tracked C-arm (TC-arm) upgrade system for versatile quantitative intraoperative imaging.
Amiri, Shahram; Wilson, David R; Masri, Bassam A; Anglin, Carolyn
2014-07-01
C-arm fluoroscopy is frequently used in clinical applications as a low-cost and mobile real-time qualitative assessment tool. C-arms, however, are not widely accepted for applications involving quantitative assessments, mainly due to the lack of reliable and low-cost position tracking methods, as well as adequate calibration and registration techniques. The solution suggested in this work is a tracked C-arm (TC-arm) which employs a low-cost sensor tracking module that can be retrofitted to any conventional C-arm for tracking the individual joints of the device. Registration and offline calibration methods were developed that allow accurate tracking of the gantry and determination of the exact intrinsic and extrinsic parameters of the imaging system for any acquired fluoroscopic image. The performance of the system was evaluated in comparison to an Optotrak[Formula: see text] motion tracking system and by a series of experiments on accurately built ball-bearing phantoms. Accuracies of the system were determined for 2D-3D registration, three-dimensional landmark localization, and for generating panoramic stitched views in simulated intraoperative applications. The system was able to track the center point of the gantry with an accuracy of [Formula: see text] mm or better. Accuracies of 2D-3D registrations were [Formula: see text] mm and [Formula: see text]. Three-dimensional landmark localization had an accuracy of [Formula: see text] of the length (or [Formula: see text] mm) on average, depending on whether the landmarks were located along, above, or across the table. The overall accuracies of the two-dimensional measurements conducted on stitched panoramic images of the femur and lumbar spine were 2.5 [Formula: see text] 2.0 % [Formula: see text] and [Formula: see text], respectively. The TC-arm system has the potential to achieve sophisticated quantitative fluoroscopy assessment capabilities using an existing C-arm imaging system. 
This technology may be useful to improve the quality of orthopedic surgery and interventional radiology.
CMOS imager for pointing and tracking applications
NASA Technical Reports Server (NTRS)
Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)
2006-01-01
Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.
High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin
2007-01-01
The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
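The gaze computation from the pupil and corneal-reflection centroids can be sketched as follows; the single calibration gain mapping pixel offset to degrees is a hypothetical stand-in for the system's actual per-user calibration:

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary or intensity mask."""
    total = mask.sum()
    r, c = np.indices(mask.shape)
    return np.array([(r * mask).sum() / total, (c * mask).sum() / total])

def gaze_offset(pupil_mask, glint_mask, gain_deg_per_px=0.1):
    """Pupil-minus-glint centroid vector, scaled by a (hypothetical)
    calibration gain, gives the gaze angle relative to straight-ahead,
    in degrees. The glint is the corneal reflection of the IR LED."""
    return gain_deg_per_px * (centroid(pupil_mask) - centroid(glint_mask))
```

Because the glint stays roughly fixed as the eyeball rotates while the pupil centroid moves, the difference vector cancels small head translations.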
Hamahashi, Shugo; Onami, Shuichi; Kitano, Hiroaki
2005-01-01
Background The ability to detect nuclei in embryos is essential for studying the development of multicellular organisms. A system of automated nuclear detection has already been tested on a set of four-dimensional (4D) Nomarski differential interference contrast (DIC) microscope images of Caenorhabditis elegans embryos. However, the system needed laborious hand-tuning of its parameters every time a new image set was used. It could not detect nuclei in the process of cell division, and could detect nuclei only from the two- to eight-cell stages. Results We developed a system that automates the detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. Local image entropy is used to produce regions of the images that have the image texture of the nucleus. From these regions, those that actually detect nuclei are manually selected at the first and last time points of the image set, and an object-tracking algorithm then selects regions that detect nuclei in between the first and last time points. The use of local image entropy makes the system applicable to multiple image sets without the need to change its parameter values. The use of an object-tracking algorithm enables the system to detect nuclei in the process of cell division. The system detected nuclei with high sensitivity and specificity from the one- to 24-cell stages. Conclusion A combination of local image entropy and an object-tracking algorithm enabled highly objective and productive detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. The system will facilitate genomic and computational analyses of C. elegans embryos. PMID:15910690
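The local-entropy filter at the core of the detection step can be sketched as follows. This is a minimal illustration, not the authors' implementation; the window half-size and the assumption of small integer (quantised) grey levels are mine:

```python
import math

def local_entropy(image, r, c, half):
    """Shannon entropy of the (2*half+1)^2 window centred at (r, c).

    Pixels are assumed to be quantised grey levels; window positions
    falling outside the image are ignored.
    """
    counts = {}
    n = 0
    for i in range(r - half, r + half + 1):
        for j in range(c - half, c + half + 1):
            if 0 <= i < len(image) and 0 <= j < len(image[0]):
                v = image[i][j]
                counts[v] = counts.get(v, 0) + 1
                n += 1
    return -sum((k / n) * math.log2(k / n) for k in counts.values())
```

Textured regions (such as nuclei in DIC images) yield high entropy, while smooth background yields values near zero, which is what makes thresholding this map parameter-robust across image sets.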
Real Time Target Tracking in a Phantom Using Ultrasonic Imaging
NASA Astrophysics Data System (ADS)
Xiao, X.; Corner, G.; Huang, Z.
In this paper we present a real-time ultrasound image guidance method suitable for tracking the motion of tumors. A 2D ultrasound based motion tracking system was evaluated. A robot was used to control the focused ultrasound and position it at the target that had been segmented from a real-time ultrasound video. Tracking accuracy and precision were investigated using a lesion-mimicking phantom. Experiments were conducted, and the results show sufficient efficiency of the image guidance algorithm. This work could serve as the foundation for combining real-time ultrasound image tracking with MRI thermometry monitoring in non-invasive surgery.
Real-time target tracking and locating system for UAV
NASA Astrophysics Data System (ADS)
Zhang, Chao; Tang, Linbo; Fu, Huiquan; Li, Maowen
2017-07-01
In order to achieve real-time target tracking and locating for a UAV, a reliable processing system is built on an embedded platform. Firstly, the video image is acquired in real time by the photoelectric system on the UAV. When the target information is known, the KCF tracking algorithm is adopted to track the target. Then, the servo is controlled to rotate with the target; when the target is in the center of the image, the laser ranging module is activated to obtain the distance between the UAV and the target. Finally, these measurements are combined with the UAV flight parameters obtained from the BeiDou navigation system, and the target location algorithm calculates the geodetic coordinates of the target. The results show that the system provides stable real-time tracking and positioning of targets.
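The final geolocation step, combining the laser range with flight parameters, can be sketched with a flat-earth approximation. This is an illustrative simplification, not the paper's target location algorithm; the angle conventions and the omission of gimbal/attitude corrections are assumptions:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def locate_target(lat, lon, heading_deg, depression_deg, slant_range):
    """Flat-earth estimate of a ground target's latitude/longitude.

    Assumes the camera is centred on the target, heading_deg is the
    boresight azimuth (degrees clockwise from north) and depression_deg
    is the look-down angle below horizontal. A fielded system would also
    apply gimbal and platform-attitude corrections.
    """
    ground_range = slant_range * math.cos(math.radians(depression_deg))
    north = ground_range * math.cos(math.radians(heading_deg))
    east = ground_range * math.sin(math.radians(heading_deg))
    dlat = math.degrees(north / EARTH_R)
    dlon = math.degrees(east / (EARTH_R * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon
```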
A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.
Mung, Jay; Vignon, Francois; Jain, Ameet
2011-01-01
In the past decade ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. The main limitation however is limited visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2mm sensor show robust performance in a wide range of imaging conditions and tool position/orientations. The 3D tracking accuracy was 0.36 +/- 0.16mm throughout the imaging volume of 55 degrees x 27 degrees x 150mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large scale clinical impact.
Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor
Wei, Ching-Chuan; Song, Yu-Chang; Chang, Chia-Chi; Lin, Chuan-Bi
2016-01-01
Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel perpendicularly tracks the sun, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, the points of the brightest region in the sky image are assumed to represent the location of the sun. The center of the brightest region is then taken as the sun's center and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information on the sun center is sent to the embedded processor to control two servo motors, capable of moving both horizontally and vertically, to track the sun. In comparison with existing sun tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as sunny days and shading from buildings. The practical sun tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real-time. PMID:27898002
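The brightest-region centroid described above can be sketched as follows. The margin used to delimit the region is a hypothetical choice, and a real system would also reject saturated artefacts and lens flare:

```python
def sun_center(image, margin=5):
    """Centroid of the brightest region of a grey-scale sky image.

    The region is taken as all pixels within 'margin' grey levels of
    the global maximum; this threshold choice is illustrative only.
    """
    peak = max(max(row) for row in image)
    pts = [(r, c) for r, row in enumerate(image)
                  for c, v in enumerate(row) if v >= peak - margin]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

The centroid coordinates would then be converted to azimuth/elevation errors for the two servo motors.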
3D cloud detection and tracking system for solar forecast using multiple sky imagers
Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...
2015-06-23
We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system's ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
Automatic multiple zebrafish larvae tracking in unconstrained microscopic video conditions.
Wang, Xiaoying; Cheng, Eva; Burnett, Ian S; Huang, Yushi; Wlodkowic, Donald
2017-12-14
The accurate tracking of zebrafish larvae movement is fundamental to research in many biomedical, pharmaceutical, and behavioral science applications. However, the locomotive characteristics of zebrafish larvae are significantly different from those of adult zebrafish, so existing adult zebrafish tracking systems cannot reliably track zebrafish larvae. Further, the much smaller size of larvae relative to the container makes the detection of water impurities inevitable, which further degrades the tracking of zebrafish larvae or requires very strict video imaging conditions that typically yield unreliable tracking results under realistic experimental conditions. This paper investigates the adaptation of advanced computer vision segmentation techniques and multiple object tracking algorithms to develop an accurate, efficient and reliable multiple zebrafish larvae tracking system. The proposed system has been tested on a set of single and multiple adult and larvae zebrafish videos in a wide variety of (complex) video conditions, including shadowing, labels, water bubbles and background artifacts. Compared with existing state-of-the-art and commercial multiple organism tracking systems, the proposed system improves the tracking accuracy by up to 31.57% in unconstrained video imaging conditions. To facilitate the evaluation of zebrafish segmentation and tracking research, a dataset with annotated ground truth is also presented. The software is also publicly accessible.
Desplanques, Maxime; Tagaste, Barbara; Fontana, Giulia; Pella, Andrea; Riboldi, Marco; Fattori, Giovanni; Donno, Andrea; Baroni, Guido; Orecchia, Roberto
2013-01-01
The synergy between in-room imaging and optical tracking, in co-operation with highly accurate robotic patient handling, represents a concept for patient set-up which has been implemented at CNAO (Centro Nazionale di Adroterapia Oncologica). In-room imaging is based on a double oblique X-ray projection system; optical tracking consists of the detection of the position of spherical markers placed directly on the patient's skin or on the immobilization devices. These markers are used as external fiducials during patient positioning and dose delivery. This study reports the results of a comparative analysis between in-room imaging and optical tracking data for patient positioning within the framework of high-precision particle therapy. Differences between the optical tracking system (OTS) and the imaging system (IS) were on average within the expected localization accuracy. In the first 633 fractions for head and neck (H&N) set-up procedures, the corrections applied by the IS, after patient positioning using the OTS only, were mostly sub-millimetric for the translations (0.4±1.1 mm) and sub-degree for the rotations (0.0°±0.8°). In the first 236 fractions for pelvis localizations, the amplitude of the corrections applied by the IS after preliminary optical set-up correction was moderately higher and more dispersed (translations: 1.3±2.9 mm, rotations: 0.1±0.9°). Although the indication of the OTS cannot replace information provided by in-room imaging devices and 2D-3D image registration, the reported data show that OTS preliminary correction might greatly support image-based patient set-up refinement and also provide a secondary, independent verification system for patient positioning. PMID:23824116
Tracking of Cells with a Compact Microscope Imaging System with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2007-01-01
A Compact Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
NASA Astrophysics Data System (ADS)
Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan
2017-01-01
This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples for medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integer-pixel search. Experiments were carried out, and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated great potential for tracking muscle and skin movements.
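A baseline integer-pixel search of the kind the authors accelerate can be sketched with a sum-of-squared-differences scan. This is an illustrative baseline, not the paper's optimised method; the subset size and search radius are arbitrary, and a real DIC code would refine the result to sub-pixel accuracy afterwards:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size patches."""
    return sum((ai - bi) ** 2
               for ra, rb in zip(a, b) for ai, bi in zip(ra, rb))

def integer_search(reference, deformed, top, left, size, radius):
    """Best integer-pixel displacement of a reference subset.

    Scans a (2*radius+1)^2 neighbourhood in the deformed image for the
    candidate subset with minimal SSD against the reference subset.
    """
    patch = [row[left:left + size] for row in reference[top:top + size]]
    best, best_dv = None, (0, 0)
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r0, c0 = top + dr, left + dc
            if r0 < 0 or c0 < 0 or r0 + size > len(deformed) \
                    or c0 + size > len(deformed[0]):
                continue
            cand = [row[c0:c0 + size] for row in deformed[r0:r0 + size]]
            score = ssd(patch, cand)
            if best is None or score < best:
                best, best_dv = score, (dr, dc)
    return best_dv
```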
Anser EMT: the first open-source electromagnetic tracking platform for image-guided interventions.
Jaeger, Herman Alexander; Franz, Alfred Michael; O'Donoghue, Kilian; Seitel, Alexander; Trauzettel, Fabian; Maier-Hein, Lena; Cantillon-Murphy, Pádraig
2017-06-01
Electromagnetic tracking is the gold standard for instrument tracking and navigation in the clinical setting without line of sight. Whilst clinical platforms exist for interventional bronchoscopy and neurosurgical navigation, the limited flexibility and high costs of electromagnetic tracking (EMT) systems for research investigations militate against a better understanding of the technology's characterisation and limitations. The Anser project provides an open-source implementation for EMT with particular application to image-guided interventions. This work provides implementation schematics for our previously reported EMT system, which relies on low-cost acquisition and demodulation techniques using both National Instruments and Arduino hardware alongside MATLAB support code. The system performance is objectively compared to other commercial tracking platforms using the Hummel assessment protocol. Positional accuracy of 1.14 mm and angular rotation accuracy of [Formula: see text] are reported. Like other EMT platforms, Anser is susceptible to tracking errors due to eddy current and ferromagnetic distortion. The system is compatible with commercially available EMT sensors as well as the Open Network Interface for image-guided therapy (OpenIGTLink) for easy communication with visualisation and medical imaging toolkits such as MITK and 3D Slicer. By providing an open-source platform for research investigations, we believe that novel and collaborative approaches can overcome the limitations of current EMT technology.
An ice-motion tracking system at the Alaska SAR facility
NASA Technical Reports Server (NTRS)
Kwok, Ronald; Curlander, John C.; Pang, Shirley S.; Mcconnell, Ross
1990-01-01
An operational system for extracting ice-motion information from synthetic aperture radar (SAR) imagery is being developed as part of the Alaska SAR Facility. This geophysical processing system (GPS) will derive ice-motion information by automated analysis of image sequences acquired by radars on the European ERS-1, Japanese ERS-1, and Canadian RADARSAT remote sensing satellites. The algorithm consists of a novel combination of feature-based and area-based techniques for the tracking of ice floes that undergo translation and rotation between imaging passes. The system performs automatic selection of the image pairs for input to the matching routines using an ice-motion estimator. It is designed to have a daily throughput of ten image pairs. A description is given of the GPS system, including an overview of the ice-motion-tracking algorithm, the system architecture, and the ice-motion products that will be available for distribution to geophysical data users.
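The area-based half of such a matching scheme typically scores candidate floe positions with normalised cross-correlation. The following is an illustration of that scoring function, not the GPS system's actual routine:

```python
import math

def ncc(a, b):
    """Normalised cross-correlation of two equal-size image patches.

    Returns a value in [-1, 1]; 1 means a perfect (mean- and
    contrast-invariant) match, which makes the score robust to the
    brightness changes between imaging passes.
    """
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0
```

For rotating floes, the feature-based stage would first estimate the rotation so that patches are compared in a common orientation.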
Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.
Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael
2016-11-01
To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.
TASLIMAGE System #2 Technical Equivalence Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Topper, J. D.; Stone, D. K.
In early 2017, a second TASLIMAGE system (TASL 2) was procured from Track Analysis Systems, Ltd. The new device is intended to complement the first system (TASL 1) and to provide redundancy to the original system, which was acquired in 2009. The new system functions primarily the same as the earlier system, though with different X-Y stage hardware and a USB link from the camera to the host computer, both of which contribute to a reduction in CR-39 foil imaging time. The camera and image analysis software are identical between the two systems. Neutron dose calculations are performed externally and independently of the imaging system used to collect track data, relying only on the measured recoil proton track density per cm2 for a set of known-dose CR-39 foils processed in each etch.
Franck, J.V.; Broadhead, P.S.; Skiff, E.W.
1959-07-14
A semiautomatic measuring projector particularly adapted for measurement of the coordinates of photographic images of particle tracks as produced in a bubble or cloud chamber is presented. A viewing screen aids the operator in selecting a particle track for measurement. After approximate manual alignment, an image scanning system coupled to a servo control provides automatic exact alignment of a track image with a reference point. The apparatus can follow along a track with a continuous motion while recording coordinate data at various selected points along the track. The coordinate data is recorded on punched cards for subsequent computer calculation of particle trajectory, momentum, etc.
Application of TrackEye in equine locomotion research.
Drevemo, S; Roepstorff, L; Kallings, P; Johnston, C J
1993-01-01
TrackEye is an analysis system applicable to equine biokinematic studies. It covers the whole process from the digitization of images through automatic target tracking to analysis. Key components in the system are an image workstation for processing video images and a high-resolution film-to-video scanner for 16-mm film. A recording module controls the input device and handles the capture of image sequences into a videodisc system, and a tracking module is able to follow reference markers automatically. The system offers flexible analysis, including calculations of marker displacements, distances and joint angles, velocities and accelerations. TrackEye was used to study the effects of phenylbutazone on the fetlock and carpal joint angle movements in a horse with a mild lameness caused by osteo-arthritis in the fetlock joint of a forelimb. Significant differences, most evident before treatment, were observed in the minimum fetlock and carpal joint angles when contralateral limbs were compared (p < 0.001). The minimum fetlock angle and the minimum carpal joint angle were significantly greater in the lame limb before treatment compared to those 6, 37 and 49 h after the last treatment (p < 0.001).
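The joint-angle calculation from tracked markers reduces to the angle at the middle marker of a three-marker chain. A minimal 2D sketch (TrackEye's own formulation is not given in the abstract):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at marker b, formed by markers a-b-c in 2D."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))
```

Applying this frame by frame to the fetlock markers yields the joint-angle curve whose minimum is compared between limbs.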
An MRI-Compatible Robotic System With Hybrid Tracking for MRI-Guided Prostate Intervention
Krieger, Axel; Iordachita, Iulian I.; Guion, Peter; Singh, Anurag K.; Kaushal, Aradhana; Ménard, Cynthia; Pinto, Peter A.; Camphausen, Kevin; Fichtinger, Gabor
2012-01-01
This paper reports the development, evaluation, and first clinical trials of the access to the prostate tissue (APT) II system, a scanner-independent system for magnetic resonance imaging (MRI)-guided transrectal prostate interventions. The system utilizes novel manipulator mechanics employing a steerable needle channel and a novel six degree-of-freedom hybrid tracking method, comprising passive fiducial tracking for initial registration and subsequent incremental motion measurements. Targeting accuracy of the system in prostate phantom experiments and two clinical human-subject procedures is shown to compare favorably with existing systems using passive and active tracking methods. The portable design of the APT II system, using only standard MRI image sequences and minimal custom scanner interfacing, allows the system to be easily used on different MRI scanners. PMID:22009867
Visual Target Tracking in the Presence of Unknown Observer Motion
NASA Technical Reports Server (NTRS)
Williams, Stephen; Lu, Thomas
2009-01-01
Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
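The Kalman-filtering idea used to smooth the estimated observer motion can be sketched in one dimension with a constant-velocity model. The noise parameters below are illustrative, not those of the paper:

```python
def kalman_1d(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over scalar position measurements.

    Returns the filtered position at each step. Process noise q and
    measurement noise r are hypothetical tuning values.
    """
    x = [measurements[0], 0.0]          # state: [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    out = []
    for z in measurements:
        # Predict with F = [[1, dt], [0, 1]]: P <- F P F^T + Q.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with measurement z (position observed, H = [1, 0]).
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        y = z - x[0]
        x = [x[0] + k0 * y, x[1] + k1 * y]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x[0])
    return out
```

In the tracking system described above, two such filters (or a 2D equivalent) would smooth the registration-derived camera displacement before it is subtracted from the scene motion.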
Track-Before-Declare Methods in IR Image Sequences
1992-09-01
processing methods of this type, known as track-before-declare (TBD), and sometimes by the misleading term track-before-detect, have been employed in systems...Electronic Systems, Vol. AES-11, No. 6, November 1975. 8. A. Corbeil, J. DiDomizio, Track-Before-Detect Development and Demonstration Program, Phase
Automatic respiration tracking for radiotherapy using optical 3D camera
NASA Astrophysics Data System (ADS)
Li, Tuotuo; Geng, Jason; Li, Shidong
2013-03-01
Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact or radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. Presently, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during the beam-on time without exposing the patient to additional ionizing radiation or high magnetic fields. Current O3D-guided radiotherapy systems rely on selected points or areas in the 3D surface to track surface motion. The configuration of the markers or areas may change with time, which makes it inconsistent in quantifying and interpreting the respiration patterns. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigen-space spanned by eigen-vectors).
New images can be accurately represented as a weighted summation of those eigen-vectors, which can be easily discriminated with a trained classifier. We developed algorithms and software, and integrated them with an O3D imaging system to perform the respiration tracking automatically. The resulting respiration tracking system requires no human intervention during its tracking operation. Experimental results show that our approach to respiration tracking is more accurate and robust than methods using manually selected markers, even in the presence of incomplete imaging data.
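The decomposition described above can be sketched with power iteration on the covariance of mean-centred frames: projecting each frame onto the dominant eigenvector yields a 1D respiration-like trace. This is a toy illustration for small feature vectors, not the authors' implementation:

```python
def dominant_component(frames, iters=200):
    """Project mean-centred frames onto their dominant principal axis.

    'frames' is a list of equal-length feature vectors (e.g. flattened
    surface samples). Uses power iteration on the covariance matrix, so
    it is only practical for small dimensionality.
    """
    n, d = len(frames), len(frames[0])
    mean = [sum(f[j] for f in frames) / n for j in range(d)]
    x = [[f[j] - mean[j] for j in range(d)] for f in frames]
    # Covariance matrix of the centred frames.
    cov = [[sum(xi[a] * xi[b] for xi in x) / n for b in range(d)]
           for a in range(d)]
    # Power iteration for the dominant eigenvector.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    # Score of each frame along the dominant motion pattern.
    return [sum(xi[j] * v[j] for j in range(d)) for xi in x]
```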
Usability of a real-time tracked augmented reality display system in musculoskeletal injections
NASA Astrophysics Data System (ADS)
Baum, Zachary; Ungi, Tamas; Lasso, Andras; Fichtinger, Gabor
2017-03-01
PURPOSE: Image-guided needle interventions are seldom performed with augmented reality guidance in clinical practice due to many workspace and usability restrictions. We propose a real-time optically tracked image overlay system to make image-guided musculoskeletal injections more efficient and assess its usability in a bed-side clinical environment. METHODS: An image overlay system consisting of an optically tracked viewbox, tablet computer, and semitransparent mirror allows users to navigate scanned patient volumetric images in real-time using software built on the open-source 3D Slicer application platform. A series of experiments were conducted to evaluate the latency and screen refresh rate of the system using different image resolutions. To assess the usability of the system and software, five medical professionals were asked to navigate patient images while using the overlay and completed a questionnaire to assess the system. RESULTS: In assessing the latency of the system with scanned images of varying size, screen refresh rates were approximately 5 FPS. The study showed that participants found using the image overlay system easy, and found the table-mounted system was significantly more usable and effective than the handheld system. CONCLUSION: It was determined that the system performs comparably with scanned images of varying size when assessing the latency of the system. During our usability study, participants preferred the table-mounted system over the handheld. The participants also felt that the system itself was simple to use and understand. With these results, the image overlay system shows promise for use in a clinical environment.
Geometric reconstruction using tracked ultrasound strain imaging
NASA Astrophysics Data System (ADS)
Pheiffer, Thomas S.; Simpson, Amber L.; Ondrake, Janet E.; Miga, Michael I.
2013-03-01
The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real-time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and also of reconstructing a 3D surface using the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and then tumor surfaces were reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging were between approximately 1.5 and 2.4 cm3, which was similar to the CT volumes of 1.0 to 2.3 cm3. Future work will be done to robustly characterize the reconstruction accuracy of the system.
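A first-order volume estimate from per-slice contours can be obtained by summing shoelace areas times the slice spacing. This is a minimal sketch of the idea, not the paper's surface-reconstruction method:

```python
def polygon_area(points):
    """Shoelace area of a closed 2D polygon given as (x, y) vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def contour_stack_volume(contours, slice_spacing):
    """Approximate volume from per-slice contours (area x spacing)."""
    return slice_spacing * sum(polygon_area(c) for c in contours)
```

The tracked probe pose supplies the transform that places each contour in the common coordinate space before the slices are stacked.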
NASA Astrophysics Data System (ADS)
Nafis, Christopher; Jensen, Vern; von Jako, Ron
2008-03-01
Electromagnetic (EM) tracking systems have been successfully used for Surgical Navigation in ENT, cranial, and spine applications for several years. Catheter sized micro EM sensors have also been used in tightly controlled cardiac mapping and pulmonary applications. EM systems have the benefit over optical navigation systems of not requiring a line-of-sight between devices. Ferrous metals or conductive materials that are transient within the EM working volume may impact tracking performance. Effective methods for detecting and reporting EM field distortions are generally well known. Distortion compensation can be achieved for objects that have a static spatial relationship to a tracking sensor. New commercially available micro EM tracking systems offer opportunities for expanded image-guided navigation procedures. It is important to know and understand how well these systems perform with different surgical tables and ancillary equipment. By their design and intended use, micro EM sensors will be located at the distal tip of tracked devices and therefore be in closer proximity to the tables. Our goal was to define a simple and portable process that could be used to estimate the EM tracker accuracy, and to vet a large number of popular general surgery and imaging tables that are used in the United States and abroad.
Fluoroscopic image-guided intervention system for transbronchial localization
NASA Astrophysics Data System (ADS)
Rai, Lav; Keast, Thomas M.; Wibowo, Henky; Yu, Kun-Chang; Draper, Jeffrey W.; Gibbs, Jason D.
2012-02-01
Reliable transbronchial access of peripheral lung lesions is desirable for the diagnosis and potential treatment of lung cancer. This procedure can be difficult, however, because accessory devices (e.g., needle or forceps) cannot be reliably localized while deployed. We present a fluoroscopic image-guided intervention (IGI) system for tracking such bronchoscopic accessories. Fluoroscopy, an imaging technology currently utilized by many bronchoscopists, has a fundamental shortcoming - many lung lesions are invisible in its images. Our IGI system aligns a digitally reconstructed radiograph (DRR) defined from a pre-operative computed tomography (CT) scan with live fluoroscopic images. Radiopaque accessory devices are readily apparent in fluoroscopic video, while lesions lacking a fluoroscopic signature but identifiable in the CT scan are superimposed in the scene. The IGI system processing steps consist of: (1) calibrating the fluoroscopic imaging system; (2) registering the CT anatomy with its depiction in the fluoroscopic scene; (3) optical tracking to continually update the DRR and target positions as the fluoroscope is moved about the patient. The end result is a continuous correlation of the DRR and projected targets with the anatomy depicted in the live fluoroscopic video feed. Because both targets and bronchoscopic devices are readily apparent in arbitrary fluoroscopic orientations, multiplane guidance is straightforward. The system tracks in real-time with no computational lag. We have measured a mean projected tracking accuracy of 1.0 mm in a phantom and present results from an in vivo animal study.
Multispectral image-fused head-tracked vision system (HTVS) for driving applications
NASA Astrophysics Data System (ADS)
Reese, Colin E.; Bender, Edward J.
2001-08-01
Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronics Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head tracked sensor system for tactical military ground applications. This investigation will address the advantages of head tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particles' states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters that are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
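The projection-and-weighting step at the heart of the projective particle filter can be sketched in a toy form. This is an illustration, not the authors' implementation: the pinhole camera matrix, the single-pixel dim target, and the intensity-based marginal likelihood are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed pinhole camera: focal length 100, principal point (50, 50)
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
cam = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # 3x4 projection matrix

def project(points_3d, cam):
    """Project Nx3 world points onto the image plane (Nx2 pixels)."""
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    uvw = homo @ cam.T
    return uvw[:, :2] / uvw[:, 2:3]

def likelihood(pixels, image):
    """Marginal sensor likelihood: image intensity at each projected pixel.
    The small floor keeps weights of off-target particles non-zero."""
    h, w = image.shape
    u = np.clip(np.round(pixels[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(pixels[:, 1]).astype(int), 0, h - 1)
    return 1e-6 + image[v, u]

# dim target at (0, 0, 10): a single bright pixel, no explicit detection step
image = np.zeros((100, 100))
true_pix = project(np.array([[0.0, 0.0, 10.0]]), cam)[0]
image[int(true_pix[1]), int(true_pix[0])] = 1.0

# particle cloud around a rough prior; weights from the sensor likelihood
particles = rng.normal([0.0, 0.0, 10.0], 0.5, size=(2000, 3))
weights = likelihood(project(particles, cam), image)
weights /= weights.sum()
estimate = weights @ particles    # weighted-mean 3D state estimate
```

With several cameras, the per-sensor likelihoods would be multiplied into the joint likelihood before the weight normalization; a single camera, as here, constrains the transverse coordinates but leaves depth to the prior.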
Development of three-dimensional tracking system using astigmatic lens method for microscopes
NASA Astrophysics Data System (ADS)
Kibata, Hiroki; Ishii, Katsuhiro
2017-07-01
We have developed a three-dimensional tracking system for microscopes. Using the astigmatic lens method and a CMOS image sensor, we realize rapid detection of a target position over a wide range. We demonstrate target tracking using the developed system.
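The astigmatic lens method recovers the axial position from the ellipticity of the detected spot: the lens separates the x and y focal planes, so the spot is elongated one way above focus and the other way below. A minimal sketch, with an assumed Gaussian-beam width model and arbitrary units:

```python
import numpy as np

def spot_widths(z, f_shift=1.0, w0=2.0):
    """Gaussian-beam spot widths along x and y; the astigmatic lens puts
    the two line foci at z = +f_shift and z = -f_shift (assumed model)."""
    wx = w0 * np.sqrt(1.0 + ((z - f_shift) / f_shift) ** 2)
    wy = w0 * np.sqrt(1.0 + ((z + f_shift) / f_shift) ** 2)
    return wx, wy

def astig_signal(wx, wy):
    """Normalized ellipticity: zero in focus, sign gives defocus direction."""
    return (wy - wx) / (wx + wy)

# calibration curve over the monotonic capture range between the two foci
z_cal = np.linspace(-1.0, 1.0, 201)
s_cal = astig_signal(*spot_widths(z_cal))

# measurement: spot widths at an unknown z, inverted by interpolation
wx, wy = spot_widths(0.37)
z_est = np.interp(astig_signal(wx, wy), s_cal, z_cal)
```

In practice the widths would be measured by fitting the spot on the CMOS frames, and the calibration curve would come from a z-scan of a fixed bead rather than a model.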
RESTORATION OF ATMOSPHERICALLY DEGRADED IMAGES. VOLUME 3.
Keywords: aerial cameras, lasers, illumination, tracking cameras, diffraction, photographic grain, density, densitometers, mathematical analysis, optical scanning, systems engineering, turbulence, optical properties, satellite tracking systems.
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Chen, Tao
2018-06-01
To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on the morphological image processing method and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy image during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor operates at a frequency of 50 Hz and that the torch tracks smoothly even under strong arc light and spatter interference. The tracking error is within ±0.2 mm, and the minimum distance between the laser stripe and the weld pool can be reduced to 15 mm, which fulfills actual welding requirements.
Paglieroni, David W [Pleasanton, CA; Manay, Siddharth [Livermore, CA
2011-12-20
A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness α of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter, and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection.
NASA Astrophysics Data System (ADS)
Liu, Brent; Lee, Jasper; Documet, Jorge; Guo, Bing; King, Nelson; Huang, H. K.
2006-03-01
By implementing a tracking and verification system, clinical facilities can effectively monitor workflow and heighten information security amid today's growing demand for digital imaging informatics. This paper presents the technical design and implementation experiences encountered during the development of a Location Tracking and Verification System (LTVS) for a clinical environment. LTVS integrates facial biometrics with wireless tracking so that administrators can manage and monitor patients and staff through a web-based application. Implementation challenges fall into three main areas: 1) Development and Integration, 2) Calibration and Optimization of the Wi-Fi Tracking System, and 3) Clinical Implementation. An initial prototype LTVS has been implemented within USC's Healthcare Consultation Center II Outpatient Facility, which currently has a fully digital imaging department environment with integrated HIS/RIS/PACS/VR (Voice Recognition).
3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.
Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert
2011-01-01
A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye hampers 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye has been proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single-camera system has been developed for pose estimation combined with subject-specific parameter identification. The ultrasound images are then transformed to the coordinate system of the examined eye to create a 3D volume. The accuracy of the proposed gaze tracking system and of the eye pose estimation has been validated in a set of experiments. The overall system error, including pose estimation and calibration, is 3.12 mm and 4.68 degrees.
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2010-01-01
With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video and results in significant distortions of the retina images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retina images. The Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance the image quality. Features of special interest in an image can also be selected manually and tracked by KLT. A point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
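The second-order polynomial distortion removal can be illustrated with a plain least-squares fit. The synthetic correspondences below stand in for the SIFT/KLT-tracked features, and the warp coefficients are invented for the example:

```python
import numpy as np

def poly2_design(pts):
    """Second-order polynomial basis [1, x, y, x^2, xy, y^2] per point."""
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

def fit_poly2(src, dst):
    """Least-squares fit of the quadratic warp mapping src -> dst."""
    A = poly2_design(src)
    cx = np.linalg.lstsq(A, dst[:, 0], rcond=None)[0]
    cy = np.linalg.lstsq(A, dst[:, 1], rcond=None)[0]
    return cx, cy

def apply_poly2(cx, cy, pts):
    A = poly2_design(pts)
    return np.stack([A @ cx, A @ cy], axis=1)

# synthetic correspondences over a ~280 um field, distorted by a known warp
rng = np.random.default_rng(1)
src = rng.uniform(0, 280, size=(50, 2))
true_cx = np.array([2.0, 1.01, 0.02, 1e-4, 0.0, 0.0])
true_cy = np.array([-1.0, 0.0, 0.99, 0.0, 1e-4, 2e-4])
dst = apply_poly2(true_cx, true_cy, src)

cx, cy = fit_poly2(src, dst)
corrected = apply_poly2(cx, cy, src)   # matches dst once the fit is exact
```

In the real pipeline the fitted transform would be applied to the whole frame (image resampling) before the 10-frame co-addition; here only the point mapping is shown.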
Integration of Irma tactical scene generator into directed-energy weapon system simulation
NASA Astrophysics Data System (ADS)
Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.
2003-08-01
Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.
A real-time tracking system of infrared dim and small target based on FPGA and DSP
NASA Astrophysics Data System (ADS)
Rong, Sheng-hui; Zhou, Hui-xin; Qin, Han-lin; Wang, Bing-jian; Qian, Kun
2014-11-01
A core technology in infrared warning systems is the detection and tracking of dim and small targets against complicated backgrounds. Consequently, running the detection algorithm on a hardware platform has high practical value in the military field. In this paper, a real-time detection and tracking system for infrared dim and small targets, built around an FPGA (Field Programmable Gate Array) and a DSP (Digital Signal Processor), was designed, and the corresponding detection and tracking algorithm and signal flow are elaborated. At the first stage, the FPGA obtains the infrared image sequence from the sensor, suppresses background clutter with a mathematical morphology method, and enhances the target intensity with a Laplacian of Gaussian operator. At the second stage, the DSP obtains both the original image and the filtered image from the FPGA via the video port. It then segments the target from the filtered image with an adaptive threshold segmentation method and rejects false targets with a pipeline filter. Experimental results show that our system achieves a high detection rate and a low false alarm rate.
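The detection chain (morphological background suppression, LoG enhancement, adaptive thresholding) can be sketched in software. The structuring-element size, LoG scale, and threshold factor below are illustrative assumptions, not those of the FPGA/DSP implementation:

```python
import numpy as np
from scipy import ndimage

def detect_small_targets(frame, struct_size=9, sigma=1.5, k=4.0):
    """Suppress background with a morphological top-hat (gray opening removes
    structures smaller than the element; subtracting keeps them), enhance
    blobs with a Laplacian-of-Gaussian, then threshold at mean + k*std."""
    background = ndimage.grey_opening(frame, size=(struct_size, struct_size))
    residual = frame - background
    log = -ndimage.gaussian_laplace(residual, sigma=sigma)  # peaks at blobs
    thresh = log.mean() + k * log.std()
    labels, n = ndimage.label(log > thresh)
    return ndimage.center_of_mass(log, labels, range(1, n + 1))

# synthetic frame: smooth clutter plus one dim 3x3 target at row 40, col 60
rng = np.random.default_rng(2)
clutter = ndimage.gaussian_filter(rng.normal(100.0, 5.0, (128, 128)), sigma=8)
frame = clutter.copy()
frame[39:42, 59:62] += 30.0
targets = detect_small_targets(frame)
```

The pipeline filter of the paper (temporal consistency across frames) would then discard detections that do not persist from frame to frame; that step is omitted here.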
Adaptive optics with pupil tracking for high resolution retinal imaging
Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris
2012-01-01
Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577
Litzenberg, Dale W; Gallagher, Ian; Masi, Kathryn J; Lee, Choonik; Prisciandaro, Joann I; Hamstra, Daniel A; Ritter, Timothy; Lam, Kwok L
2013-08-01
To present and characterize a measurement technique to quantify the calibration accuracy of an electromagnetic tracking system to radiation isocenter. This technique was developed as a quality assurance method for electromagnetic tracking systems used in a multi-institutional clinical hypofractionated prostate study. In this technique, the electromagnetic tracking system is calibrated to isocenter with the manufacturer's recommended technique, using laser-based alignment. A test patient is created with a transponder at isocenter whose position is measured electromagnetically. Four portal images of the transponder are taken with collimator rotations of 45°, 135°, 225°, and 315°, at each of four gantry angles (0°, 90°, 180°, 270°) using a 3×6 cm² radiation field. In each image, the center of the copper-wrapped iron core of the transponder is determined. All measurements are made relative to this transponder position to remove gantry and imager sag effects. For each of the 16 images, the 50% collimation edges are identified and used to find a ray representing the rotational axis of each collimation edge. The 16 collimator rotation rays from four gantry angles pass through and bound the radiation isocenter volume. The center of the bounded region, relative to the transponder, is calculated and then transformed to tracking system coordinates using the transponder position, allowing the tracking system's calibration offset from radiation isocenter to be found. All image analysis and calculations are automated with in-house software for user-independent accuracy. Three different tracking systems at two different sites were evaluated for this study. The magnitude of the calibration offset was always less than the manufacturer's stated accuracy of 0.2 cm using their standard clinical calibration procedure, and ranged from 0.014 to 0.175 cm. On three systems in clinical use, the magnitude of the offset was found to be 0.053±0.036, 0.121±0.023, and 0.093±0.013 cm.
The method presented here provides an independent technique to verify the calibration of an electromagnetic tracking system to radiation isocenter. The calibration accuracy of the system was better than the 0.2 cm accuracy stated by the manufacturer. However, it should not be assumed to be zero, especially for stereotactic radiation therapy treatments where planning target volume margins are very small.
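The geometric core of the method, locating the point that best fits the bundle of collimator-edge rays, reduces to a small linear least-squares problem. A sketch with synthetic rays; the offset value and the noise level (standing in for edge-detection error) are assumptions:

```python
import numpy as np

def nearest_point_to_lines(origins, dirs):
    """Least-squares point minimizing summed squared distance to 3D lines
    (origin o_i, unit direction d_i). Solves
    sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# 16 synthetic rays through a hypothetical isocenter offset (cm),
# with ~50 um noise on the measured points along each ray
rng = np.random.default_rng(3)
isocenter = np.array([0.05, -0.12, 0.03])
dirs = rng.normal(size=(16, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
origins = isocenter + rng.normal(0.0, 5.0, (16, 1)) * dirs
origins += rng.normal(0.0, 0.005, origins.shape)

est = nearest_point_to_lines(origins, dirs)
```

Averaging over 16 rays suppresses the per-ray noise, which is why the residual offset can be resolved well below the individual measurement error.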
Infrared small target tracking based on SOPC
NASA Astrophysics Data System (ADS)
Hu, Taotao; Fan, Xiang; Zhang, Yu-Jin; Cheng, Zheng-dong; Zhu, Bin
2011-01-01
The paper presents a low-cost FPGA-based solution for a real-time infrared small target tracking system. A specialized architecture is presented based on a soft RISC processor capable of running a kernel-based mean shift tracking algorithm. The mean shift tracking algorithm is realized in a NIOS II soft core with SOPC (System on a Programmable Chip) technology. Although the mean shift algorithm is widely used for target tracking, the original algorithm cannot be directly applied to infrared small target tracking, because an infrared small target carries only intensity information; an improved mean shift algorithm is therefore presented in this paper. How the target is described determines whether it can be tracked by the mean shift algorithm. Because color targets are tracked well by mean shift, a spatial component and a temporal component are introduced to describe the target in imitation of a color image representation, forming a pseudo-color image. To improve processing speed, parallel and pipeline techniques are employed. Two RAMs store images alternately using a ping-pong scheme, and a flash memory stores bulk temporary data. The experimental results show that infrared small targets are tracked stably against complicated backgrounds.
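A histogram-based mean shift tracker of the kind described can be sketched as follows. This is a simplified stand-in: a single intensity channel and a Gaussian blob replace the paper's pseudo-color representation, and the window size and bin count are assumptions:

```python
import numpy as np

BINS = 16

def histogram(patch):
    h, _ = np.histogram(patch, bins=BINS, range=(0, 256))
    return h.astype(float) / h.sum() + 1e-10

def backproject(patch, model):
    """Weight each pixel by q_u/p_u: the ratio of its intensity bin's
    frequency in the target model to that in the current window."""
    p = histogram(patch)
    idx = np.minimum((patch / (256 // BINS)).astype(int), BINS - 1)
    return model[idx] / p[idx]

def track(image, start, model, half=8, iters=20):
    """Mean shift: repeatedly move the window to the centroid of the
    back-projection weights until the shift becomes negligible."""
    cy, cx = map(float, start)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    for _ in range(iters):
        iy, ix = int(round(cy)), int(round(cx))
        w = backproject(image[iy - half:iy + half + 1,
                              ix - half:ix + half + 1], model)
        dy, dx = (w * ys).sum() / w.sum(), (w * xs).sum() / w.sum()
        cy, cx = cy + dy, cx + dx
        if np.hypot(dy, dx) < 0.2:        # converged
            break
    return cy, cx

# synthetic frames: a bright blob standing in for the small target,
# which moves from (64, 64) to (68, 69) between frames
yy, xx = np.mgrid[0:128, 0:128]
blob = lambda cy, cx: 200.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)
f0, f1 = blob(64, 64), blob(68, 69)
model = histogram(f0[60:69, 60:69])        # tight 9x9 model window
cy, cx = track(f1, (64, 64), model)
```

The paper's improvement would replace the single intensity channel with the spatial/temporal pseudo-color channels, giving the histogram more discriminating power for featureless small targets.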
NASA Astrophysics Data System (ADS)
Yang, Xiaochen; Clements, Logan W.; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C.; Dawant, Benoit M.; Miga, Michael I.
2017-03-01
Intra-operative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery (IGS) navigation systems in neurosurgery. A computational model driven by sparse data has been used as a cost-effective method to compensate for cortical surface and volumetric displacements. Stereoscopic microscopes and laser range scanners (LRS) are the two most investigated sparse intra-operative imaging modalities for driving these systems. However, integrating these devices in the clinical workflow to facilitate development and evaluation requires developing systems that easily permit data acquisition and processing. In this work we present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct 3D point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body that facilitates recording of the microscope position via a commercial optical tracking system as it moves during the procedure. Point clouds, reconstructed under different microscope positions, are registered into the same space in order to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. Our experimental results report approximately 2 mm average displacement error compared with the optical tracking system. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to LRS to collect sufficient intraoperative information for brain shift correction.
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Lin, Melissa; Kim, Younsu; Paredes, Mateo; Kannan, Karun; Patel, Nisu; Moghekar, Abhay; Durr, Nicholas J.; Boctor, Emad M.
2017-03-01
Lumbar punctures (LPs) are interventional procedures used to collect cerebrospinal fluid (CSF), a bodily fluid needed to diagnose central nervous system disorders. Most lumbar punctures are performed blindly without imaging guidance. Because the target window is small, physicians can only accurately palpate the appropriate space about 30% of the time and perform a successful procedure after an average of three attempts. Although various forms of imaging-based guidance systems have been developed to aid in this procedure, these systems complicate the procedure by including independent image modalities and requiring image-to-needle registration to guide the needle insertion. Here, we propose a simple and direct needle insertion platform utilizing a single ultrasound element within the needle through dynamic sensing and imaging. The needle-shaped ultrasound transducer can not only sense the distance between the tip and a potential obstacle such as bone, but also locate structures visually by combining transducer location tracking with a back-projection-based tracked synthetic aperture beamforming algorithm. The concept of the system was first validated through simulation, which revealed its tolerance to realistic errors. Then, the initial prototype of the single-element transducer was built into a 14G needle and mounted on a holster equipped with a rotation tracking encoder. We experimentally evaluated the system using a metal wire phantom mimicking highly reflective bone structures and an actual spine bone phantom, with both controlled motion and freehand scanning. An ultrasound image corresponding to the model phantom structure was reconstructed using the beamforming algorithm, and the resolution was improved compared to the image without beamforming. These results demonstrate that the proposed system has the potential to be used as an ultrasound imaging system for lumbar puncture procedures.
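The back-projection (delay-and-sum) synthetic-aperture idea can be illustrated with a 2-D toy: a single element swept along a line (as the tracking would report), a wire target, and idealized delta-function echoes. All geometry and sampling values here are assumptions for the sketch:

```python
import numpy as np

C = 1.54e6   # speed of sound in mm/s (1540 m/s)
FS = 40e6    # A-line sampling rate in Hz (assumed)

def rt_sample(d):
    """Sample index of the round-trip delay for a one-way distance d (mm)."""
    return int(round(2.0 * d / C * FS))

def synthesize_aline(elem, scatterers, n=4000):
    """Idealized pulse-echo A-line: unit delta returns at round-trip delays."""
    line = np.zeros(n)
    for s in scatterers:
        i = rt_sample(np.hypot(s[0] - elem[0], s[1] - elem[1]))
        if i < n:
            line[i] += 1.0
    return line

def backproject(alines, elems, grid_y, grid_z):
    """Tracked synthetic-aperture delay-and-sum: every pixel accumulates the
    sample at its round-trip delay from each tracked element position."""
    img = np.zeros((len(grid_z), len(grid_y)))
    for line, e in zip(alines, elems):
        for iz, z in enumerate(grid_z):
            for iy, y in enumerate(grid_y):
                idx = rt_sample(np.hypot(y - e[0], z - e[1]))
                if idx < len(line):
                    img[iz, iy] += line[idx]
    return img

# single element swept along z = 0, wire target at (0, 20) mm
elems = np.array([[x, 0.0] for x in np.linspace(-10, 10, 21)])
target = np.array([[0.0, 20.0]])
alines = [synthesize_aline(e, target) for e in elems]

grid_y = np.linspace(-5, 5, 21)
grid_z = np.linspace(15, 25, 21)
img = backproject(alines, elems, grid_y, grid_z)
peak = np.unravel_index(img.argmax(), img.shape)   # brightest pixel
```

Only the pixel coincident with the wire accumulates coherently from every element position, which is the mechanism behind the resolution gain the abstract reports over the unbeamformed image.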
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong
2014-01-01
For building a new iris template, this paper proposes a strategy to fuse different portions of iris based on machine learning method to evaluate local quality of iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multitracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and corresponding weighted coefficients of different tracks. Finally, all tracks' information is fused according to the weights of different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that partial iris image cannot completely replace the entire iris image for iris recognition system in several ways. (2) The proposed quality evaluation algorithm is a self-adaptive algorithm, and it can automatically optimize the parameters according to iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of iris recognition system.
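The PSO-weighted fusion strategy can be sketched with a minimal particle swarm that weights per-track match scores to maximize genuine/impostor separation. The d' fitness, the swarm constants, and the synthetic score distributions below are assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(4)

def fuse(weights, scores):
    """Fuse per-track match scores with a normalized non-negative weight vector."""
    w = np.abs(weights)
    return scores @ (w / w.sum())

def fitness(weights, genuine, impostor):
    """Separation (d') between fused genuine and impostor score distributions."""
    g, i = fuse(weights, genuine), fuse(weights, impostor)
    return (g.mean() - i.mean()) / np.sqrt(0.5 * (g.var() + i.var()) + 1e-12)

def pso(genuine, impostor, n=30, iters=60, dim=4):
    """Minimal particle swarm (inertia 0.7, cognitive/social factors 1.5)."""
    pos = rng.uniform(0.0, 1.0, (n, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pfit = np.array([fitness(p, genuine, impostor) for p in pos])
    gbest = pbest[pfit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([fitness(p, genuine, impostor) for p in pos])
        better = fit > pfit
        pbest[better], pfit[better] = pos[better], fit[better]
        gbest = pbest[pfit.argmax()].copy()
    w = np.abs(gbest)
    return w / w.sum()

# four tracks: tracks 0 and 1 are informative, tracks 2 and 3 are mostly noise
genuine = np.column_stack([rng.normal(0.80, 0.05, 300), rng.normal(0.75, 0.05, 300),
                           rng.normal(0.50, 0.20, 300), rng.normal(0.50, 0.20, 300)])
impostor = np.column_stack([rng.normal(0.40, 0.05, 300), rng.normal(0.45, 0.05, 300),
                            rng.normal(0.50, 0.20, 300), rng.normal(0.50, 0.20, 300)])
weights = pso(genuine, impostor)
```

The swarm pushes weight onto the discriminative inner tracks and away from the noisy ones, which is the behavior the abstract reports for its quality-driven fusion.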
Single nanoparticle tracking spectroscopic microscope
Yang, Haw [Moraga, CA; Cang, Hu [Berkeley, CA; Xu, Cangshan [Berkeley, CA; Wong, Chung M [San Gabriel, CA
2011-07-19
A system that can maintain and track the position of a single nanoparticle in three dimensions for a prolonged period has been disclosed. The system allows for continuously imaging the particle to observe any interactions it may have. The system also enables the acquisition of real-time sequential spectroscopic information from the particle. The apparatus holds great promise in performing single molecule spectroscopy and imaging on a non-stationary target.
An automatic analyzer of solid state nuclear track detectors using an optic RAM as image sensor
NASA Astrophysics Data System (ADS)
Staderini, Enrico Maria; Castellano, Alfredo
1986-02-01
An optic RAM is a conventional digital random-access read/write dynamic memory device featuring a quartz-windowed package and memory cells regularly ordered on the chip. Such a device can be used as an image sensor because each cell retains the data stored in it for a time that depends on the intensity of the light incident on the cell itself. The authors have developed a system which uses an optic RAM to acquire and digitize images from electrochemically etched CR39 solid state nuclear track detectors (SSNTDs) at track densities of up to 5000 cm⁻². On the digital image so obtained, a microprocessor with appropriate software performs image analysis, filtering, track counting, and evaluation.
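The track-counting step on a digitized binary frame amounts to counting connected bright regions. A minimal sketch of such a counter (4-connected flood fill, pure Python; the abstract does not specify the actual algorithm used):

```python
def count_tracks(binary):
    """Count 4-connected bright regions (etched tracks) in a binary
    image, as a stand-in for the counting step run on the digitized
    optic-RAM frame."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1                      # new, unvisited blob
                stack = [(r, c)]
                seen[r][c] = True
                while stack:                    # flood-fill the blob
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```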
NASA Astrophysics Data System (ADS)
Clements, Logan W.; Collins, Jarrod A.; Wu, Yifei; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.
2015-03-01
Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface based metrics and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer where the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.
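The validation metric described above, the mean closest point distance between iUS contour points and the preoperative model, can be computed with a brute-force nearest-neighbour search. An illustrative numpy sketch (function name assumed, not from the paper):

```python
import numpy as np

def mean_closest_point_distance(contour_pts, model_pts):
    """Mean distance from each contour point (from tracked iUS) to its
    nearest point on the preoperative model (brute-force search)."""
    contour = np.asarray(contour_pts, float)   # (N, 3)
    model = np.asarray(model_pts, float)       # (M, 3)
    # full pairwise distance matrix, then the closest model point
    # for every contour point
    d = np.linalg.norm(contour[:, None, :] - model[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

For large models a k-d tree would replace the pairwise matrix, but the metric itself is unchanged.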
Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.
Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru
2013-01-01
Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy. A simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform based on a noise property of the imaging system for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was addressed, and a conversion function was created. The pixel values were converted into a map of quantum number using the conversion function, and the map was then input into the random number generator to simulate image noise. Simulation images were provided at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method could be used to determine the relationship between image noise and tracking accuracy. The results indicated that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy.
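The noise-simulation pipeline, pixel values converted to a quantum map, scaled to the target incident quantum number, Poisson-resampled, and converted back, can be sketched as below. The conversion functions here are placeholders for the measured conversion function from the NPS analysis; the structure, not the calibration, is what the sketch shows.

```python
import numpy as np

def simulate_noisy_frame(pixel_values, to_quanta, from_quanta, scale, rng):
    """Re-noise a flat-field frame at a chosen exposure level.

    to_quanta / from_quanta: stand-ins for the measured conversion
    between pixel value and incident quantum number.
    scale: relative incident quantum number (e.g. 0.25 = quarter dose).
    """
    quanta = to_quanta(np.asarray(pixel_values, float)) * scale
    noisy = rng.poisson(quanta)          # quantum (Poisson) noise
    return from_quanta(noisy / scale)    # back to pixel values
```

With an identity conversion, lowering `scale` leaves the mean unchanged but raises the noise, which is exactly the behaviour the tracking-error study exploits.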
Vision-based object detection and recognition system for intelligent vehicles
NASA Astrophysics Data System (ADS)
Ran, Bin; Liu, Henry X.; Martono, Wilfung
1999-01-01
Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for such a system, as its deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects using a sequence of color images taken from a moving vehicle. The entire system consists of two subsystems: the vehicle detection and recognition subsystem and the traffic sign detection and recognition subsystem. Both subsystems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The single-frame analysis is performed once every ten full-size images. The information model obtains information related to the object, such as the time to collision for an object vehicle and the relative distance from the traffic signs. Experimental results demonstrated robust and accurate real-time object detection and recognition over thousands of image frames.
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
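Once the virtual left and right views have each located the target, depth recovery reduces to triangulating two viewing rays. A minimal midpoint-triangulation sketch (the paper's calibration and gaze-control machinery is not reproduced; rays are assumed non-parallel):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: the point halfway between the closest
    points of two viewing rays (camera centre c, direction d), e.g. the
    left/right virtual pan-tilt views of the catadioptric system."""
    d1 = np.asarray(d1, float); d1 = d1 / np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 = d2 / np.linalg.norm(d2)
    c1 = np.asarray(c1, float); c2 = np.asarray(c2, float)
    # solve for ray parameters t1, t2 minimizing |c1 + t1 d1 - c2 - t2 d2|
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2
```

The stereo baseline here corresponds to the parallax between the two mirror directions; a larger baseline gives better depth resolution, as the abstract notes.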
NASA Astrophysics Data System (ADS)
Ehrhorn, B.; Azari, D.
Low Earth Orbit (LEO) and orbital debris tracking have become considerably important with regard to Space Situational Awareness (SSA). This paper discusses the capabilities of autonomous LEO and orbital debris tracking systems using commercially available (mid-aperture, 20-24 inch) telescopes, tracking gimbals, and CCD imagers. RC Optical Systems has been developing autonomous satellite trackers that allow for unattended acquisition, imaging, and orbit determination of LEOs using low-cost COTS equipment. The test setup from which we are gathering data consists of an RC Optical Systems Professional Series elevation-over-azimuth gimbal with field de-rotation, an RC Optical Systems 20 inch Ritchey-Chretien telescope coupled to an e2v CCD42-40 CCD array, and a 77 mm f/4 tracking lens coupled to a KAF-0402ME CCD array. Central to the success of LEO acquisition and open-loop tracking is accurate modeling of gimbal and telescope misalignments and flexures. Using pro-TPoint and a simple automated mapping routine, we have modeled our primary telescope to achieve pointing and tracking accuracies within a population standard deviation of 1.3 arcsec (1.1 arcsec RMS). Once modeled, a mobile system can easily and quickly be calibrated to the sky using a simple 6-10 star map to solve for the axis tilt and collimation coefficients. Acquisition of LEO satellites is accomplished through the use of a wide-field imager. Using the 77 mm f/4 lens and a 765 × 510 CCD array with 9 μm pixels yields a 1.28 × 0.85 degree field of view in our test setup. Accurate boresight within the acquisition array is maintained throughout the full range of motion through differential TPoint modeling of the main and acquisition imagers. Satellite identification is accomplished by detecting a stationary centroid as a point source and differentiating it from the background of streaked stars in a single frame.
We found a 100% detection rate for LEO satellites with radar cross sections (RCS) of > 0.5 m² within the acquisition array, and approximately 90% within 0.25 degrees of center. Tests of open-loop tracking revealed that the vast majority of satellites remain within the main detector area of 0.19 × 0.19 degrees after initial centering. Once acquired, the satellite is centered within the main imager via automated adjustment of the epoch and inclination using a non-linear least-squares fit. Thereafter, the real-time satellite position is sequentially determined and recorded using the main imaging array. The SGP4 Keplerian elements are determined in real time using non-linear least-squares regression. The tracking propagator is periodically updated to reflect the solved Keplerian elements in order to keep the satellite position near the image center. These processes are accomplished without the need for user intervention. Unattended, fully autonomous LEO satellite tracking and orbit determination simply requires scheduling of appropriate targets and scripted command of the tracking system.
NASA Astrophysics Data System (ADS)
Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki
2015-05-01
The bone suppression technique, based on advanced image processing, can suppress the conspicuity of bones on chest radiographs, creating soft tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules, and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that in conventional fluoroscopic images. The tracking errors were halved in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential means of measuring the respiratory displacement of the target. This paper was presented at RSNA 2013 and was carried out at Kanazawa University, JAPAN.
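Template matching of the kind used here (and in several of the tracking studies above) is an exhaustive normalized cross-correlation search over the frame. An illustrative numpy sketch, not the commercial software's implementation:

```python
import numpy as np

def track_template(frame, template):
    """Return the (row, col) of the best normalized-cross-correlation
    match of `template` in `frame` (exhaustive search)."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = tnorm * np.sqrt((wz * wz).sum())
            if denom == 0:
                continue                      # flat window: no texture
            score = (wz * t).sum() / denom    # NCC in [-1, 1]
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Production trackers restrict the search to a window around the previous position and use FFT-based correlation, but the matching criterion is the same.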
Untangling cell tracks: Quantifying cell migration by time lapse image data analysis.
Svensson, Carl-Magnus; Medyukhina, Anna; Belyaev, Ivan; Al-Zaben, Naim; Figge, Marc Thilo
2018-03-01
Automated microscopy has given researchers access to great amounts of live cell imaging data from in vitro and in vivo experiments. Much focus has been put on extracting cell tracks from such data using a plethora of segmentation and tracking algorithms, but further analysis is normally required to draw biologically relevant conclusions, for example whether the migration is directed or not, or whether the population has homogeneous or heterogeneous migration patterns. This review focuses on the analysis of cell migration data that are extracted from time lapse images. We discuss a range of measures and models used to analyze cell tracks independent of the biological system or the way the tracks were obtained. For single-cell migration, we focus on measures and models, giving examples of biological systems where they have been applied, for example migration of bacteria, fibroblasts, and immune cells. For collective migration, we describe the model systems of wound healing, neural crest migration, and Drosophila gastrulation, and discuss methods for analyzing cell migration within these systems. We also discuss the role of the extracellular matrix and the resulting differences between track analysis in vitro and in vivo. Besides methods and measures, we put special focus on the need for openly available data and code, as well as on the lack of a common vocabulary in cell track analysis. © 2017 International Society for Advancement of Cytometry.
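Two of the standard track measures such reviews cover, the directionality (confinement) ratio and the mean squared displacement, are simple to compute from a list of positions. A minimal sketch:

```python
import numpy as np

def directionality_ratio(track):
    """Net displacement divided by total path length: 1.0 for perfectly
    directed migration, near 0 for confined random motion."""
    track = np.asarray(track, float)
    steps = np.diff(track, axis=0)
    path = np.linalg.norm(steps, axis=1).sum()
    net = np.linalg.norm(track[-1] - track[0])
    return float(net / path) if path > 0 else 0.0

def mean_squared_displacement(track, lag):
    """MSD at a given time lag, averaged over all start points;
    its scaling with lag distinguishes directed, diffusive, and
    confined migration."""
    track = np.asarray(track, float)
    d = track[lag:] - track[:-lag]
    return float((d ** 2).sum(axis=1).mean())
```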
Electromagnetic tracking for abdominal interventions in computer aided surgery
Zhang, Hui; Banovac, Filip; Lin, Ralph; Glossop, Neil; Wood, Bradford J.; Lindisch, David; Levy, Elliot; Cleary, Kevin
2014-01-01
Electromagnetic tracking has great potential for assisting physicians in precision placement of instruments during minimally invasive interventions in the abdomen, since electromagnetic tracking is not limited by the line-of-sight restrictions of optical tracking. A new generation of electromagnetic tracking has recently become available, with sensors small enough to be included in the tips of instruments. To fully exploit the potential of this technology, our research group has been developing a computer aided, image-guided system that uses electromagnetic tracking for visualization of the internal anatomy during abdominal interventions. As registration is a critical component in developing an accurate image-guided system, we present three registration techniques: 1) enhanced paired-point registration (time-stamp match registration and dynamic registration); 2) orientation-based registration; and 3) needle shape-based registration. Respiration compensation is another important issue, particularly in the abdomen, where respiratory motion can make precise targeting difficult. To address this problem, we propose reference tracking and affine transformation methods. Finally, we present our prototype navigation system, which integrates the registration, segmentation, path-planning and navigation functions to provide real-time image guidance in the clinical environment. The methods presented here have been tested with a respiratory phantom specially designed by our group and in swine animal studies under approved protocols. Based on these tests, we conclude that our system can provide quick and accurate localization of tracked instruments in abdominal interventions, and that it offers a user friendly display for the physician. PMID:16829506
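The paired-point registration underlying such image-guided systems is classically solved in closed form with the Kabsch/SVD method. This sketch shows that core step only; the paper's time-stamp matching, dynamic registration, and respiration compensation are not reproduced.

```python
import numpy as np

def paired_point_registration(moving, fixed):
    """Least-squares rigid transform (R, t) mapping paired fiducial
    points `moving` onto `fixed` (Kabsch/SVD method)."""
    moving = np.asarray(moving, float)
    fixed = np.asarray(fixed, float)
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    # cross-covariance of the centred point sets
    h = (moving - mc).T @ (fixed - fc)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = fc - r @ mc
    return r, t
```

After registration, `fixed ≈ moving @ R.T + t`; the residual (fiducial registration error) is the usual first accuracy check.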
NASA Astrophysics Data System (ADS)
Han, Bin
This dissertation describes a research project to test the clinical utility of a time-resolved proton radiographic (TRPR) imaging system by performing comprehensive Monte Carlo simulations of a physical device coupled with realistic lung cancer patient anatomy defined by 4DCT for proton therapy. A time-resolved proton radiographic imaging system was modeled through Monte Carlo simulations. A particle-tracking feature was employed to evaluate the performance of the proton imaging system, especially its ability to visualize and quantify proton range variations during respiration. The Most Likely Path (MLP) algorithm was developed to approximate the multiple Coulomb scattering paths of protons for the purpose of image reconstruction. A spatial resolution of ~1 mm and a range resolution of 1.3% of the total range were achieved using the MLP algorithm. Time-resolved proton radiographs of five patient cases were reconstructed to track tumor motion and to calculate water equivalent length (WEL) variations. By comparison with direct 4DCT measurement, the accuracy of tumor tracking was found to be better than 2 mm in the five patient cases. Utilizing the tumor tracking information to reduce margins to the planning target volume, a gated treatment plan was compared with an ungated treatment plan. The equivalent uniform dose (EUD) and the normal tissue complication probability (NTCP) were used to quantify the gain in treatment quality. The EUD of the organs at risk (OARs) was found to be reduced by up to 11%, and the corresponding NTCP by up to 16.5%. These results suggest that, with image guidance by proton radiography, dose to OARs can be reduced and the corresponding NTCPs can be significantly reduced. The study concludes that the proton imaging system can accurately track the motion of the tumor and detect WEL variations, leading to potential gains in using image-guided proton radiography for lung cancer treatments.
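The generalized EUD used in the plan comparison is a power-law average of the dose distribution, EUD = (Σᵢ vᵢ Dᵢᵃ)^(1/a), with vᵢ the fractional volumes of the dose bins. A minimal sketch (the parameter a is tissue-specific; values here are illustrative):

```python
def equivalent_uniform_dose(doses, volumes, a):
    """Generalized EUD = (sum_i v_i * D_i**a) ** (1/a).

    doses:   dose per DVH bin (Gy).
    volumes: volume per bin; normalized here to fractions.
    a:       tissue-specific exponent (large positive for serial
             organs, negative for tumors).
    """
    total = sum(volumes)
    return sum((v / total) * d ** a
               for d, v in zip(doses, volumes)) ** (1.0 / a)
```

A uniform dose returns itself for any a, which is the defining property of the measure.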
Image-based systems for space surveillance: from images to collision avoidance
NASA Astrophysics Data System (ADS)
Pyanet, Marine; Martin, Bernard; Fau, Nicolas; Vial, Sophie; Chalte, Chantal; Beraud, Pascal; Fuss, Philippe; Le Goff, Roland
2011-11-01
In many spatial systems, image is a core technology to fulfil the mission requirements. Depending on the application, the needs and the constraints are different and imaging systems can offer a large variety of configurations in terms of wavelength, resolution, field-of-view, focal length or sensitivity. Adequate image processing algorithms allow the extraction of the needed information and the interpretation of images. As a prime contractor for many major civil or military projects, Astrium ST is very involved in the proposition, development and realization of new image-based techniques and systems for space-related purposes. Among the different applications, space surveillance is a major stake for the future of space transportation. Indeed, studies show that the number of debris in orbit is exponentially growing and the already existing population of small and medium debris is a concrete threat to operational satellites. This paper presents Astrium ST activities regarding space surveillance for space situational awareness (SSA) and space traffic management (STM). Among other possible SSA architectures, the relevance of a ground-based optical station network is investigated. The objective is to detect and track space debris and maintain an exhaustive and accurate catalogue up-to-date in order to assess collision risk for satellites and space vehicles. The system is composed of different type of optical stations dedicated to specific functions (survey, passive tracking, active tracking), distributed around the globe. To support these investigations, two in-house operational breadboards were implemented and are operated for survey and tracking purposes. This paper focuses on Astrium ST end-to-end optical-based survey concept. 
For the detection of new debris, a network of wide field-of-view survey stations is considered: these stations are able to detect small objects, and the associated image processing (detection and tracking) allows a preliminary restitution of their orbits.
Development of Automated Tracking System with Active Cameras for Figure Skating
NASA Astrophysics Data System (ADS)
Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi
This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
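The camera-control step, panning and tilting so the skater region stays centered, is in essence a proportional controller on the pixel error. A hypothetical sketch (the gain and field-of-view values are assumptions, not the paper's):

```python
def pan_tilt_step(target_px, image_size, fov_deg, gain=0.8):
    """Proportional pan/tilt correction (degrees) driving the tracked
    centroid toward the image centre.

    target_px:  (x, y) centroid of the skater region in pixels.
    image_size: (width, height) in pixels.
    fov_deg:    (horizontal, vertical) field of view in degrees.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    deg_per_px_x = fov_deg[0] / image_size[0]
    deg_per_px_y = fov_deg[1] / image_size[1]
    pan = gain * (target_px[0] - cx) * deg_per_px_x
    tilt = gain * (target_px[1] - cy) * deg_per_px_y
    return pan, tilt
```

A gain below 1 damps oscillation when the target moves between frames; zoom control would similarly regulate the skater region's pixel area.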
Design and preliminary accuracy studies of an MRI-guided transrectal prostate intervention system.
Krieger, Axel; Csoma, Csaba; Iordachital, Iulian I; Guion, Peter; Singh, Anurag K; Fichtinger, Gabor; Whitcomb, Louis L
2007-01-01
This paper reports a novel system for magnetic resonance imaging (MRI) guided transrectal prostate interventions, such as needle biopsy, fiducial marker placement, and therapy delivery. The system utilizes a hybrid tracking method, comprised of passive fiducial tracking for initial registration and subsequent incremental motion measurement along the degrees of freedom using fiber-optical encoders and mechanical scales. Targeting accuracy of the system is evaluated in prostate phantom experiments. Achieved targeting accuracy and procedure times were found to compare favorably with existing systems using passive and active tracking methods. Moreover, the portable design of the system using only standard MRI image sequences and minimal custom scanner interfacing allows the system to be easily used on different MRI scanners.
NASA Astrophysics Data System (ADS)
Ma, Kevin; Liu, Joseph; Zhang, Xuejun; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent
2016-03-01
We have designed and developed a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification results stored in DICOM-SR format. The web-based system aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient treatments and data analysis. The system needs to quantify lesion volumes, identify and register lesion locations to track shifts in volume and quantity of lesions in a longitudinal study. In order to perform lesion registration, we have developed a brain warping and normalizing methodology using Statistical Parametric Mapping (SPM) MATLAB toolkit for brain MRI. Patients' brain MR images are processed via SPM's normalization processes, and the brain images are analyzed and warped according to the tissue probability map. Lesion identification and contouring are completed by neuroradiologists, and lesion volume quantification is completed by the eFolder's CAD program. Lesion comparison results in longitudinal studies show key growth and active regions. The results display successful lesion registration and tracking over a longitudinal study. Lesion change results are graphically represented in the web-based user interface, and users are able to correlate patient progress and changes in the MRI images. The completed lesion and disease tracking tool would enable the eFolder to provide complete patient profiles, improve the efficiency of patient care, and perform comprehensive data analysis through an integrated imaging informatics system.
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments
NASA Technical Reports Server (NTRS)
Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi
1994-01-01
Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
A framework for activity detection in wide-area motion imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D
2009-01-01
Wide-area persistent imaging systems are becoming increasingly cost effective, and large areas of the earth can now be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years there has been significant progress on stabilization, moving object detection, and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, tracking performance at this scale is unreliable, and the average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e. tracking vehicles from their points of origin to their final destinations. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g. multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.
Remote gaze tracking system on a large display.
Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun
2013-10-07
We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
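Focus scores like the one computed on the NVC eye image are typically gradient-energy measures: a well-focused image has stronger local gradients than a defocused one. An illustrative sketch (the paper does not specify its exact focus measure):

```python
import numpy as np

def focus_score(eye_image):
    """Gradient-energy focus measure: larger on well-focused images,
    smaller on blurred ones."""
    img = np.asarray(eye_image, float)
    gx = np.diff(img, axis=1)   # horizontal finite differences
    gy = np.diff(img, axis=0)   # vertical finite differences
    return float((gx ** 2).mean() + (gy ** 2).mean())
```

The auto-focus loop would adjust the NVC lens to maximize this score, seeded by the distance estimate from the user's facial width in the WVC image.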
Real-time image processing for particle tracking velocimetry
NASA Astrophysics Data System (ADS)
Kreizer, Mark; Ratner, David; Liberzon, Alex
2010-01-01
We present a novel high-speed particle tracking velocimetry (PTV) experimental system. Its novelty lies in FPGA-based, real-time image processing "on camera". Instead of full images, the camera transfers to the computer, over a network card, only the relevant information about the identified flow tracers. The system is therefore ideal for remote particle tracking in research and industrial applications, since the camera can be controlled and data can be transferred over any high-bandwidth network. We present the hardware and the open-source software aspects of the PTV experiments. The tracking results of the new experimental system have been compared to flow visualization and particle image velocimetry (PIV) measurements. The canonical flow in the central cross section of a cubic cavity (1:1:1 aspect ratio) in our lid-driven cavity apparatus is used for validation purposes. The downstream secondary eddy (DSE) is the sensitive portion of this flow, and its size was measured with increasing Reynolds number (via increasing belt velocity). The size of the DSE estimated from flow visualization, PIV, and compressed PTV is shown to agree within the experimental uncertainty of the methods applied.
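The on-camera reduction from a full frame to tracer information amounts to thresholding, grouping bright pixels, and reporting intensity-weighted centroids. A host-side numpy sketch of that reduction (the FPGA pipeline itself is not reproduced):

```python
import numpy as np

def particle_centroids(image, threshold):
    """Threshold a frame and return the intensity-weighted centroid
    (row, col) of every 4-connected bright blob, i.e. the compact
    tracer data an on-camera pipeline would transmit."""
    img = np.asarray(image, float)
    mask = img > threshold
    labels = np.zeros(img.shape, int)
    current = 0
    for seed in zip(*np.nonzero(mask)):      # label blobs by flood fill
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        labels[seed] = current
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] \
                   and mask[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = current
                    stack.append((ny, nx))
    cents = []
    for k in range(1, current + 1):          # weighted centroid per blob
        ys, xs = np.nonzero(labels == k)
        w = img[ys, xs]
        cents.append((float((ys * w).sum() / w.sum()),
                      float((xs * w).sum() / w.sum())))
    return cents
```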
Mobile Aerial Tracking and Imaging System (MATRIS) for Aeronautical Research
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Blanchard, R. C.; Miller, G. M.
2004-01-01
A mobile, rapidly deployable ground-based system to track and image targets of aeronautical interest has been developed. Targets include reentering reusable launch vehicles (RLVs) as well as atmospheric and transatmospheric vehicles. The optics were designed to image targets in the visible and infrared wavelengths. To minimize acquisition cost and development time, the system uses commercially available hardware and software where possible. The conception and initial funding of this system originated with a study of ground-based imaging of global aerothermal characteristics of RLV configurations. During that study NASA teamed with the Missile Defense Agency/Innovative Science and Technology Experimentation Facility (MDA/ISTEF) to test techniques and analysis on two Space Shuttle flights.
Comparison of three optical tracking systems in a complex navigation scenario.
Rudolph, Tobias; Ebert, Lars; Kowal, Jens
2010-01-01
Three-dimensional rotational X-ray imaging with the SIREMOBIL Iso-C3D (Siemens AG, Medical Solutions, Erlangen, Germany) has become a well-established intra-operative imaging modality. In combination with a tracking system, the Iso-C3D provides inherently registered image volumes ready for direct navigation. This is achieved by means of a pre-calibration procedure. The aim of this study was to investigate the influence of the tracking system used on the overall navigation accuracy of direct Iso-C3D navigation. Three models of tracking system were used in the study: Two Optotrak 3020s, a Polaris P4 and a Polaris Spectra system, with both Polaris systems being in the passive operation mode. The evaluation was carried out at two different sites using two Iso-C3D devices. To measure the navigation accuracy, a number of phantom experiments were conducted using an acrylic phantom equipped with titanium spheres. After scanning, a special pointer was used to pinpoint these markers. The difference between the digitized and navigated positions served as the accuracy measure. Up to 20 phantom scans were performed for each tracking system. The average accuracy measured was 0.86 mm and 0.96 mm for the two Optotrak 3020 systems, 1.15 mm for the Polaris P4, and 1.04 mm for the Polaris Spectra system. For the Polaris systems a higher maximal error was found, but all three systems yielded similar minimal errors. On average, all tracking systems used in this study could deliver similar navigation accuracy. The passive Polaris system showed – as expected – higher maximal errors; however, depending on the application constraints, this might be negligible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J Zwan, B; Central Coast Cancer Centre, Gosford, NSW; Colvill, E
2016-06-15
Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real time: (1) field size, (2) field location and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) in which real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries, the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate, respectively. Field location errors (i.e., tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors as small as 3 mm were detected. The method was not found to be sensitive to random MLC errors or individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection, and the system has been shown to be sensitive to a range of delivery errors.
Software manual for operating particle displacement tracking data acquisition and reduction system
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
The software manual is presented. The necessary steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large-memory-buffer frame-grabber board to record low-velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All of the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
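The core of the displacement-tracking step, pairing particle positions across successive time codes to obtain 2-D velocity vectors, can be sketched as follows. This is a simplified nearest-neighbour stand-in under assumed parameters, not the NASA PDT code itself.

```python
import numpy as np

def track_displacements(pos_t0, pos_t1, dt, max_disp=10.0):
    """Pair each particle at time t0 with its nearest neighbour at t1
    and convert the displacement into a 2-D velocity vector. Pairs
    farther apart than max_disp pixels are rejected as ambiguous."""
    vectors = []
    for p in pos_t0:
        dists = np.linalg.norm(pos_t1 - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_disp:
            vectors.append((p, (pos_t1[j] - p) / dt))
    return vectors

# Two particles moving between consecutive time codes (dt = 0.1 s)
p0 = np.array([[10.0, 10.0], [30.0, 40.0]])
p1 = np.array([[12.0, 11.0], [31.0, 42.0]])
for start, v in track_displacements(p0, p1, dt=0.1):
    print(start, v)   # velocities (20, 10) and (10, 20) px/s
```

In the actual PDT scheme the successive exposures are encoded into one image, but the pairing logic reduces to the same idea: match each particle to its displaced image and divide by the known time separation.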
NASA Astrophysics Data System (ADS)
Szafranek, K.; Jakubiak, B.; Lech, R.; Tomczuk, M.
2012-04-01
PROZA (Operational decision-making based on atmospheric conditions) is a project co-financed by the European Union through the European Regional Development Fund. One of its tasks is to develop an operational forecast system intended to support different branches of the economy, such as forestry or fruit farming, by reducing the risk of economic decisions through consideration of weather conditions. Within this study, a system for predicting sudden convective phenomena (storms or tornadoes) is being built. The authors' main purpose is to predict mesoscale convective systems (MCSs) based on real-time Meteosat Second Generation (MSG) data. Several tests have been performed so far. Meteosat satellite images in selected spectral channels, collected over the Central European region for May and August 2010, were used to detect and track cloud systems related to MCSs. In the proposed tracking method, cloud objects are first defined using a temperature threshold, and the selected cells are then tracked using the principle of overlapping positions in consecutive images. The main benefit of using temperature thresholding to define cells is its simplicity. During the tracking process, the algorithm links each cell in the image at time t to the cell in the following image at time t+dt that corresponds to the same cloud system (the Morel-Senesi algorithm). An automated detection and elimination of some instabilities present in the tracking algorithm was developed. The poster presents an analysis of exemplary MCSs in the context of near-real-time prediction system development.
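The threshold-and-overlap tracking described above can be sketched in a few lines. This is a simplified illustration of the principle with an invented brightness-temperature threshold, not the Morel-Senesi implementation used in the project.

```python
import numpy as np
from scipy import ndimage

def detect_cells(tb, threshold=235.0):
    """Define convective cells as connected regions colder than a
    brightness-temperature threshold (in K); the value is illustrative."""
    labels, n = ndimage.label(tb < threshold)
    return labels, n

def link_cells(labels_t0, labels_t1):
    """Link each cell at time t to the cell at t+dt it overlaps most,
    a simplified form of the overlap criterion described in the text."""
    links = {}
    for cell in range(1, int(labels_t0.max()) + 1):
        overlap = labels_t1[labels_t0 == cell]
        overlap = overlap[overlap > 0]
        if overlap.size:
            links[cell] = int(np.bincount(overlap).argmax())
    return links

# A cold cloud cell drifting one pixel between consecutive images
tb0 = np.full((8, 8), 280.0); tb0[1:4, 1:4] = 220.0
tb1 = np.full((8, 8), 280.0); tb1[2:5, 2:5] = 220.0
labels0, _ = detect_cells(tb0)
labels1, _ = detect_cells(tb1)
print(link_cells(labels0, labels1))  # {1: 1}
```

Because consecutive Meteosat images are only minutes apart, a convective system normally overlaps its previous position, which is what makes this simple criterion workable.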
Multi-object tracking of human spermatozoa
NASA Astrophysics Data System (ADS)
Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen
2008-03-01
We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
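The assignment step described above, solving a linear assignment problem on a rectangular cost matrix with the Hungarian algorithm, can be illustrated with SciPy. The cost values below are invented placeholders for the HMM-derived plausibilities of the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows = existing tracks, columns = detected blobs in the new frame.
# Entries stand in for the HMM-derived costs (negative log-plausibility
# of a blob being the next position of a track); values are invented.
cost = np.array([
    [0.1, 2.0, 3.0],   # track 0
    [2.5, 0.2, 2.8],   # track 1
])
rows, cols = linear_sum_assignment(cost)
# The matrix is rectangular: the unmatched third observation is left
# over, i.e. treated as spurious, which is exactly the situation the
# rectangular formulation handles.
print(list(zip(rows.tolist(), cols.tolist())))  # [(0, 0), (1, 1)]
```

Missing observations are handled symmetrically: with fewer columns than rows, some tracks simply receive no match in that frame.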
Real-time Awake Animal Motion Tracking System for SPECT Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon
Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur in each set of images. Using the three cameras, the system automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in the methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.
Adaptive optics optical coherence tomography with dynamic retinal tracking
Kocaoglu, Omer P.; Ferguson, R. Daniel; Jonnal, Ravi S.; Liu, Zhuolin; Wang, Qiang; Hammer, Daniel X.; Miller, Donald T.
2014-01-01
Adaptive optics optical coherence tomography (AO-OCT) is a highly sensitive and noninvasive method for three-dimensional imaging of the microscopic retina. Like all in vivo retinal imaging techniques, however, it suffers from the effects of involuntary eye movements that occur even under normal fixation. In this study we investigated dynamic retinal tracking to measure and correct eye motion at kHz rates for AO-OCT imaging. A customized retina tracking module was integrated into the sample arm of the 2nd-generation Indiana AO-OCT system and images were acquired on three subjects. Analyses were developed based on temporal amplitude and spatial power spectra in conjunction with strip-wise registration to independently measure AO-OCT tracking performance. After optimization of the tracker parameters, the system was found to correct eye movements up to 100 Hz and reduce residual motion to 10 µm root mean square. Between-session precision was 33 µm. Performance was limited by tracker-generated noise at high temporal frequencies. PMID:25071963
Submarine Combat Systems Engineering Project Capstone Project
2011-06-06
sonar, imaging, Electronic Surveillance (ES) and communications. These sensors passively detect contacts, which emit... A discussion of passive sensors is included. A contact can be sensed by the system as either surface or... [The remainder of this record is extraction residue from a kill-chain figure listing the stages Search, Detect, Identify, Track, Decide, Engage and Assess, with sensor types (SONAR, imagery, EW, ESM, TC) assigned to each stage.]
3D Visual Tracking of an Articulated Robot in Precision Automated Tasks
Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.
2017-01-01
The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. Combining the two in real-world applications is very challenging, however, because more accurate tracking tasks often require longer processing times, while quicker responses make the tracking system more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to satisfy both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed to estimate the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) together with the CHT algorithm to find the proper colour for the tracked target; the target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras, each with an image averaging filter, cooperate to obtain clear and steady images. This paper also examines a new technique, named Controllable Region of interest based on Circular Hough Transform (CRCHT), for generating and controlling the observation search window in order to increase the computational speed of the tracking system. Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix.
The results obtained show the applicability of the proposed approach for tracking the moving robot, with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
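The voting principle behind the Circular Hough Transform used above can be sketched for a single known radius. This toy implementation illustrates only the basic accumulator scheme, not the enhanced CHT or the CRCHT windowing of the paper.

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Circular Hough transform for one known radius: every edge point
    votes for all candidate centres at distance `radius` from it, and
    the accumulator peak is the detected circle centre."""
    acc = np.zeros(shape)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    cy, cx = np.unravel_index(np.argmax(acc), shape)
    return int(cy), int(cx)

# Edge points sampled from a circle of radius 10 centred at (32, 32)
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
pts = [(32 + 10 * np.sin(a), 32 + 10 * np.cos(a)) for a in t]
print(hough_circle(pts, 10, (64, 64)))  # (32, 32)
```

Restricting the voting to a small search window around the predicted target position, as the CRCHT technique does, cuts the cost of this accumulation dramatically.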
Visual perception system and method for a humanoid robot
NASA Technical Reports Server (NTRS)
Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)
2012-01-01
A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
Time course of pH change in plant epidermis using microscopic pH imaging system
NASA Astrophysics Data System (ADS)
Dan, Risako; Shimizu, Megumi; Kazama, Haruko; Sakaue, Hirotaka
2010-11-01
We established a microscopic pH imaging system to track the time course of pH change in plant epidermis in vivo. In previous research, we found that anthocyanin-containing cells have a higher pH. However, it was not clear whether the anthocyanin increased the pH or whether anthocyanin was synthesized as a result of the higher pH. Therefore, we further investigated the relationship between anthocyanin and pH change. To track the time course of pH change in plant epidermis, we established a system using a luminescent imaging technique. We used HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as the pH indicator and applied an excitation-ratio imaging method. The luminescent image was converted to a pH distribution using an in vitro calibration obtained with solutions of known pH. Cellular-level observation was enabled by merging a microscopic color picture of the same region with the pH change image. The established system was applied to epidermal cells of red-tip leaf lettuce, Lactuca sativa L., and the time course was tracked during the growth process. We discuss the relationship between anthocyanin and pH change in plant epidermis.
Miyamoto, Naoki; Ishikawa, Masayori; Sutherland, Kenneth; Suzuki, Ryusuke; Matsuura, Taeko; Toramatsu, Chie; Takao, Seishin; Nihongi, Hideaki; Shimizu, Shinichi; Umegaki, Kikuo; Shirato, Hiroki
2015-01-01
In the real-time tumor-tracking radiotherapy system, a surrogate fiducial marker inserted in or near the tumor is detected by fluoroscopy to realize respiratory-gated radiotherapy. The imaging dose caused by fluoroscopy should be minimized. In this work, an image processing technique is proposed for tracing a moving marker in low-dose imaging. The proposed tracking technique is a combination of a motion-compensated recursive filter and template pattern matching. The proposed image filter can reduce motion artifacts resulting from the recursive process, based on determination of the region of interest for the next frame according to the current marker position in the fluoroscopic images. The effectiveness of the proposed technique and the expected clinical benefit were examined in phantom experimental studies with actual tumor trajectories generated from clinical patient data. It was demonstrated that the marker motion could be traced in low-dose imaging by applying the proposed algorithm, with acceptable registration error and a high pattern recognition score in all trajectories, although some trajectories could not be tracked with conventional spatial filters or without image filters. The positional accuracy is expected to be kept within ±2 mm. The total computation time required to determine the marker position is a few milliseconds. The proposed image processing technique is applicable for imaging dose reduction. PMID:25129556
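A minimal sketch of a motion-compensated recursive filter of this kind is given below, assuming a simple exponential blend, integer pixel shifts, and an ROI centred on the current marker position. The blend weight and helper names are illustrative, not the authors' parameters.

```python
import numpy as np

def recursive_filter(prev_avg, frame, marker_shift, alpha=0.5):
    """Blend the new fluoroscopic frame into a running average to
    suppress quantum noise. The running average is first shifted by
    the marker displacement estimated from the previous tracking
    result, so recursive averaging does not smear the moving marker.
    alpha and the integer-pixel shift are simplifying assumptions."""
    compensated = np.roll(prev_avg, marker_shift, axis=(0, 1))
    return alpha * frame + (1.0 - alpha) * compensated

def next_roi(center, size, shape):
    """Region of interest for the next frame, centred on the current
    marker position as the text describes."""
    (y, x), (h, w) = center, size
    y0 = int(np.clip(y - h // 2, 0, shape[0] - h))
    x0 = int(np.clip(x - w // 2, 0, shape[1] - w))
    return slice(y0, y0 + h), slice(x0, x0 + w)
```

Template pattern matching would then be run only inside `next_roi`, which both speeds up the search and keeps the recursive average aligned with the marker.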
TRIAC II. A MatLab code for track measurements from SSNT detectors
NASA Astrophysics Data System (ADS)
Patiris, D. L.; Blekas, K.; Ioannides, K. G.
2007-08-01
A computer program named TRIAC II, written in MATLAB and running with a friendly GUI, has been developed for the recognition and measurement of particle track parameters from images of Solid State Nuclear Track Detectors. Using image analysis tools, the program counts the number of tracks and, depending on the current working mode, classifies them according to their radii (Mode I, circular tracks) or their axes (Mode II, elliptical tracks), their mean intensity value (brightness) and their orientation. Images of the detectors' surfaces are input to the code, which generates text files as output, including the number of counted tracks with the associated track parameters. Hough transform techniques are used for the estimation of the number of tracks and their parameters, providing results even in cases of overlapping tracks. Finally, it is possible for the user to obtain informative histograms as well as output files for each image and/or group of images. Program summary: Title of program: TRIAC II. Catalogue identifier: ADZC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZC_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: Pentium III, 600 MHz. Installations: MATLAB 7.0. Operating system under which the program has been tested: Windows XP. Programming language used: MATLAB. Memory required to execute with typical data: 256 MB. No. of bits in a word: 32. No. of processors used: one. Has the code been vectorized or parallelized?: no. No. of lines in distributed program, including test data, etc.: 25,964. No. of bytes in distributed program, including test data, etc.: 4,354,510. Distribution format: tar.gz. Additional comments: This program requires the MATLAB Statistics Toolbox and the Image Processing Toolbox to be installed. Nature of physical problem: Following the passage of a charged particle (protons and heavier) through a Solid State Nuclear Track Detector (SSNTD), a damage region is created, usually named a latent track.
After the chemical etching of the detectors in aqueous NaOH or KOH solutions, latent tracks can be sufficiently enlarged (with diameters of 1 μm or more) to become visible under an optical microscope. Using the appropriate apparatus, one can record images of the SSNTD's surface. The shapes of the particle's tracks are strongly dependent on their charge, energy and the angle of incidence. Generally, they have elliptical shapes and in the special case of vertical incidence, they are circular. The manual counting of tracks is a tedious and time-consuming task. An automatic system is needed to speed up the process and to increase the accuracy of the results. Method of solution: TRIAC II is based on a segmentation method that groups image pixels according to their intensity value (brightness) in a number of grey level groups. After the segmentation of pixels, the program recognizes and separates the track from the background, subsequently performing image morphology, where oversized objects or objects smaller than a threshold value are removed. Finally, using the appropriate Hough transform technique, the program counts the tracks, even those which overlap and classifies them according to their shape parameters and brightness. Typical running time: The analysis of an image with a PC (Intel Pentium III processor running at 600 MHz) requires 2 to 10 minutes, depending on the number of observed tracks and the digital resolution of the image. Unusual features of the program: This program has been tested with images of CR-39 detectors exposed to alpha particles. Also, in low contrast images with few or small tracks, background pixels can be recognized as track pixels. To avoid this problem the brightness of the background pixels should be sufficiently higher than that of the track pixels.
A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots
Pan, Shaowu; Shi, Liwei; Guo, Shuxiang
2015-01-01
A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
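The second-order motion model used for state prediction in the system above can be sketched as a standard Kalman filter with a constant-acceleration state. One spatial dimension is shown for brevity; the frame rate and noise covariances are assumed values, not those of the robot system.

```python
import numpy as np

class Kalman2ndOrder:
    """Kalman filter with a second-order (constant-acceleration) motion
    model, of the kind used to predict the target state and place
    candidate patches for the CT tracker."""

    def __init__(self, dt=1.0 / 30.0):
        # State [position, velocity, acceleration]
        self.F = np.array([[1.0, dt, 0.5 * dt ** 2],
                           [0.0, 1.0, dt],
                           [0.0, 0.0, 1.0]])
        self.H = np.array([[1.0, 0.0, 0.0]])   # observe position only
        self.Q = np.eye(3) * 1e-4              # process noise (assumed)
        self.R = np.array([[1e-2]])            # measurement noise (assumed)
        self.x = np.zeros((3, 1))
        self.P = np.eye(3)

    def step(self, z):
        # Predict: this prior is where candidate samples would be drawn
        x_pred = self.F @ self.x
        P_pred = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured target position z
        y = np.array([[z]]) - self.H @ x_pred
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ y
        self.P = (np.eye(3) - K @ self.H) @ P_pred
        return float(self.x[0, 0])
```

Feeding the filter the tracker's measured positions each frame yields a smoothed state estimate whose prediction step tells the CT tracker where to sample next.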
Miss-distance indicator for tank main gun systems
NASA Astrophysics Data System (ADS)
Bornstein, Jonathan A.; Hillis, David B.
1994-07-01
The initial development of a passive, automated system to track bullet trajectories near a target to determine the "miss distance," and the corresponding correction necessary to bring the following round "on target," is discussed. The system consists of a visible-wavelength CCD sensor, long focal length optics, and a separate IR sensor to detect the muzzle flash of the firing event; this is coupled to a PC-based image processing and automatic tracking system designed to follow the projectile trajectory by intelligently comparing frame-to-frame variation of the projectile tracer image. An error analysis indicates that the device is particularly sensitive to variation in the projectile's time of flight to the target, and requires the development of algorithms to estimate this value from the 2D images employed by the sensor to monitor the projectile trajectory. Initial results obtained by using a brassboard prototype to track training ammunition are promising.
Delcourt, Johann; Becco, Christophe; Vandewalle, Nicolas; Poncin, Pascal
2009-02-01
The capability of a new multitracking system to track a large number of unmarked fish (up to 100) is evaluated. This system extrapolates a trajectory from each individual and analyzes recorded sequences that are several minutes long. This system is very efficient in statistical individual tracking, where the individual's identity is important for a short period of time in comparison with the duration of the track. Individual identification is typically greater than 99%. Identification is largely efficient (more than 99%) when the fish images do not cross the image of a neighbor fish. When the images of two fish merge (occlusion), we consider that the spot on the screen has a double identity. Consequently, there are no identification errors during occlusions, even though the measurement of the positions of each individual is imprecise. When the images of these two merged fish separate (separation), individual identification errors are more frequent, but their effect is very low in statistical individual tracking. On the other hand, in complete individual tracking, where individual fish identity is important for the entire trajectory, each identification error invalidates the results. In such cases, the experimenter must observe whether the program assigns the correct identification, and, when an error is made, must edit the results. This work is not too costly in time because it is limited to the separation events, accounting for fewer than 0.1% of individual identifications. Consequently, in both statistical and rigorous individual tracking, this system allows the experimenter to gain time by measuring the individual position automatically. It can also analyze the structural and dynamic properties of an animal group with a very large sample, with precision and sampling that are impossible to obtain with manual measures.
Miyamoto, N; Ishikawa, M; Sutherland, K; Suzuki, R; Matsuura, T; Takao, S; Toramatsu, C; Nihongi, H; Shimizu, S; Onimaru, R; Umegaki, K; Shirato, H
2012-06-01
In the real-time tumor-tracking radiotherapy system, fiducial markers are detected by X-ray fluoroscopy. The fluoroscopic parameters should be set as low as possible in order to reduce unnecessary imaging dose. However, the fiducial markers may not be recognized due to the effect of statistical noise in low-dose imaging. Image processing is envisioned as a solution to improve image quality and maintain tracking accuracy. In this study, a recursive image filter adapted to target motion is proposed. A fluoroscopy system was used for the experiment, with a spherical gold marker as the fiducial marker. About 450 fluoroscopic images of the marker were recorded. In order to mimic respiratory motion of the marker, the images were shifted sequentially. The tube voltage, current and exposure duration were fixed at 65 kV, 50 mA and 2.5 msec as the low-dose imaging condition, respectively. The tube current was 100 mA for high-dose imaging. A pattern recognition score (PRS) ranging from 0 to 100 and the image registration error were investigated by performing template pattern matching on each sequential image. The results with and without image processing were compared. In low-dose imaging, the image registration error and the PRS without image processing were 2.15±1.21 pixels and 46.67±6.40, respectively. With image processing, they were 1.48±0.82 pixels and 67.80±4.51, respectively. There was no significant difference in the image registration error or the PRS between the results of low-dose imaging with image processing and those of high-dose imaging without image processing. The results showed that the recursive filter is effective for maintaining marker tracking stability and accuracy in low-dose fluoroscopy. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Wu, Hai-ying; Zhang, San-xi; Liu, Biao; Yue, Peng; Weng, Ying-hui
2018-02-01
The photoelectric theodolite is an important means of tracking, detection, quantitative measurement and performance evaluation of weapon systems on an ordnance test range. With increasing stability requirements for target tracking in complex environments, infrared scene simulation with a high sense of realism and complex interference has become an indispensable way to evaluate the tracking performance of the photoelectric theodolite. The tail flame is the most important infrared radiation source of a weapon system, and a highly realistic dynamic tail flame is a key element of photoelectric theodolite infrared scene simulation and imaging tracking tests. In this paper, an infrared simulation method for full-path tracking of the tail flame by the photoelectric theodolite is proposed, addressing the flame's faint boundary and irregular shape. In this work, real tail-flame images are employed, and infrared texture conversion technology is used to generate a DDS texture for a particle-system map. Thus, highly realistic, dynamic, real-time tail-flame simulation results from the theodolite perspective can be obtained during the tracking process.
An optical tracking system for virtual reality
NASA Astrophysics Data System (ADS)
Hrimech, Hamid; Merienne, Frederic
2009-03-01
In this paper we present a low-cost 3D tracking system which we have developed and tested in order to move away from traditional 2D interaction techniques (keyboard and mouse) in an attempt to improve the user's experience while using a collaborative virtual environment (CVE). The tracking system is used to implement 3D interaction techniques that augment the user experience, promote the user's sense of transportation into the virtual world, and increase users' awareness of their partners. It is a passive optical tracking system using stereoscopy, a technique allowing the reconstruction of three-dimensional information from a pair of images. We have currently deployed our 3D tracking system on a collaborative research platform for investigating 3D interaction techniques in CVEs.
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small and cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data are also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications.
In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support localization of anatomical targets, placement of imaging probes and instruments, and fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, and urology, many of which see increasing pressure to utilize medical imaging, especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Catheter tracking in an interventional photoacoustic surgical system
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Itsarachaiyot, Yuttana; Kim, Younsu; Zhang, Haichong K.; Taylor, Russell H.; Boctor, Emad M.
2017-03-01
In laparoscopic medical procedures, accurate tracking of interventional tools such as catheters is necessary. Current practice for tracking catheters often involves fluoroscopy, which is best avoided to minimize the radiation dose to the patient and the surgical team. Photoacoustic imaging is an emerging modality that can be used for this purpose but does not currently have a general tool-tracking solution; photoacoustic-based catheter tracking would increase its attractiveness by providing both an imaging and a tracking solution. We present a catheter tracking method based on the photoacoustic effect. Photoacoustic markers are simultaneously observed by a stereo camera and by a piezoelectric element attached to the tip of a catheter. The signals received by the piezoelectric element can be used to compute its position relative to the photoacoustic markers using multilateration, and this combined information can be processed to localize the piezoelectric element with respect to the stereo camera system. We present the methods that enable this work and demonstrate precisions of 1-3 mm and a relative accuracy of less than 4% in four independent locations, which are comparable to conventional systems. In another experiment, we also show a reconstruction precision of up to 0.4 mm and an estimated accuracy of up to 0.5 mm. Future work will include simulations to better evaluate this method and its challenges, and the development of concurrent photoacoustic marker projection and its associated methods.
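Multilateration from the photoacoustic markers can be sketched as a linearized least-squares problem: subtracting the first range equation from the others eliminates the quadratic term in the unknown position. The helper below is a generic sketch; the function name and the use of NumPy's least-squares solver are assumptions, not the authors' code:

```python
import numpy as np

def multilaterate(markers, distances):
    """Estimate a 3-D point from its distances to >= 4 known marker positions.

    markers:   (n, 3) array of photoacoustic marker coordinates (camera frame).
    distances: (n,) array of ranges, e.g. speed of sound x time of flight.

    From ||x - p_i||^2 = d_i^2, subtracting the i = 0 equation gives the
    linear system 2 (p_i - p_0) . x = (|p_i|^2 - |p_0|^2) - (d_i^2 - d_0^2),
    solved here in the least-squares sense.
    """
    p = np.asarray(markers, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With noise-free ranges and non-coplanar markers this recovers the element position exactly; real acoustic time-of-flight measurements make the least-squares formulation the practical choice.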
Mobile Aerial Tracking and Imaging System (MATrIS) for Aeronautical Research
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Blanchard, Robert C.; Miller, Geoffrey M.
2004-01-01
A mobile, rapidly deployable ground-based system to track and image targets of aeronautical interest has been developed. Targets include reentering reusable launch vehicles as well as atmospheric and transatmospheric vehicles. The optics were designed to image targets in the visible and infrared wavelengths. To minimize acquisition cost and development time, the system uses commercially available hardware and software where possible. The conception and initial funding of this system originated with a study of ground-based imaging of global aerothermal characteristics of reusable launch vehicle configurations. During that study the National Aeronautics and Space Administration teamed with the Missile Defense Agency/Innovative Science and Technology Experimentation Facility to test techniques and analysis on two Space Shuttle flights.
Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.
Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D
2017-10-01
This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier based on a shallow convolutional neural network architecture, and we demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general-purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.
NASA Astrophysics Data System (ADS)
Gao, Xiangdong; Chen, Yuquan; You, Deyong; Xiao, Zhenlin; Chen, Xiaohui
2017-02-01
An approach for seam tracking of micro-gap welds, whose width is less than 0.1 mm, based on magneto-optical (MO) imaging during butt-joint laser welding of steel plates is investigated. Kalman filtering (KF) combined with a radial basis function (RBF) neural network was applied to weld detection by an MO sensor to track the weld center position. Because the process noises of the laser welding system and the measurement noises of the MO sensor were colored, the estimation accuracy of traditional KF for seam tracking was degraded: the system model has strong nonlinearities that cannot be captured by a linear state-space model, and the noise statistics could not be accurately obtained during actual welding. Thus, an RBF neural network was added to the KF to compensate for the weld tracking errors; the network restrains filter divergence and improves system robustness. Compared with the traditional KF algorithm, the RBF-assisted KF not only improved weld tracking accuracy more effectively but also reduced noise disturbance. Experimental results showed that magneto-optical imaging can detect micro-gap welds accurately, providing a novel approach to micro-gap seam tracking.
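The KF core of such a seam tracker can be sketched as a constant-velocity filter on the weld-center position; the RBF network that the paper adds to compensate for colored noise is omitted here. All names and noise parameters below are illustrative assumptions:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.04):
    """Constant-velocity Kalman filter for a weld-center position sequence.

    State x = [position, velocity]; only the position is measured. The paper
    augments such a filter with an RBF neural network that learns the residual
    caused by colored noise; only the linear KF core is sketched here.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # measurement model
    Q = q * np.eye(2)                          # process noise covariance
    R = np.array([[r]])                        # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    track = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)
```

On a weld center drifting at constant speed, the filtered positions are markedly closer to the true seam than the raw noisy detections once the filter has converged.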
Design and implementation of a PC-based image-guided surgical system.
Stefansic, James D; Bass, W Andrew; Hartmann, Steven L; Beasley, Ryan A; Sinha, Tuhin K; Cash, David M; Herline, Alan J; Galloway, Robert L
2002-11-01
In interactive, image-guided surgery, current physical space position in the operating room is displayed on various sets of medical images used for surgical navigation. We have developed a PC-based surgical guidance system (ORION) which synchronously displays surgical position on up to four image sets and updates them in real time. There are three essential components which must be developed for this system: (1) accurately tracked instruments; (2) accurate registration techniques to map physical space to image space; and (3) methods to display and update the image sets on a computer monitor. For each of these components, we have developed a set of dynamic link libraries in MS Visual C++ 6.0 supporting various hardware tools and software techniques. Surgical instruments are tracked in physical space using an active optical tracking system. Several of the different registration algorithms were developed with a library of robust math kernel functions, and the accuracy of all registration techniques was thoroughly investigated. Our display was developed using the Win32 API for windows management and tomographic visualization, a frame grabber for live video capture, and OpenGL for visualization of surface renderings. We have begun to use this current implementation of our system for several surgical procedures, including open and minimally invasive liver surgery.
Neutron Radiography of Fluid Flow for Geothermal Energy Research
NASA Astrophysics Data System (ADS)
Bingham, P.; Polsky, Y.; Anovitz, L.; Carmichael, J.; Bilheux, H.; Jacobsen, D.; Hussey, D.
Enhanced geothermal systems seek to expand the potential for geothermal energy by engineering heat exchange systems within the earth. A neutron radiography imaging method has been developed for the study of fluid flow through rock under environmental conditions found in enhanced geothermal energy systems. For this method, a pressure vessel suitable for neutron radiography was designed and fabricated, modifications to imaging instrument setups and multiple contrast agents were tested, and algorithms were developed for tracking of flow. The method has shown success in tracking single-phase flow through a manufactured crack in a 3.81 cm (1.5 inch) diameter core within a pressure vessel capable of confinement up to 69 MPa (10,000 psi), using a particle tracking approach with bubbles of a fluorocarbon-based fluid as the "particles" and imaging with 10 ms exposures.
Tracker: Image-Processing and Object-Tracking System Developed
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Theodore W.
1999-01-01
Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, thus every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR).
This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their offices for desktop analysis of digital image sequences, which can be digitized by the Tracking System or by some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
Application of 3-D imaging sensor for tracking minipigs in the open field test.
Kulikov, Victor A; Khotskin, Nikita V; Nikitin, Sergey V; Lankin, Vasily S; Kulikov, Alexander V; Trapezov, Oleg V
2014-09-30
The minipig is a promising model in neurobiology and psychopharmacology. However, automated tracking of minipig behavior is still an unresolved problem. The study was carried out on white, agouti, and black (or spotted) minipiglets (n=108) bred in the Institute of Cytology and Genetics. A new method of automated tracking of minipig behavior is based on the Microsoft Kinect 3-D image sensor and 3-D image reconstruction with the EthoStudio software. The algorithms for evaluating distance run and time in the center were adapted to 3-D image data, and a new algorithm for quantifying vertical activity was developed. The 3-D imaging system successfully detects white, black, spotted, and agouti pigs in the open field test (OFT). No effect of sex or color on horizontal activity (distance run), vertical activity, or time in the center was shown. Agouti pigs explored the arena more intensively than white or black animals. The OFT behavioral traits were compared with the fear reaction to the experimenter. Time in the center of the OFT was positively correlated with fear reaction rank (ρ=0.21, p<0.05). Black pigs were significantly more fearful than white or agouti animals. The 3-D imaging system has three advantages over existing automated tracking systems: it avoids perspective distortion, distinguishes animals of any color against any background, and automatically evaluates vertical activity. The 3-D imaging system can be successfully applied to automated measurement of minipig behavior in neurobiological and psychopharmacological experiments. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Aalamifar, Fereshteh; Boctor, Emad M.
2016-04-01
Synthetic aperture ultrasound is a technique utilizing a wide aperture in both transmit and receive to enhance image quality. Its limitation is the maximum available aperture size, determined by the physical size of the ultrasound probe. We propose the Synthetic-Tracked Aperture Ultrasound (STRATUS) imaging system to overcome this limitation by extending the beamforming aperture through ultrasound probe tracking. In a setup involving a robotic arm, the ultrasound probe is moved by the arm while its positions on a scanning trajectory are tracked in real time, and data from each pose are synthesized to construct a high-resolution image. In previous studies, we demonstrated feasibility through phantom experiments. However, additional factors such as real-time data collection and motion artifacts must be taken into account when moving to in vivo subjects. In this work, we build a robot-based STRATUS imaging system with continuous data-collection capability, with the practical implementation in mind. A curvilinear array is used instead of a linear array to benefit from its wider capture angle. We scanned human forearms under two scenarios: in one, the arm was submerged 10 cm deep in a water tank; in the other, the arm was scanned directly from the surface. The image contrast improved by 5.51 dB and 9.96 dB for the underwater scan and the direct scan, respectively. The results indicate the practical feasibility of the STRATUS imaging system, and the technique can potentially be applied to a wide range of the human body.
Conformal needle-based ultrasound ablation using EM-tracked conebeam CT image guidance
NASA Astrophysics Data System (ADS)
Burdette, E. Clif; Banovac, Filip; Diederich, Chris J.; Cheng, Patrick; Wilson, Emmanuel; Cleary, Kevin R.
2011-03-01
Numerous studies have demonstrated the efficacy of interstitial ablative approaches for the treatment of renal and hepatic tumors. Despite these promising results, current systems remain highly dependent on operator skill and cannot treat many tumors, because there is little control of the size and shape of the zone of necrosis and no control over the ablator trajectory within tissue once insertion has taken place. Additionally, tissue deformation and target motion make it extremely difficult to accurately place the ablator device into the target, and irregularly shaped target volumes typically require multiple insertions and several sequential thermal ablation procedures. This study demonstrated the feasibility of spatially tracked, image-guided conformal ultrasound (US) ablation for percutaneous directional ablation of diseased tissue. Tissue was prepared by suturing a liver within a pig belly, and 1 mm BBs were placed to serve as needle targets. The image-guided system integrated electromagnetic tracking and cone-beam CT (CBCT) with conformable needle-based high-intensity US ablation in the interventional suite. Tomographic images from cone-beam CT were transferred electronically to the image-guided tracking system (IGSTK). Paired-point registration was used to register the target specimen to the CT images and enable navigation. Path planning was done by selecting the target BB on the GUI of the real-time tracking system and adjusting the skin entry location until an optimal path was found. Power was applied to create the desired ablation extent within 7-10 minutes at a thermal dose of >300 equivalent minutes at 43 °C. The system was successfully used to place the US ablator in planned target locations within ex-vivo kidney and liver through percutaneous access. Targeting accuracy was 3-4 mm. Sectioned specimens demonstrated uniform ablation within the planned target zone. Subsequent experiments were conducted for multiple ablator positions based upon treatment planning simulations.
Ablation zones in liver were 73 cc, 84 cc, and 140 cc for 3, 4, and 5 placements, respectively. These experiments demonstrate the feasibility of combining real-time spatially tracked image guidance with directional interstitial ultrasound ablation. Interstitial ultrasound ablation delivered on multiple needles permits the size and shape of the ablation zone to be "sculpted" by modifying the angle and intensity of the active US elements in the array. This paper summarizes the design and development of the first system incorporating thermal treatment planning and the integration of a novel interstitial acoustic ablation device with an integrated 3D electromagnetic tracking and guidance strategy.
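Assuming the quoted dose threshold refers to the standard cumulative-equivalent-minutes-at-43 °C (CEM43, Sapareto-Dewey) thermal dose, it can be computed from a sampled temperature history as follows; the function name and sampling interval are illustrative:

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 degrees C (Sapareto-Dewey thermal dose).

    temps_c: sequence of tissue temperatures in degrees C, one per time step.
    dt_min:  duration of each time step in minutes.
    The conventional rate constant is R = 0.5 above the 43 degree breakpoint
    and R = 0.25 below it.
    """
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        dose += dt_min * r ** (43.0 - t)  # each minute at T counts as R^(43-T) minutes at 43
    return dose
```

For example, one minute at 44 °C contributes two equivalent minutes, while one minute at 42 °C contributes only a quarter of a minute, which is why a >300 CEM43 endpoint is reached within minutes at ablative temperatures.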
A Freehand Ultrasound Elastography System with Tracking for In-vivo Applications
Foroughi, Pezhman; Kang, Hyun-Jae; Carnegie, Daniel A.; van Vledder, Mark G.; Choti, Michael A.; Hager, Gregory D.; Boctor, Emad M.
2012-01-01
Ultrasound transducers are commonly tracked in modern ultrasound navigation/guidance systems. In this paper, we demonstrate the advantages of incorporating tracking information into ultrasound elastography for clinical applications. First, we address a common limitation of freehand palpation: speckle decorrelation due to out-of-plane probe motion. We show that automatically selecting pairs of radio-frequency (RF) frames with minimal lateral and out-of-plane motion, combined with a fast and robust displacement estimation technique, greatly improves in-vivo elastography results. We also use tracking information and an image quality measure to fuse multiple images with similar strain, taken roughly from the same location, into a high-quality elastography image. Finally, we show that tracking information can be used to give the user partial control over the rate of compression. Our methods were tested on a tissue-mimicking phantom, and experiments were conducted on intra-operative data acquired during animal and human experiments involving liver ablation. Our results suggest that in challenging clinical conditions, our proposed method produces reliable strain images and eliminates the need for a manual search through the ultrasound data to find RF pairs suitable for elastography. PMID:23257351
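The tracked-pose-based RF frame pairing can be sketched as a simple geometric filter: accept a pair only when the axial compression lies in a useful range while lateral and elevational motion stays small. The coordinate convention, function name, and thresholds below are illustrative assumptions, not the authors' actual criteria:

```python
import numpy as np

def select_rf_pairs(positions, axial_range=(0.5, 2.0), lateral_tol=0.3):
    """Pick RF-frame pairs suitable for strain estimation from tracked
    transducer positions (one (x, y, z) per frame, in mm; z = axial).

    A pair qualifies when the axial compression falls inside axial_range
    while lateral/elevational motion (which causes speckle decorrelation)
    stays below lateral_tol.
    """
    pos = np.asarray(positions, dtype=float)
    pairs = []
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            delta = pos[j] - pos[i]
            axial = abs(delta[2])
            lateral = np.hypot(delta[0], delta[1])  # in-plane-lateral + elevational
            if axial_range[0] <= axial <= axial_range[1] and lateral < lateral_tol:
                pairs.append((i, j))
    return pairs
```

In a real system the same idea would be applied to full 6-DOF poses and combined with an image-based quality measure, as the paper describes.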
WE-G-213CD-03: A Dual Complementary Verification Method for Dynamic Tumor Tracking on Vero SBRT.
Poels, K; Depuydt, T; Verellen, D; De Ridder, M
2012-06-01
To use complementary cine EPID and gimbals log-file analysis for in-vivo tracking accuracy monitoring. A clinical prototype of dynamic tracking (DT) was installed on the Vero SBRT system. This prototype allowed tumor tracking by gimballed linac rotations using an internal-external correspondence model, and its software logged in detail all gimbals rotations applied during tracking. The integration of an EPID on the Vero system allowed the acquisition of cine EPID images during DT. We quantified the tracking error on cine EPID (E-EPID) by subtracting the field centroid from the target center (fiducial marker detection). Dynamic gimbals log-file information was combined with orthogonal x-ray verification images to calculate the in-vivo tracking error (E-kVLog), and the correlation between E-kVLog and E-EPID was calculated to validate the gimbals log file. Further, we investigated the sensitivity of the log-file tracking error by introducing predefined systematic tracking errors. As an application, we calculated the gimbals log-file tracking error for dynamic hidden-target tests to investigate gravity effects and the decoupling of gimbals rotation from gantry rotation. Finally, the clinical accuracy of dynamic tracking was evaluated by calculating complementary cine EPID and log-file tracking errors. A strong correlation was found between the log-file and cine EPID tracking error distributions during concurrent measurements (R=0.98). The gimbals log files were sensitive enough to detect a systematic tracking error as small as 0.5 mm. Dynamic hidden-target tests showed no gravity influence on tracking performance and a high degree of decoupling between gimbals and gantry rotation during dynamic-arc dynamic tracking. Submillimetric agreement between the clinical complementary tracking error measurements was found.
Redundant monitoring of gimballed tumor tracking accuracy on the Vero SBRT system was implemented by combining the internal gimbals log file and x-ray verification images with complementary, independent cine EPID images. Research was financially supported by the Flemish government (FWO), the Hercules Foundation, and BrainLAB AG. © 2012 American Association of Physicists in Medicine.
A novel optical investigation technique for railroad track inspection and assessment
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Beale, Christopher H.; Niezrecki, Christopher
2017-04-01
Track failures due to crosstie degradation or loss of ballast support may result in problems ranging from simple service interruptions to derailments. Structural Health Monitoring (SHM) of railway track is important for safety reasons and to reduce downtime and maintenance costs; however, current track inspection technologies are insufficient, and novel, cost-effective methods for assessing track health are needed. Advancements achieved in recent years in camera technology, optical sensors, and image-processing algorithms have made machine vision, Structure from Motion (SfM), and three-dimensional (3D) Digital Image Correlation (DIC) extremely appealing techniques for extracting structural deformations and geometry profiles. Such optically based, non-contact measurement techniques may be used for assessing surface defects, rail and tie deflection profiles, and ballast condition. In this study, the design of two camera-based measurement systems is proposed for crosstie-ballast condition assessment and track examination. The first consists of four pairs of cameras installed on the underside of a rail car to detect the induced deformation and displacement along the whole length of a track crosstie using 3D DIC measurement techniques. The second consists of another set of cameras using SfM techniques to obtain a 3D rendering of the infrastructure from a series of two-dimensional (2D) images, in order to evaluate the state of the track qualitatively. The feasibility of the proposed optical systems is evaluated through extensive laboratory tests, demonstrating their ability to measure parameters of interest (e.g., a crosstie's full-field displacement, vertical deflection, shape, etc.) for assessment and SHM of railroad track.
Rotational symmetric HMD with eye-tracking capability
NASA Astrophysics Data System (ADS)
Liu, Fangfang; Cheng, Dewen; Wang, Qiwei; Wang, Yongtian
2016-10-01
As an important auxiliary function of head-mounted displays (HMDs), eye tracking has an important role in the field of intelligent human-machine interaction. In this paper, an eye-tracking HMD system (ET-HMD) is designed based on a rotationally symmetric system. The tracking principle is pupil-corneal reflection. The ET-HMD system comprises three optical paths: virtual display, infrared illumination, and eye tracking. The display optics are shared by the three optical paths and consist of four spherical lenses; for the eye-tracking path, an extra imaging lens is added to match the image sensor and achieve eye tracking. The display optics provide users a 40° diagonal FOV with a 0.61″ OLED, a 19 mm eye clearance, and a 10 mm exit pupil diameter. The eye-tracking path can capture a 15 mm × 15 mm area of the user's eye. The average MTF is above 0.1 at 26 lp/mm for the display path and exceeds 0.2 at 46 lp/mm for the eye-tracking path. Eye illumination is simulated using LightTools with an eye model and an 850 nm near-infrared LED (NIR-LED). The simulation results show that the NIR-LED illumination can cover the area of the eye model through the display optics, which is sufficient for eye tracking. HMDs with an integrated eye-tracking feature can help improve the user experience.
NASA Astrophysics Data System (ADS)
Griffiths, D.; Boehm, J.
2018-05-01
With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapidly generated Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3 m2 segments of railway track. These include two models based on the Faster R-CNN object detection algorithm (Resnet and Inception-Resnet) as well as the novel one-stage Focal Loss network architecture (Retinanet). Model performance was assessed with respect to three accuracy metrics: the first two consist of Intersection over Union (IoU) with thresholds of 0.5 and 0.1, and the third assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling) the models detected 91.3 %, 83.1 % and 75.6 % of track in the 500 test images acquired from the UAS survey for Retinanet, Resnet and Inception-Resnet, respectively. We then discuss the potential applications of such systems within the engineering field for a range of scenarios.
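The IoU accuracy metric used above is standard in object detection; for axis-aligned boxes it can be computed as follows (the function name and (x1, y1, x2, y2) box convention are illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).

    A detection counts as correct when its IoU with the ground-truth box
    exceeds a threshold (0.5 strict, 0.1 lenient in the evaluation above).
    """
    # intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    # union = sum of areas minus intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit-offset 2x2 boxes overlap in a 1x1 square, giving an IoU of 1/7, below the strict 0.5 threshold but above the lenient 0.1 one.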
NASA Astrophysics Data System (ADS)
Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.
2005-08-01
Head-mounted display (HMD) technologies find a variety of applications in the fields of 3D virtual and augmented environments, 3D scientific visualization, and wearable displays. While most current HMDs use head pose to approximate the line of sight, we propose to investigate approaches and designs for integrating eye-tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, a critical component in designing an eye-tracking HMD (ET-HMD) integrated system. An optimal design can improve not only eye-tracking accuracy but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near-infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark-pupil effects along with multiple first-order Purkinje images is also presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.
A Fast MEANSHIFT Algorithm-Based Target Tracking System
Sun, Jian
2012-01-01
Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper: a fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. Physical simulation shows that the image signal processing speed exceeds 50 frames/s. PMID:22969397
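The core mean-shift update behind such a tracker can be sketched in pure Python (a minimal illustration over a weight map such as a histogram back-projection; the window parametrization and stopping tolerance are assumptions, not the paper's implementation):

```python
def mean_shift(weights, window, max_iter=20, eps=0.5):
    """One mean-shift track update: shift a (cx, cy, half_w, half_h) window
    toward the centroid of the weight map until the shift falls below eps."""
    cx, cy, hw, hh = window
    h, w = len(weights), len(weights[0])
    for _ in range(max_iter):
        sx = sy = sw = 0.0
        for y in range(max(0, int(cy - hh)), min(h, int(cy + hh) + 1)):
            for x in range(max(0, int(cx - hw)), min(w, int(cx + hw) + 1)):
                wt = weights[y][x]
                sx += wt * x; sy += wt * y; sw += wt
        if sw == 0:          # no target weight in the window; give up
            break
        nx, ny = sx / sw, sy / sw
        if abs(nx - cx) < eps and abs(ny - cy) < eps:
            cx, cy = nx, ny  # converged
            break
        cx, cy = nx, ny
    return cx, cy
```

Starting a window near a bright blob, the loop walks the window center onto the blob's centroid in a few iterations; that cheap per-frame update is what gives mean-shift its speed.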
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, and aircraft in flight. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
NASA Technical Reports Server (NTRS)
Winston, R.; Welford, W. T.
1980-01-01
The paper discusses the paraboloidal mirror as a tracking solar concentrator, fitting a nonimaging second stage to the paraboloidal mirror, other image-forming systems as first stages, and tracking systems in two-dimensional geometry. Because of inherent aberrations, the paraboloidal mirror cannot achieve the thermodynamic limit. It is shown how paraboloidal mirrors of short focal ratio and similar systems can have their flux concentration enhanced to near the thermodynamic limit by the addition of nonimaging compound elliptical concentrators.
Store-and-feedforward adaptive gaming system for hand-finger motion tracking in telerehabilitation.
Lockery, Daniel; Peters, James F; Ramanna, Sheela; Shay, Barbara L; Szturm, Tony
2011-05-01
This paper presents a telerehabilitation system that encompasses a webcam and a store-and-feedforward adaptive gaming system for tracking finger-hand movement of patients during local and remote therapy sessions. Gaming-event signals and webcam images are recorded as part of a gaming session and then forwarded to an online healthcare content management system (CMS) that separates incoming information into individual patient records. The CMS makes it possible for clinicians to log in remotely and review the gathered data using online reports that help with signal and image analysis through various numerical measures and plotting functions. Signals from a 6-degree-of-freedom magnetic motion tracking (MMT) system provide a basis for video-game sprite control. The MMT system provides a path for motion signals between common objects manipulated by a patient and a computer game. During a therapy session, a webcam that captures images of the hand, together with a number of performance metrics, provides insight into the quality, efficiency, and skill of a patient.
Control Method for Video Guidance Sensor System
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)
2005-01-01
A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.
Multiview echocardiography fusion using an electromagnetic tracking system.
Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; Paakkanen, Riitta; Khan, Nehan; Noga, Michelle; Boulanger, Pierre; Becher, Harald
2016-08-01
Three-dimensional ultrasound is an emerging modality for the assessment of complex cardiac anatomy and function. The advantages of this modality include lack of ionizing radiation, portability, low cost, and high temporal resolution. Major limitations include a limited field-of-view, reliance on frequently limited acoustic windows, and poor signal-to-noise ratio. This study proposes a novel approach to combine multiple views into a single image using an electromagnetic tracking system in order to improve the field-of-view. The novel method has several advantages: 1) it does not rely on image information for alignment, and therefore does not require image overlap; 2) its alignment accuracy is not affected by poor image quality, as is the case for image registration based approaches; 3) in contrast to previous optical tracking based systems, it does not suffer from line-of-sight limitations; and 4) it does not require any initial calibration. In this pilot project, we were able to show, using a heart phantom, that our method can fuse multiple echocardiographic images and improve the field-of-view. Quantitative evaluations showed that the proposed method yielded a nearly optimal alignment of image data sets in three-dimensional space. The proposed method demonstrates that the electromagnetic system can be used for the fusion of multiple echocardiography images with a seamless integration of sensors into the transducer.
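The fusion step relies on mapping each probe-frame point into the tracker's common coordinate frame. A minimal sketch, assuming the EM system reports a 4x4 homogeneous pose per view (the function name and pose layout are illustrative, not the authors' code):

```python
def apply_pose(pose, point):
    """Map a point from probe coordinates into the tracker's common frame
    using a 4x4 homogeneous transform (row-major list of lists) reported
    by the EM sensor attached to the transducer."""
    x, y, z = point
    # Only the top three rows are needed for a rigid transform of a point.
    return tuple(r[0] * x + r[1] * y + r[2] * z + r[3] for r in pose[:3])
```

Because every view's voxels are carried through its own tracked pose, overlapping image content is never needed for alignment, which is the key difference from registration-based fusion.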
Fattori, Giovanni; Safai, Sairos; Carmona, Pablo Fernández; Peroni, Marta; Perrin, Rosalind; Weber, Damien Charles; Lomax, Antony John
2017-03-31
Motion monitoring is essential when treating non-static tumours with pencil beam scanned protons. 4D medical imaging typically relies on the detected body surface displacement, considered a surrogate of the patient's anatomical changes, a concept similarly applied by most motion mitigation techniques. In this study, we investigate the benefits and pitfalls of optical and electromagnetic tracking, key technologies for non-invasive surface motion monitoring, in the specific environment of image-guided, gantry-based proton therapy. The Polaris SPECTRA optical tracking system and the Aurora V3 electromagnetic tracking system from Northern Digital Inc. (NDI, Waterloo, Canada) have been compared both technically, by measuring tracking errors and system latencies under laboratory conditions, and clinically, by assessing their practicalities and sensitivities when used with imaging devices and PBS treatment gantries. Additionally, we investigated the impact of using different surrogate signals, from different systems, on the reconstructed 4D CT images. Even though in controlled laboratory conditions both technologies allow for the localization of static fiducials with sub-millimetre jitter and low latency (31.6 ± 1 ms worst case), significant dynamic and environmental distortions limit the potential of the electromagnetic approach in a clinical setting. The measurement error in close proximity to a CT scanner is up to 10.5 mm, which precludes its use for the monitoring of respiratory motion during 4DCT acquisitions. Similarly, motion of the treatment gantry distorts the tracking result by up to 22 mm. Despite the line-of-sight requirement, the optical solution offers the best potential, being the most robust against environmental factors and providing the highest spatial accuracy.
The significant difference in the temporal location of the reconstructed phase points suggests that the same monitoring system should be used for imaging and treatment to ensure consistency of the detected phases.
NASA Astrophysics Data System (ADS)
O'Shea, Tuathan P.; Garcia, Leo J.; Rosser, Karen E.; Harris, Emma J.; Evans, Philip M.; Bamber, Jeffrey C.
2014-04-01
This study investigates the use of a mechanically-swept 3D ultrasound (3D-US) probe for soft-tissue displacement monitoring during prostate irradiation, with emphasis on quantifying the accuracy relative to CyberKnife® x-ray fiducial tracking. A US phantom, implanted with x-ray fiducial markers, was placed on a motion platform and translated in 3D using five real prostate motion traces acquired using the Calypso system. Motion traces were representative of all types of motion as classified by studying Calypso data for 22 patients. The phantom was imaged using a 3D swept linear-array probe (to mimic trans-perineal imaging) and, subsequently, the kV x-ray imaging system on CyberKnife. A 3D cross-correlation block-matching algorithm was used to track speckle in the ultrasound data. Fiducial and US data were each compared with the known phantom displacement. Trans-perineal 3D-US imaging could track superior-inferior (SI) and anterior-posterior (AP) motion to ≤0.81 mm root-mean-square error (RMSE) at a 1.7 Hz volume rate. The maximum kV x-ray tracking RMSE was 0.74 mm; however, the prostate motion was sampled at a significantly lower imaging rate (mean: 0.04 Hz). Initial elevational (right-left, RL) US displacement estimates showed reduced accuracy but could be improved (RMSE <2.0 mm) using a correlation threshold in the ultrasound tracking code to remove erroneous inter-volume displacement estimates. Mechanically-swept 3D-US can track the major components of intra-fraction prostate motion accurately but exhibits some limitations. The largest US RMSE was for elevational (RL) motion. For the AP and SI axes, accuracy was sub-millimetre. It may be feasible to track prostate motion in 2D only. 3D-US also has the potential to provide high tracking accuracy for all motion types.
It would be advisable to use US in conjunction with a small (~2.0 mm) centre-of-mass displacement threshold, in which case it would be possible to take full advantage of its accuracy and high imaging rate capability.
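The cross-correlation block matching used for speckle tracking, including a correlation threshold for rejecting erroneous displacement estimates, can be sketched in simplified 1D form (illustrative only; the study works in 3D, and the block size, search range, and 0.6 threshold here are assumptions):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length signal blocks."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def estimate_shift(ref, cur, block_start, block_len, search, min_corr=0.6):
    """Find the displacement of a reference block inside the current frame by
    maximizing NCC over a +/- search window; reject weak matches below min_corr."""
    block = ref[block_start:block_start + block_len]
    best_d, best_score = 0, -1.0
    for d in range(-search, search + 1):
        s = block_start + d
        if s < 0 or s + block_len > len(cur):
            continue
        score = ncc(block, cur[s:s + block_len])
        if score > best_score:
            best_d, best_score = d, score
    return (best_d, best_score) if best_score >= min_corr else (None, best_score)
```

The `min_corr` rejection mirrors the paper's finding that a correlation threshold removes erroneous inter-volume estimates on the poorly resolved elevational axis.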
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected on the image plane reference system is translated into coordinates on the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of this work reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a person's face clip, for recognition purposes.
Identification Of Cells With A Compact Microscope Imaging System With Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2006-01-01
A Compact Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard-sphere experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant but difficult to detect. The CMIS system is a machine vision system which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
Tracking target objects orbiting earth using satellite-based telescopes
De Vries, Willem H; Olivier, Scot S; Pertica, Alexander J
2014-10-14
A system for tracking objects that are in earth orbit via a constellation or network of satellites having imaging devices is provided. An object tracking system includes a ground controller and, for each satellite in the constellation, an onboard controller. The ground controller receives ephemeris information for a target object and directs that ephemeris information be transmitted to the satellites. Each onboard controller receives ephemeris information for a target object, collects images of the target object based on the expected location of the target object at an expected time, identifies actual locations of the target object from the collected images, and identifies a next expected location at a next expected time based on the identified actual locations of the target object. The onboard controller processes the collected image to identify the actual location of the target object and transmits the actual location information to the ground controller.
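One simple scheme for the onboard controller's "next expected location" step is linear extrapolation from the two most recent sightings (a sketch under that assumption only; the patent does not specify the predictor, and a real implementation would propagate an orbital model):

```python
def next_expected(locations, times, t_next):
    """Linearly extrapolate the next expected position from the two most
    recent actual sightings; each location is an (x, y, z) tuple and each
    entry of times is the acquisition time of the matching location."""
    (x0, y0, z0), (x1, y1, z1) = locations[-2], locations[-1]
    t0, t1 = times[-2], times[-1]
    f = (t_next - t1) / (t1 - t0)  # fraction of the last interval to project forward
    return (x1 + f * (x1 - x0), y1 + f * (y1 - y0), z1 + f * (z1 - z0))
```

Each new actual location identified from a collected image is appended to the history, so the prediction continually refines the initial ground-supplied ephemeris.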
A high-speed tracking algorithm for dense granular media
NASA Astrophysics Data System (ADS)
Cerda, Mauricio; Navarro, Cristóbal A.; Silva, Juan; Waitukaitis, Scott R.; Mujica, Nicolás; Hitschfeld, Nancy
2018-06-01
Many fields of study, including medical imaging, granular physics, colloidal physics, and active matter, require the precise identification and tracking of particle-like objects in images. While many algorithms exist to track particles in dilute conditions, these often perform poorly when particles are densely packed together, as in, for example, solid-like systems of granular materials. Incorrect particle identification can have significant effects on the calculation of physical quantities, which makes the development of more precise and faster tracking algorithms a worthwhile endeavor. In this work, we present a new tracking algorithm to identify particles in dense systems that is both highly accurate and fast. We demonstrate the efficacy of our approach by analyzing images of dense, solid-state granular media, where we achieve an identification error of 5% in the worst evaluated cases. Going further, we propose a parallelization strategy for our algorithm using a GPU, which results in a speedup of up to 10x when compared to a sequential CPU implementation in C, and up to 40x when compared to the reference MATLAB library widely used for particle tracking. Our results extend the capabilities of state-of-the-art particle tracking methods by allowing fast, high-fidelity detection in dense media at high resolutions.
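A minimal version of dense-particle identification, smoothing followed by separation-constrained local maxima, can be sketched as follows (a pure-Python illustration; the paper's actual algorithm and its GPU parallelization are more sophisticated, and the kernel, threshold, and separation values here are assumptions):

```python
def find_particles(img, radius, min_sep, threshold):
    """Locate particle centres as strong maxima of a box-smoothed image,
    enforcing a minimum separation so densely packed neighbours stay distinct."""
    h, w = len(img), len(img[0])
    r = radius
    # Box-filter smoothing to suppress pixel noise before peak picking.
    smooth = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            smooth[y][x] = sum(vals) / len(vals)
    # Greedily accept peaks from brightest down, rejecting any closer than min_sep.
    peaks = sorted(((smooth[y][x], x, y) for y in range(h) for x in range(w)
                    if smooth[y][x] >= threshold), reverse=True)
    centres = []
    for _, x, y in peaks:
        if all((x - cx) ** 2 + (y - cy) ** 2 >= min_sep ** 2 for cx, cy in centres):
            centres.append((x, y))
    return centres
```

The minimum-separation constraint is what keeps touching particles from collapsing into a single detection, the failure mode the paper targets in solid-like packings.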
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhont, J; Poels, K; Verellen, D
2015-06-15
Purpose: To evaluate the feasibility of markerless tumor tracking through the implementation of a novel dual-energy imaging approach into the clinical dynamic tracking (DT) workflow of the Vero SBRT system. Methods: Two sequential 20 s (11 Hz) fluoroscopy sequences were acquired at the start of one fraction for 7 patients treated for primary and metastatic lung cancer with DT on the Vero system. Sequences were acquired using 2 on-board kV imaging systems located at ±45° from the MV beam axis, at 60 kVp (3.2 mAs) and 120 kVp (2.0 mAs), respectively. Offline, a normalized cross-correlation algorithm was applied to match the high (HE) and low energy (LE) images. Per breathing phase (inhale, exhale, maximum inhale and maximum exhale), the 5 best-matching HE and LE couples were extracted for DE subtraction. A contrast analysis according to gross tumor volume was conducted based on contrast-to-noise ratio (CNR). Improved tumor visibility was quantified using an improvement ratio. Results: Using the implanted fiducial as a benchmark, HE-LE sequence matching was effective for 13 out of 14 imaging angles. Overlying bony anatomy was removed on all DE images. With the exception of two imaging angles, the DE images showed no significantly improved tumor visibility compared to HE images, with an improvement ratio averaged over all patients of 1.46 ± 1.64. Qualitatively, it was observed that for those imaging angles that showed no significantly improved CNR, the tumor tissue could not be reliably visualized on either HE or DE images due to total or partial overlap with other soft tissue. Conclusion: Dual-energy subtraction imaging by sequential orthogonal fluoroscopy was shown to be feasible by implementing an additional LE fluoroscopy sequence. However, for most imaging angles, DE images did not provide improved tumor visibility over single-energy images. Optimizing imaging angles is likely to improve tumor visibility and the efficacy of dual-energy imaging.
This work was in part sponsored by corporate funding from BrainLAB AG (Feldkirchen, Germany).
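The dual-energy subtraction itself is typically a weighted logarithmic difference of the matched HE and LE frames; a minimal per-pixel sketch follows, with a bone-cancelling weight chosen purely for illustration (the clinical weights and normalization are not given in the abstract, and the attenuation coefficients below are made-up example values):

```python
import math

def dual_energy_subtract(high, low, w):
    """Weighted logarithmic subtraction: DE = ln(HE) - w * ln(LE), per pixel.
    With w = mu_bone_HE / mu_bone_LE, the bone contribution cancels,
    leaving soft-tissue contrast."""
    return [[math.log(h) - w * math.log(l) for h, l in zip(hr, lr)]
            for hr, lr in zip(high, low)]

# Toy check: pixels attenuated only by bone (I = exp(-mu * thickness)) should
# vanish in the DE image for any thickness when w = mu_HE / mu_LE = 0.2 / 0.5.
high_row = [[math.exp(-0.2 * t) for t in (0, 1, 2)]]
low_row = [[math.exp(-0.5 * t) for t in (0, 1, 2)]]
de_row = dual_energy_subtract(high_row, low_row, 0.4)
```

Because the cancellation holds regardless of bone thickness, the overlying ribs disappear while soft-tissue structures, whose attenuation ratio differs from bone's, survive.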
NASA Astrophysics Data System (ADS)
Berbeco, Ross I.; Jiang, Steve B.; Sharp, Gregory C.; Chen, George T. Y.; Mostafavi, Hassan; Shirato, Hiroki
2004-01-01
The design of an integrated radiotherapy imaging system (IRIS), consisting of gantry-mounted diagnostic (kV) x-ray tubes and fast read-out flat-panel amorphous-silicon detectors, has been studied. The system is meant to be capable of three main functions: radiographs for three-dimensional (3D) patient set-up, cone-beam CT and real-time tumour/marker tracking. The goal of the current study is to determine whether one source/panel pair is sufficient for real-time tumour/marker tracking and, if two are needed, the optimal position of each relative to other components and the isocentre. A single gantry-mounted source/imager pair is certainly capable of the first two of the three functions listed above and may also be useful for the third, if combined with prior knowledge of the target's trajectory. This would be necessary because only motion in two dimensions is visible with a single imager/source system. However, with previously collected information about the trajectory, the third coordinate may be derived from the other two with sufficient accuracy to facilitate tracking. This deduction of the third coordinate can only be made if the 3D tumour/marker trajectory is consistent from fraction to fraction. The feasibility of tumour tracking with one source/imager pair has been theoretically examined here using measured lung marker trajectory data for seven patients from multiple treatment fractions. The patient selection criteria required minimum mean tumour motion amplitudes greater than 1 cm peak-to-peak. The marker trajectory for each patient was modelled using the first-fraction data. Then, for the rest of the data, marker positions were derived from the imager projections at various gantry angles and compared with the measured tumour positions.
Our results show that, due to the three dimensionality and irregular trajectory characteristics of tumour motion, on a fraction-to-fraction basis, a 'monoscopic' system (single source/imager) is inadequate for consistent real-time tumour tracking, even with prior knowledge. We found that, among the seven patients studied with peak-to-peak marker motion greater than 1 cm, five cases have mean localization errors greater than 2 mm and two have mean errors greater than 3 mm. Because of this uncertainty associated with a monoscopic system, two source/imager pairs are necessary for robust 3D target localization. Dual orthogonal x-ray source/imager pairs mounted on the linac gantry are chosen for the IRIS. We further studied the placement of the x-ray sources/panel based on the geometric specifications of the Varian 21EX Clinac. The best configuration minimizes the localization error while maintaining a large field of view and avoiding collisions with the floor/ceiling or couch.
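The prior-knowledge deduction of the third coordinate can be illustrated with a simple regression fitted on first-fraction data (a sketch, not the authors' trajectory model; as the study itself concludes, any such model fails when the trajectory changes between fractions):

```python
def fit_depth_model(xs, zs):
    """Least-squares line z = a*x + b from a prior-fraction trajectory,
    exploiting correlation between the imaged coordinate x and the
    unseen coordinate z along the source-imager axis."""
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    a = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, mz - a * mx

def infer_depth(model, x):
    """Estimate the unseen third coordinate from a visible one."""
    a, b = model
    return a * x + b
```

When the inter-fraction trajectory is reproducible, this recovers the missing coordinate well; when the x-z correlation drifts, the inferred depth inherits the full drift as error, which is why the study finds a monoscopic system inadequate.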
Shahriari, Navid; Hekman, Edsko; Oudkerk, Matthijs; Misra, Sarthak
2015-11-01
Percutaneous needle insertion procedures are commonly used for diagnostic and therapeutic purposes. Although current technology allows accurate localization of lesions, these lesions cannot yet be precisely targeted. Lung cancer is the most common cause of cancer-related death, and early detection reduces the mortality rate. Therefore, suspicious lesions are tested for diagnosis by performing needle biopsy. In this paper, we present a novel computed tomography (CT)-compatible needle insertion device (NID). The NID is used to steer a flexible needle (0.55 mm diameter) with a bevel at the tip in biological tissue. CT images and an electromagnetic (EM) tracking system are used in two separate scenarios to track the needle tip in three-dimensional space during the procedure. Our system uses a control algorithm to steer the needle through a combination of insertion and a minimal number of rotations. Noise analysis of CT images has demonstrated the compatibility of the device. The results for three experimental cases (case 1: open-loop control, case 2: closed-loop control using the EM tracking system, and case 3: closed-loop control using CT images) are presented. Each experimental case is performed five times, and the average targeting errors are 2.86 ± 1.14, 1.11 ± 0.14 and 1.94 ± 0.63 mm for case 1, case 2 and case 3, respectively. The achieved results show that our device is CT-compatible and that it is able to steer a bevel-tipped needle toward a target. We are able to use intermittent CT images and EM tracking data to control the needle path in a closed-loop manner. These results are promising and suggest that it will be possible to accurately target lesions in real clinical procedures in the future.
NASA Astrophysics Data System (ADS)
Malone, Joseph D.; El-Haddad, Mohamed T.; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Tao, Yuankai K.
2016-03-01
Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) benefit clinical diagnostic imaging in ophthalmology by enabling in vivo noninvasive en face and volumetric visualization of retinal structures, respectively. Spectral encoding methods enable confocal imaging through fiber optics and reduce system complexity. Previous applications in ophthalmic imaging include spectrally encoded confocal scanning laser ophthalmoscopy (SECSLO) and a combined SECSLO-OCT system for image guidance, tracking, and registration. However, spectrally encoded imaging suffers from speckle noise because each spectrally encoded channel is effectively monochromatic. Here, we demonstrate in vivo human retinal imaging using a swept-source spectrally encoded scanning laser ophthalmoscope and OCT (SS-SESLO-OCT) at 1060 nm. SS-SESLO-OCT uses a shared 100 kHz Axsun swept source and shared scanner and imaging optics, and both channels are detected simultaneously on a shared, dual-channel high-speed digitizer. SESLO illumination and detection were performed using the single-mode core and multimode inner cladding of a double-clad fiber coupler, respectively, to preserve lateral resolution while improving collection efficiency and reducing speckle contrast at the expense of confocality. Concurrent en face SESLO and cross-sectional OCT images were acquired with 1376 x 500 pixels at 200 frames per second. Our system design is compact and uses a shared light source, imaging optics, and digitizer, which reduces overall system complexity and ensures inherent co-registration between the SESLO and OCT FOVs. En face SESLO images acquired concurrently with OCT cross-sections enable lateral motion tracking and three-dimensional volume registration, with broad applications in multivolume OCT averaging, image mosaicking, and intraoperative instrument tracking.
Ge, Yuanyuan; O’Brien, Ricky T.; Shieh, Chun-Chien; Booth, Jeremy T.; Keall, Paul J.
2014-01-01
Purpose: Intrafraction deformation limits targeting accuracy in radiotherapy. Studies show tumor deformation of over 10 mm for both single-tumor deformation and system deformation (due to differential motion between primary tumors and involved lymph nodes). Such deformation cannot be adapted to with current radiotherapy methods. The objective of this study was to develop and experimentally investigate the ability of a dynamic multi-leaf collimator (DMLC) tracking system to account for tumor deformation. Methods: To compensate for tumor deformation, the DMLC tracking strategy is to warp the planned beam aperture directly to conform to the new tumor shape based on real-time tumor deformation input. Two deformable phantoms that correspond to a single tumor and a tumor system were developed. The planar deformations derived from the phantom images in beam's eye view were used to guide the aperture warping. In-house deformable image registration software was developed to automatically trigger the registration once a new target image was acquired and send the computed deformation to the DMLC tracking software. Because the registration speed was not fast enough to run the experiment in real time, the phantom deformation only proceeded to the next position once registration of the current deformation position was completed. The deformation tracking accuracy was evaluated by a geometric target coverage metric defined as the sum of the area incorrectly outside and inside the ideal aperture. The individual contributions from the deformable registration algorithm and the finite leaf width to the tracking uncertainty were analyzed. A clinical proof-of-principle experiment of deformation tracking using previously acquired MR images of a lung cancer patient was implemented to represent the MRI-Linac environment. Intensity-modulated radiation therapy (IMRT) treatment delivered with deformation tracking enabled was simulated and demonstrated.
Results: The first experimental investigation of adapting to tumor deformation has been performed using simple deformable phantoms. For the single-tumor deformation, the Au+Ao metric was reduced by over 56% when deformation was larger than 2 mm. Overall, the total improvement was 82%. For the tumor system deformation, the Au+Ao reductions were all above 75% and the total Au+Ao improvement was 86%. Similar coverage improvement was also found when simulating deformation tracking during IMRT delivery. The deformable image registration algorithm was identified as the dominant contributor to the tracking error, rather than the finite leaf width. The discrepancy between the warped beam shape and the ideal beam shape due to the deformable registration was observed to be partially compensated during leaf fitting due to the finite leaf width. The clinical proof-of-principle experiment demonstrated the feasibility of intrafraction deformable tracking for clinical scenarios. Conclusions: For the first time, we developed and demonstrated an experimental system that is capable of adapting the MLC aperture to account for tumor deformation. This work provides a potentially widely available management method to effectively account for intrafractional tumor deformation. This proof-of-principle study is the first experimental step toward the development of an image-guided radiotherapy system to treat deforming tumors in real time. PMID:24877798
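The geometric coverage metric, area incorrectly uncovered inside the ideal aperture (Au) plus area incorrectly exposed outside it (Ao), can be sketched on a pixelized beam's-eye view (illustrative, not the study's code; the 0/1 mask representation is an assumption):

```python
def coverage_error(delivered, ideal):
    """Au + Ao on a pixelized beam's-eye view, where each mask cell is
    1 (aperture open) or 0 (aperture blocked)."""
    a_u = a_o = 0
    for drow, irow in zip(delivered, ideal):
        for d, i in zip(drow, irow):
            if i and not d:
                a_u += 1  # ideal aperture open but tracking aperture closed (underdose)
            elif d and not i:
                a_o += 1  # tracking aperture open outside the ideal aperture (overdose)
    return a_u + a_o
```

A delivered aperture that lags a deforming target by one pixel column scores both an underdose strip and an overdose strip, which is why reductions in Au+Ao directly reflect better deformation tracking.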
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ge, Yuanyuan; O’Brien, Ricky T.; Shieh, Chun-Chien
Purpose: Intrafraction deformation limits targeting accuracy in radiotherapy. Studies show tumor deformation of over 10 mm for both single tumor deformation and system deformation (due to differential motion between primary tumors and involved lymph nodes). Such deformation cannot be adapted to with current radiotherapy methods. The objective of this study was to develop and experimentally investigate the ability of a dynamic multi-leaf collimator (DMLC) tracking system to account for tumor deformation. Methods: To compensate for tumor deformation, the DMLC tracking strategy is to warp the planned beam aperture directly to conform to the new tumor shape based on real timemore » tumor deformation input. Two deformable phantoms that correspond to a single tumor and a tumor system were developed. The planar deformations derived from the phantom images in beam's eye view were used to guide the aperture warping. An in-house deformable image registration software was developed to automatically trigger the registration once new target image was acquired and send the computed deformation to the DMLC tracking software. Because the registration speed is not fast enough to implement the experiment in real-time manner, the phantom deformation only proceeded to the next position until registration of the current deformation position was completed. The deformation tracking accuracy was evaluated by a geometric target coverage metric defined as the sum of the area incorrectly outside and inside the ideal aperture. The individual contributions from the deformable registration algorithm and the finite leaf width to the tracking uncertainty were analyzed. Clinical proof-of-principle experiment of deformation tracking using previously acquired MR images of a lung cancer patient was implemented to represent the MRI-Linac environment. Intensity-modulated radiation therapy (IMRT) treatment delivered with enabled deformation tracking was simulated and demonstrated. 
Results: The first experimental investigation of adapting to tumor deformation has been performed using simple deformable phantoms. For the single tumor deformation, A_u + A_o was reduced by over 56% when the deformation was larger than 2 mm; overall, the total improvement was 82%. For the tumor system deformation, the A_u + A_o reductions were all above 75% and the total improvement was 86%. Similar coverage improvement was found when simulating deformation tracking during IMRT delivery. The deformable image registration algorithm, rather than the finite leaf width, was identified as the dominant contributor to the tracking error. The discrepancy between the warped beam shape and the ideal beam shape due to the deformable registration was observed to be partially compensated during leaf fitting owing to the finite leaf width. The clinical proof-of-principle experiment demonstrated the feasibility of intrafraction deformable tracking for clinical scenarios. Conclusions: For the first time, we developed and demonstrated an experimental system that is capable of adapting the MLC aperture to account for tumor deformation. This work provides a potentially widely available management method to effectively account for intrafractional tumor deformation. This proof-of-principle study is the first experimental step toward the development of an image-guided radiotherapy system to treat deforming tumors in real time.
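The geometric target coverage metric used in this abstract (area incorrectly exposed outside the ideal aperture plus area incorrectly blocked inside it) is straightforward to compute on binary aperture masks. A minimal sketch; the function name and the toy apertures are illustrative, not from the paper:

```python
import numpy as np

def coverage_error(delivered, ideal):
    """Sum of the area incorrectly outside (A_o) and incorrectly
    missing inside (A_u) the ideal aperture, in pixels."""
    delivered = np.asarray(delivered, dtype=bool)
    ideal = np.asarray(ideal, dtype=bool)
    a_o = np.count_nonzero(delivered & ~ideal)  # exposed outside the target
    a_u = np.count_nonzero(~delivered & ideal)  # target left uncovered
    return a_u + a_o

ideal = np.zeros((10, 10), dtype=bool)
ideal[2:8, 2:8] = True               # 6x6 ideal aperture
delivered = np.zeros_like(ideal)
delivered[3:9, 2:8] = True           # same aperture shifted down one row
print(coverage_error(delivered, ideal))  # → 12 (6 px uncovered + 6 px over-exposed)
```

A perfectly tracking aperture gives 0; larger values mean worse geometric coverage, matching how the percentage reductions above are interpreted.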
Multimode intravascular RF coil for MRI-guided interventions.
Kurpad, Krishna N; Unal, Orhan
2011-04-01
To demonstrate the feasibility of using a single intravascular radiofrequency (RF) probe, connected to the external magnetic resonance imaging (MRI) system via a single coaxial cable, to perform active tip tracking, catheter visualization, and high signal-to-noise ratio (SNR) intravascular imaging. A multimode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. The multimode coil behaves as an inductively coupled transmit coil. A forward-looking capability of 6 mm was measured. A greater than 3-fold increase in SNR compared to conventional imaging using an optimized external coil was demonstrated. Simultaneous active tip tracking and catheter visualization were demonstrated. It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high-SNR imaging using a single multimode intravascular RF coil that is connected to the external system via a single coaxial cable. Copyright © 2011 Wiley-Liss, Inc.
A Scalable Distributed Approach to Mobile Robot Vision
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.
1997-01-01
This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).
Human body motion capture from multi-image video sequences
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2003-01-01
This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of the video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking, and tracking of key points. The image acquisition system is currently composed of three synchronized progressive-scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of internal orientation, and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet, and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The result of the tracking process is the coordinates of a point in the three images through the sequence; the 3-D trajectory is then determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed.
The advantage of this tracking process is twofold: it can track natural points without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest; the result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established from the mean displacement of all the trajectories inside the region. The tracked key points lead to a final result comparable to that of conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
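The forward ray intersection step used throughout this abstract can be sketched as a least-squares problem: each matched image point defines a ray from its camera's projection center, and the 3-D point is the one minimizing the summed squared distance to all rays. A minimal sketch under that formulation; the function name and interface are illustrative, not the paper's:

```python
import numpy as np

def forward_ray_intersection(centers, dirs):
    """Least-squares 3-D point closest to a bundle of camera rays:
    minimize sum_i ||(I - d_i d_i^T)(p - c_i)||^2 over p."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(np.asarray(centers, float), np.asarray(dirs, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# three cameras on the coordinate axes, all aimed at the origin
centers = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
point = forward_ray_intersection(centers, -centers)
print(np.round(point, 6))  # → [0. 0. 0.]
```

With three cameras, as in the triplet setup described above, the system is well over-determined, which is what makes dense surface measurement by intersection robust.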
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work in which we showed the benefits for the human observer of a user interface that utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye-tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.
Zaitsev, Vladimir Y; Matveyev, Alexandr L; Matveev, Lev A; Gelikonov, Grigory V; Gelikonov, Valentin M; Vitkin, Alex
2015-07-01
Feasibility of speckle tracking in optical coherence tomography (OCT) based on digital image correlation (DIC) is discussed in the context of elastography problems. Specifics of applying DIC methods to OCT, compared to processing of photographic images in mechanical engineering applications, are emphasized and main complications are pointed out. Analytical arguments are augmented by accurate numerical simulations of OCT speckle patterns. In contrast to DIC processing for displacement and strain estimation in photographic images, the accuracy of correlational speckle tracking in deformed OCT images is strongly affected by the coherent nature of speckles, for which strain-induced complications of speckle “blinking” and “boiling” are typical. The tracking accuracy is further compromised by the usually more pronounced pixelated structure of OCT scans compared with digital photographic images in classical DIC applications. Processing of complex-valued OCT data (comprising both amplitude and phase) compared to intensity-only scans mitigates these deleterious effects to some degree. Criteria of the attainable speckle tracking accuracy and its dependence on the key OCT system parameters are established.
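As a toy illustration of the correlational speckle tracking discussed in this abstract, an integer-pixel displacement between two speckle images can be recovered by exhaustive normalized cross-correlation over a small shift range. This is a deliberately simplified sketch: real DIC/OCT processing uses subpixel interpolation and must contend with the speckle "blinking" and "boiling" the abstract warns about; all names are illustrative:

```python
import numpy as np

def ncc_shift(ref, moved, max_shift=3):
    """Brute-force integer-pixel displacement estimate by normalized
    cross-correlation; returns the (dy, dx) that best aligns `moved` to `ref`."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moved, -dy, axis=0), -dx, axis=1)
            a = ref - ref.mean()
            b = shifted - shifted.mean()
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b))
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(0)
speckle = rng.random((32, 32))
moved = np.roll(np.roll(speckle, 2, axis=0), 1, axis=1)  # rigid shift by (2, 1)
print(ncc_shift(speckle, moved))  # → (2, 1)
```

For a rigid shift the correlation peak is sharp; under strain, deformed coherent speckle decorrelates and the peak degrades, which is exactly the accuracy limitation the paper analyzes.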
Tracking formulas and strategies for a receiver oriented dual-axis tracking toroidal heliostat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Minghuan; Wang, Zhifeng; Liang, Wenfeng
2010-06-15
A 4 m x 4 m toroidal heliostat with receiver-oriented dual-axis tracking, also called spinning-elevation tracking, was developed as an auxiliary heat source for a hydrogen production system. A series of spinning-elevation tracking formulas has been derived for this heliostat, including the basic tracking formulas, a formula for the elevation angle of a heliostat with a mirror-pivot offset, and a more general formula for the biased elevation angle. This paper presents the new tracking formulas in detail and analyzes the accuracy of applying a simplifying approximation. The numerical results show that these receiver-oriented dual-axis tracking formula approximations are accurate to within 2.5 x 10^-6 m in the image plane. Some practical tracking strategies are discussed briefly. Solar images from the toroidal heliostat at selected times are also presented. (author)
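Any heliostat tracking computation, including the spinning-elevation formulas derived in this paper, starts from the sun's position. A sketch of the standard solar-altitude relation that supplies that input (this is the textbook formula, not the paper's heliostat-specific tracking formulas):

```python
import math

def solar_elevation_deg(lat_deg, decl_deg, hour_angle_deg):
    """Standard solar-altitude relation:
    sin(h) = sin(phi)sin(delta) + cos(phi)cos(delta)cos(omega),
    with phi the latitude, delta the solar declination, omega the hour angle."""
    phi, delta, omega = (math.radians(v) for v in (lat_deg, decl_deg, hour_angle_deg))
    s = (math.sin(phi) * math.sin(delta)
         + math.cos(phi) * math.cos(delta) * math.cos(omega))
    return math.degrees(math.asin(s))

# solar noon on an equinox at the equator: the sun is at the zenith
print(round(solar_elevation_deg(0.0, 0.0, 0.0), 6))  # → 90.0
```

The heliostat's spinning and elevation angles are then chosen so that the mirror normal bisects the sun vector and the mirror-to-receiver vector, which is where the paper's mirror-pivot-offset corrections enter.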
Wang, Mengmeng; Ong, Lee-Ling Sharon; Dauwels, Justin; Asada, H Harry
2018-04-01
Cell migration is a key feature for living organisms. Image analysis tools are useful in studying cell migration in three-dimensional (3-D) in vitro environments. We consider angiogenic vessels formed in 3-D microfluidic devices (MFDs) and develop an image analysis system to extract cell behaviors from experimental phase-contrast microscopy image sequences. The proposed system initializes tracks with the end-point confocal nuclei coordinates. We apply convolutional neural networks to detect cell candidates and combine backward Kalman filtering with multiple hypothesis tracking to link the cell candidates at each time step. These hypotheses incorporate prior knowledge on vessel formation and cell proliferation rates. The association accuracy reaches 86.4% for the proposed algorithm, indicating that the proposed system is able to associate cells more accurately than existing approaches. Cell culture experiments in 3-D MFDs have shown considerable promise for improving biology research. The proposed system is expected to be a useful quantitative tool for potential microscopy problems of MFDs.
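The linking step described above combines backward Kalman filtering with multiple hypothesis tracking. As a hedged, much-reduced sketch of the filtering ingredient alone, a forward constant-velocity Kalman filter over 2-D cell detections could look like this (the class name and all parameter values are arbitrary illustrations, not the paper's):

```python
import numpy as np

class KalmanCV:
    """Constant-velocity Kalman filter in 2-D; state = [x, y, vx, vy]."""
    def __init__(self, q=1e-2, r=1e-1):
        dt = 1.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt  # motion model
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)   # process noise
        self.R = r * np.eye(2)   # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the detected cell position z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

kf = KalmanCV()
for t in range(20):
    est = kf.step(np.array([1.0 * t, 0.5 * t]))  # cell drifting at constant velocity
```

In the paper's system the filter's predicted positions feed the multiple-hypothesis data association, with hypotheses weighted by priors on vessel formation and proliferation rates.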
Yeo, Boon Y.; McLaughlin, Robert A.; Kirk, Rodney W.; Sampson, David D.
2012-01-01
We present a high-resolution three-dimensional position tracking method that allows an optical coherence tomography (OCT) needle probe to be scanned laterally by hand, providing the high degree of flexibility and freedom required in clinical usage. The method is based on a magnetic tracking system, which is augmented by cross-correlation-based resampling and a two-stage moving window average algorithm to improve upon the tracker's limited intrinsic spatial resolution, achieving 18 µm RMS position accuracy. A proof-of-principle system was developed, with successful image reconstruction demonstrated on phantoms and on ex vivo human breast tissue validated against histology. This freehand scanning method could contribute toward clinical implementation of OCT needle imaging. PMID:22808429
Development of a Sunspot Tracking System
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1998-01-01
Large solar flares produce a significant amount of energetic particles which pose a hazard for human activity in space. In the hope of understanding flare mechanisms and thus better predicting solar flares, NASA's Marshall Space Flight Center (MSFC) developed an experimental vector magnetograph (EXVM) polarimeter to measure the Sun's magnetic field. The EXVM will be used to perform ground-based solar observations and will provide a proof of concept for the design of a similar instrument for the Japanese Solar-B space mission. The EXVM typically operates for a period of several minutes. During this time there is image motion due to atmospheric fluctuation and telescope wind loading. To optimize the EXVM performance, an image motion compensation device (sunspot tracker) is needed. The sunspot tracker consists of two parts, an image motion determination system and an image deflection system. For image motion determination, a CCD or CID camera is used to digitize an image, then an algorithm is applied to determine the motion. This motion or error signal is sent to the image deflection system, which moves the image back to its original location. Both of these systems are under development. Two algorithms are available for sunspot tracking which require the use of only one row and one column of image data. To implement these algorithms, two identical independent systems are being developed, one for each axis of motion. Two CID cameras have been purchased; the data from each camera will be used to determine image motion for each direction. The error signal generated by the tracking algorithm will be sent to an image deflection system consisting of an actuator and a mirror constrained to move about one axis. Magnetostrictive actuators were chosen over piezoelectric actuators to move the mirror, due to their larger driving force and larger range of motion. The actuator and mirror mounts are currently under development.
Application of unscented Kalman filter for robust pose estimation in image-guided surgery
NASA Astrophysics Data System (ADS)
Vaccarella, Alberto; De Momi, Elena; Valenti, Marta; Ferrigno, Giancarlo; Enquobahrie, Andinet
2012-02-01
Image-guided surgery (IGS) allows clinicians to view current, intra-operative scenes superimposed on preoperative images (typically MRI or CT scans). IGS systems use localization systems to track surgical tools and visualize them overlaid on preoperative images of the patient during surgery. The most commonly used localization systems in the Operating Room (OR) are optical tracking systems (OTSs), due to their ease of use and cost effectiveness. However, OTSs suffer from the major drawback of line-of-sight requirements. State-space approaches based on different implementations of the Kalman filter have recently been investigated in order to compensate for short line-of-sight occlusions. However, the proposed parameterizations of the rigid-body orientation suffer from singularities at certain rotation angles. The purpose of this work is to develop a quaternion-based unscented Kalman filter (UKF) for robust optical tracking of both position and orientation of surgical tools, in order to compensate for marker occlusion. This paper presents preliminary results towards a Kalman-based Sensor Management Engine (SME). The engine will filter and fuse multimodal tracking streams of data. This work was motivated by our experience in robot-based applications for keyhole neurosurgery (the ROBOCAST project). The algorithm was evaluated using real data from an NDI Polaris tracker. The results show that our estimation technique is able to compensate for marker occlusion with a maximum error of 2.5° in orientation and 2.36 mm in position. The proposed approach will be useful in over-crowded state-of-the-art ORs where achieving continuous visibility of all tracked objects is difficult.
Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.
Silverstein, Evan; Snyder, Michael
2018-05-01
To present and evaluate a straightforward implementation of a marker-less respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen will move slightly within the depth image across pixels. By tracking how the depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on the changing depth values of the selected pixel. Tracking these values was implemented via a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces obtained by each, using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile range (IQR) values were obtained comparing the times correlated with specific amplitude and phase percentages across the products. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correspond to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking the depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect with results comparable to those of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of the American Association of Physicists in Medicine.
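The core idea of this abstract, sampling the depth value of a fixed pixel across frames rather than following an anatomical point through the image, reduces to a few lines. A minimal sketch with synthetic depth frames (frame sizes, pixel indices, and the sinusoidal "breathing" are illustrative assumptions, not the study's data):

```python
import numpy as np

def respiratory_trace(depth_frames, row, col):
    """Marker-less respiratory signal: the depth value of one
    user-selected pixel, read out frame by frame."""
    return np.array([frame[row, col] for frame in depth_frames])

# synthetic example: a flat 'chest' 800 mm from the sensor, 4 s breathing period
t = np.linspace(0.0, 8.0, 200)                    # 8 s of frames
frames = [np.full((48, 64), 800.0) for _ in t]
for frame, ti in zip(frames, t):
    frame[24, 32] += 10.0 * np.sin(2.0 * np.pi * ti / 4.0)
trace = respiratory_trace(frames, 24, 32)
print(round(trace.max(), 1), round(trace.min(), 1))  # → 810.0 790.0
```

Phase- or amplitude-based gating, as compared against RPM and Anzai above, then operates on this 1-D trace exactly as it would on a belt or marker signal.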
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features on the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each individual stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as on variations in the direction of motion. Fast cameras increase accuracy but limit the allowable speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
Three-dimensional tracking and imaging laser scanner for space operations
NASA Astrophysics Data System (ADS)
Laurin, Denis G.; Beraldin, J. A.; Blais, Francois; Rioux, Marc; Cournoyer, Luc
1999-05-01
This paper presents the development of a laser range scanner (LARS) as a three-dimensional sensor for space applications. The scanner is a versatile system capable of surface imaging, target ranging, and tracking. It is capable of short-range (0.5 m to 20 m) and long-range (20 m to 10 km) sensing using triangulation and time-of-flight (TOF) methods, respectively. At short range (1 m), the resolution is sub-millimeter and drops gradually with distance (2 cm at 10 m). For long range, the TOF provides a constant resolution of plus or minus 3 cm, independent of range. The LARS could complement the existing Canadian Space Vision System (CSVS) for robotic manipulation. As an active vision system, the LARS is immune to sunlight and adverse lighting; this is a major advantage over the CSVS, as outlined in this paper. The LARS could also replace existing radar systems used for rendezvous and docking. There are clear advantages of an optical system over a microwave radar in terms of size, mass, power and precision. Equipped with two high-speed galvanometers, the laser can be steered to address any point in a 30° × 30° field of view. The scanning can be continuous (raster scan, Lissajous) or direct (random). This gives the scanner the ability to register high-resolution 3-D images of range and intensity (up to 4000 × 4000 pixels) and to perform point target tracking as well as object recognition and geometrical tracking. The imaging capability of the scanner using an eye-safe laser is demonstrated. An efficient fiber laser delivers 60 mW CW or 3 µJ pulses at 20 kHz for TOF operation. Implementation of search and track of multiple targets is also demonstrated. For a single target, refresh rates of up to 137 Hz are possible. Considerations for space qualification of the scanner are discussed. Typical space operations, such as docking, object attitude tracking, and inspections, are described.
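The TOF ranging mode mentioned above rests on one relation: the pulse covers twice the range, out and back, so range = c·t/2. A small sketch (the ~66.7 ns example value is an illustration, not a figure from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_s):
    """Time-of-flight ranging: the laser pulse travels to the target
    and back, so range = c * t / 2."""
    return C * round_trip_s / 2.0

# a pulse echo arriving after ~66.7 ns corresponds to a target ~10 m away
print(round(tof_range(66.7e-9), 3))  # → 9.998
```

The quoted ±3 cm range resolution corresponds to timing the echo to roughly 0.2 ns (0.06 m of round-trip path at c), which is why TOF resolution is constant with distance while triangulation degrades.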
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance to the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving-target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load on the human operator, for example by incorporating the operator's visual focus of attention; a gaze-enhanced user interface is able to help here. This work extends our prior work on automated target recognition, segmentation, and tracking algorithms, as well as our prior work on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception with those of the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, investigating, e.g., how best to relaunch tracking in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
Active illuminated space object imaging and tracking simulation
NASA Astrophysics Data System (ADS)
Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu
2016-10-01
Ground-based optical imaging simulation of a space target in orbit, and the target's extraction under laser illumination, are discussed. Based on the orbit and corresponding attitude of a satellite, a 3-D rendering of its image was built. A general simulation platform was developed which adapts to different 3-D satellite models and to the relative geometry between the satellite and the earth-based detector system. A unified parallel-projection technique is proposed in this paper. Furthermore, we note that the random optical intensity distribution under laser illumination is a challenge for object discrimination, with the strong randomness of the active-illumination laser speckle being the primary factor. The combined effects of a multi-frame accumulation process and several tracking methods, such as mean-shift tracking, contour centroid tracking, and filter deconvolution, were simulated. Comparison of the results shows that the combination of multi-frame accumulation and contour centroid tracking is preferable for actively laser-illuminated images, providing high tracking precision and stability over multiple object attitudes.
Voltage-based device tracking in a 1.5 Tesla MRI during imaging: initial validation in swine models.
Schmidt, Ehud J; Tse, Zion T H; Reichlin, Tobias R; Michaud, Gregory F; Watkins, Ronald D; Butts-Pauly, Kim; Kwong, Raymond Y; Stevenson, William; Schweitzer, Jeffrey; Byrd, Israel; Dumoulin, Charles L
2014-03-01
Voltage-based device-tracking (VDT) systems are commonly used for tracking invasive devices in electrophysiological cardiac-arrhythmia therapy. During electrophysiological procedures, electro-anatomic mapping workstations provide guidance by integrating VDT location and intracardiac electrocardiogram information with X-ray, computerized tomography, ultrasound, and MR images. MR assists navigation, mapping, and radiofrequency ablation. Multimodality interventions require multiple patient transfers between an MRI and the X-ray/ultrasound electrophysiological suite, increasing the likelihood of patient-motion and image misregistration. An MRI-compatible VDT system may increase efficiency, as there is currently no single method to track devices both inside and outside the MRI scanner. An MRI-compatible VDT system was constructed by modifying a commercial system. Hardware was added to reduce MRI gradient-ramp and radiofrequency unblanking pulse interference. VDT patches and cables were modified to reduce heating. Five swine cardiac VDT electro-anatomic mapping interventions were performed, navigating inside and thereafter outside the MRI. Three-catheter VDT interventions were performed at >12 frames per second both inside and outside the MRI scanner with <3 mm error. Catheters were followed on VDT- and MRI-derived maps. Simultaneous VDT and imaging was possible in repetition time >32 ms sequences with <0.5 mm errors, and <5% MRI signal-to-noise ratio (SNR) loss. At shorter repetition times, only intracardiac electrocardiogram was reliable. Radiofrequency heating was <1.5°C. An MRI-compatible VDT system is feasible. Copyright © 2013 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ge, Y; OBrien, R; Shieh, C
2014-06-15
Purpose: Intrafraction tumor deformation limits targeting accuracy in radiotherapy and cannot be adapted to by current motion management techniques. This study simulated intrafractional treatment adaptation to tumor deformations using a dynamic Multi-Leaf Collimator (DMLC) tracking system during Intensity-modulated radiation therapy (IMRT) treatment for the first time. Methods: The DMLC tracking system was developed to adapt to the intrafraction tumor deformation by warping the planned beam aperture guided by the calculated deformation vector field (DVF) obtained from deformable image registration (DIR) at the time of treatment delivery. Seven single phantom deformation images up to 10.4 mm deformation and eight tumor systemmore » phantom deformation images up to 21.5 mm deformation were acquired and used in tracking simulation. The intrafraction adaptation was simulated at the DMLC tracking software platform, which was able to communicate with the image registration software, reshape the instantaneous IMRT field aperture and log the delivered MLC fields.The deformation adaptation accuracy was evaluated by a geometric target coverage metric defined as the sum of the area incorrectly outside and inside the reference aperture. The incremental deformations were arbitrarily determined to take place equally over the delivery interval. The geometric target coverage of delivery with deformation adaptation was compared against the delivery without adaptation. Results: Intrafraction deformation adaptation during dynamic IMRT plan delivery was simulated for single and system deformable phantoms. For the two particular delivery situations, over the treatment course, deformation adaptation improved the target coverage by 89% for single target deformation and 79% for tumor system deformation compared with no-tracking delivery. Conclusion: This work demonstrated the principle of real-time tumor deformation tracking using a DMLC. 
This is the first step towards the development of an image-guided radiotherapy system to treat deforming tumors in real-time. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship, Cure Cancer Australia Foundation, NHMRC Project Grant APP1042375 and US NIH/NCI R01CA93626.
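The geometric target-coverage metric described above (area incorrectly inside plus area incorrectly outside the reference aperture) can be sketched with boolean masks. This is a minimal illustration; the grid size, aperture shapes, and function name are assumptions, not the authors' implementation.

```python
import numpy as np

def coverage_error(delivered, reference):
    """Geometric target-coverage metric: area incorrectly inside the
    delivered aperture (outside the reference) plus area incorrectly
    outside it (inside the reference), in pixels."""
    delivered = delivered.astype(bool)
    reference = reference.astype(bool)
    over = np.logical_and(delivered, ~reference).sum()
    under = np.logical_and(~delivered, reference).sum()
    return int(over + under)

# toy apertures on a 10 x 10 grid
ref = np.zeros((10, 10), dtype=bool)
ref[2:8, 2:8] = True            # reference (ideally warped) aperture
delivered = np.zeros_like(ref)
delivered[3:9, 2:8] = True      # delivered aperture, lagging by 1 pixel
print(coverage_error(delivered, ref))  # 12: one 6-pixel row over + one under
```

A perfectly adapted delivery gives a metric of zero; any lag or shape mismatch adds symmetric over- and under-coverage terms.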
MRI-guided tumor tracking in lung cancer radiotherapy
NASA Astrophysics Data System (ADS)
Cerviño, Laura I.; Du, Jiang; Jiang, Steve B.
2011-07-01
Precise tracking of lung tumor motion during treatment delivery still represents a challenge in radiation therapy. Prototypes of MRI-linac hybrid systems are being created which have the potential of ionization-free real-time imaging of the tumor. This study evaluates the performance of lung tumor tracking algorithms in cine-MRI sagittal images from five healthy volunteers. Visible vascular structures were used as targets. Volunteers performed several series of regular and irregular breathing. Two tracking algorithms were implemented and evaluated: a template matching (TM) algorithm in combination with surrogate tracking using the diaphragm (surrogate was used when the maximum correlation between the template and the image in the search window was less than specified), and an artificial neural network (ANN) model based on the principal components of a region of interest that encompasses the target motion. The mean tracking error ē and the error at 95% confidence level e95 were evaluated for each model. The ANN model led to ē = 1.5 mm and e95 = 4.2 mm, while TM led to ē = 0.6 mm and e95 = 1.0 mm. An extra series was considered separately to evaluate the benefit of using surrogate tracking in combination with TM when target out-of-plane motion occurs. For this series, the mean error was 7.2 mm using only TM and 1.7 mm when the surrogate was used in combination with TM. Results show that, as opposed to tracking with other imaging modalities, ANN does not perform well in MR-guided tracking. TM, however, leads to highly accurate tracking. Out-of-plane motion could be addressed by surrogate tracking using the diaphragm, which can be easily identified in the images.
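The template-matching-with-surrogate scheme above can be sketched as follows: an exhaustive normalized cross-correlation (NCC) search, falling back to a surrogate (diaphragm-derived) position when the peak correlation drops below a threshold, as happens with out-of-plane motion. The function names, threshold value, and toy images are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation search.
    Returns (peak_score, row, col) of the best match."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best = (-1.0, 0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            den = np.sqrt((wz ** 2).sum()) * t_norm
            score = (wz * t).sum() / den if den > 0 else 0.0
            if score > best[0]:
                best = (score, r, c)
    return best

def track(image, template, surrogate_pos, threshold=0.8):
    """Report the template position unless the NCC peak drops below
    the threshold (e.g. out-of-plane motion); then fall back to the
    surrogate (diaphragm-derived) position."""
    score, r, c = ncc_match(image, template)
    return (r, c) if score >= threshold else surrogate_pos

frame = np.zeros((20, 20))
tmpl = np.arange(25, dtype=float).reshape(5, 5)
frame[7:12, 4:9] = tmpl
print(track(frame, tmpl, surrogate_pos=(9, 9)))   # (7, 4): target found
print(track(np.zeros((20, 20)), tmpl, (9, 9)))    # (9, 9): surrogate used
```

The fallback rule mirrors the paper's hybrid design: NCC handles in-plane motion with sub-millimeter accuracy, while the surrogate bounds the error when the target leaves the imaging plane.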
NASA Astrophysics Data System (ADS)
Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin
2013-12-01
We have proposed a component-parameter measurement method based on differential confocal focusing theory. To improve the positioning precision of the laser differential confocal component-parameter measurement system (LDDCPMS), this paper provides a data processing method based on light-spot tracking. To reduce the error caused by movement of the light spot while collecting the axial intensity signal, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the light-spot tracking method can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thereby improving its positioning accuracy.
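The three processing stages described above (Airy-disk centroid tracking, Gaussian filtering of the axial response, and a linear fit for the differential zero point) can be sketched as follows. The synthetic signal, kernel width, and fitting window are illustrative assumptions.

```python
import numpy as np

def airy_centroid(img):
    """Intensity-weighted centroid, used to track the Airy-disk center."""
    ys, xs = np.indices(img.shape)
    s = img.sum()
    return (ys * img).sum() / s, (xs * img).sum() / s

def gaussian_smooth(signal, sigma=2.0):
    """Gaussian filtering to suppress higher-harmonic noise."""
    n = int(4 * sigma)
    x = np.arange(-n, n + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return np.convolve(signal, k / k.sum(), mode="same")

def focus_zero(z, response):
    """Linear fit around the sign change of the differential axial
    response; the fitted root estimates the objective focus position."""
    i = int(np.where(np.diff(np.sign(response)) != 0)[0][0])
    window = slice(max(i - 2, 0), i + 3)
    slope, intercept = np.polyfit(z[window], response[window], 1)
    return -intercept / slope

# synthetic differential axial response: linear near focus (z = 0)
# with a small higher-harmonic ripple
z = np.linspace(-1.0, 1.0, 41)
resp = gaussian_smooth(z + 0.01 * np.sin(40 * z))
print(abs(focus_zero(z, resp)) < 1e-3)  # True: focus recovered at z ≈ 0
```

The differential confocal axial response is approximately linear through zero at focus, which is why a local linear fit of the filtered signal suffices to localize the zero point.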
3D terrain reconstruction using Chang’E-3 PCAM images
NASA Astrophysics Data System (ADS)
Chen, Wangli; Zeng, Xingguo; Zhang, Hongbo
2017-10-01
In order to improve understanding of the topography of the Chang’E-3 landing site, 3D terrain models were reconstructed using PCAM images. PCAM (the panoramic camera) is a stereo camera system with a 27 cm baseline on board the Yutu rover. It obtained panoramic images at four detection sites and can achieve a resolution of 1.48 mm/pixel at 10 m, so the PCAM images reveal fine details of the detection region. In the method, SIFT is employed for feature description and feature matching. In addition to the collinearity equations, the measured baseline of the stereo system is also used in bundle adjustment to solve the orientation parameters of all images. Pair-wise depth map computation is then applied for dense surface reconstruction. Finally, a DTM of the detection region is generated. The DTM covers an area with a radius of about 20 m, centered at the location of the camera. By design, each wheel of the Yutu rover leaves three tracks on the lunar surface, with a width of 15 cm between the first and third track, and these tracks are clear and distinguishable in the images. We therefore chose the second detection site, where the wheel tracks are most recognizable, to evaluate the accuracy of the DTM. We measured the width of the wheel tracks every 1.5 m from the center of the detection region and obtained 13 measurements, avoiding areas where the wheel tracks are ambiguous. The results show that the mean wheel-track width is 0.155 m with a standard deviation of 0.007 m. Generally, the closer to the center, the more accurate the measurement of track width. This is because image deformation increases with distance from the camera, which degrades DTM quality in distant areas. In our work, the images of the four detection sites were adjusted independently, meaning there are no tie points between different sites, so deviations may exist between the locations of the same object measured from DTMs of adjacent detection sites.
Freehand three-dimensional ultrasound imaging of carotid artery using motion tracking technology.
Chung, Shao-Wen; Shih, Cho-Chiang; Huang, Chih-Chung
2017-02-01
Ultrasound imaging has been extensively used for determining the severity of carotid atherosclerotic stenosis. In particular, morphological characterization of carotid plaques can be performed for risk stratification of patients. However, using 2D ultrasound imaging to detect morphological changes in plaques has several limitations. Because the scan is performed on a single longitudinal cross-section, the selected 2D image cannot fully represent the morphology and volume of the plaque and vessel lumen. In addition, because the precise positions of 2D ultrasound images depend heavily on the radiologist's experience, serial long-term examinations of anti-atherosclerotic therapies have difficulty relocating the same corresponding planes using 2D B-mode images. This has led to the recent development of three-dimensional (3D) ultrasound imaging, which offers improved visualization and quantification of the complex morphologies of carotid plaques. In the present study, a freehand 3D ultrasound imaging technique based on optical motion tracking technology is proposed. Unlike other optical tracking systems, the marker is a small rigid body that is attached to the ultrasound probe and is tracked by eight high-performance digital cameras. The probe positions in 3D space coordinates are then calibrated at spatial and temporal resolutions of 10 μm and 0.01 s, respectively. The image segmentation procedure involves Otsu's method and the active contour model algorithm and accurately detects the contours of the carotid arteries. The proposed imaging technique was verified using normal-artery and atherosclerotic-stenosis phantoms. Human experiments involving freehand scanning of the carotid artery of a volunteer were also performed. The results indicated that, compared with manual segmentation, the lowest percentage errors of the proposed segmentation procedure were 7.8% and 9.1% for the external and internal carotid arteries, respectively.
Finally, the effect of hand shaking was compensated using the optical tracking system when reconstructing the 3D image.
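The first segmentation stage named above, Otsu's method, chooses the intensity threshold that maximizes between-class variance of the histogram; a compact sketch follows (the active-contour refinement stage is omitted, and the toy intensity values are assumptions).

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(np.ravel(img), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                # class-0 (background) weight
    w1 = 1.0 - w0                    # class-1 (foreground) weight
    mu = np.cumsum(p * centers)      # cumulative weighted class-0 mean
    mu_total = mu[-1]
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = ((mu_total * w0[valid] - mu[valid]) ** 2
                      / (w0[valid] * w1[valid]))
    return centers[int(np.argmax(between))]

# bimodal toy "image": dark lumen vs. brighter vessel wall
vals = np.concatenate([np.full(200, 20.0), np.full(100, 180.0)])
t = otsu_threshold(vals)
print(20.0 < t < 180.0)  # True: the threshold separates the two classes
```

In the pipeline described above, the Otsu result would serve as initialization for the active contour model, which then refines the lumen boundary.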
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low-frame-rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference between the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively.
Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
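The competing effects of frame averaging reported above (noise suppression vs. motion blurring) can be illustrated with a 1-D toy sequence; the frame size, target speed, and noise level are arbitrary assumptions rather than EPID parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frames(n, size=64, noise=0.5):
    """Noisy 1-D frames of a bright 4-pixel target moving 1 px/frame."""
    frames = np.zeros((n, size))
    for i in range(n):
        frames[i, 10 + i:14 + i] = 10.0
    return frames + rng.normal(0.0, noise, frames.shape)

def frame_average(frames, k):
    """Average groups of k consecutive frames (1/k the effective rate)."""
    n = len(frames) // k
    return frames[: n * k].reshape(n, k, -1).mean(axis=1)

raw = make_frames(24)
avg = frame_average(raw, 4)            # e.g. 12.87 Hz -> ~3.2 Hz

noise_raw = raw[0, 40:].std()          # background noise, single frame
noise_avg = avg[0, 40:].std()          # reduced roughly by sqrt(4)
width_raw = int((raw[0] > 2.0).sum())  # target support, single frame
width_avg = int((avg[0] > 2.0).sum())  # smeared over the 4-frame motion
print(noise_avg < noise_raw, width_avg > width_raw)
```

The averaged frames are quieter but the moving target is smeared over its 4-frame travel, which is exactly the trade-off the study quantifies against tracking error.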
IRLooK: an advanced mobile infrared signature measurement, data reduction, and analysis system
NASA Astrophysics Data System (ADS)
Cukur, Tamer; Altug, Yelda; Uzunoglu, Cihan; Kilic, Kayhan; Emir, Erdem
2007-04-01
Infrared signature measurement capability has a key role in the development of electronic warfare (EW) self-protection systems. In this article, the IRLooK system and its capabilities are introduced. IRLooK is a mobile infrared signature measurement system whose design, manufacturing, and integration were accomplished entirely by ASELSAN. IRLooK measures the infrared signatures of military and civil platforms such as fixed- and rotary-wing aircraft, tracked and wheeled vehicles, and navy vessels. IRLooK provides data acquisition, pre-processing, post-processing, analysis, storage, and archiving over the shortwave, mid-wave, and long-wave infrared spectrum by means of its high-resolution radiometric sensors and sophisticated software analysis tools. The sensor suite of the IRLooK system includes imaging and non-imaging radiometers and a spectroradiometer. Single or simultaneous multiple in-band measurements, as well as high radiant intensity measurements, can be performed. The system provides detailed information on the spectral, spatial, and temporal infrared signature characteristics of targets. It also determines IR decoy characteristics. The system is equipped with a high-quality, field-proven, two-axis tracking mount to facilitate target tracking. Manual or automatic tracking is achieved using a passive imaging tracker. The system also includes a high-quality weather station and field-calibration equipment, including cavity and extended-area blackbodies. The units composing the system are mounted on flat-bed trailers, and the complete system is designed to be transportable by large-body aircraft.
Ebe, Kazuyu; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji
2015-08-01
To develop and evaluate a new video image-based QA system, including in-house software, that can display the tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio-caudal direction (Y direction) were used as input to a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on a 2D ionization chamber array (MatriXX). The 4D modeling procedure was then performed on the target phantom during a reproduction of the patient's tumor motion. A substitute target with the patient's tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analysis software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated, and positional errors were observed. The authors' QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of the QA system. The patients' tumor motions were evaluated as waveforms, and the peak-to-peak distances were measured to verify their reproducibility. Thirteen of sixteen trajectories (81.3%) were successfully reproduced with the Quasar; the peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motion range of the Quasar, so 13 of the 16 trajectories were analyzed. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm.
The error values differed by less than 1 mm from 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for the visual illustration of the progress of the tracking state and for the quantification of positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
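The positional-error statistic used above (absolute mean difference plus two standard deviations) is straightforward to compute per trajectory; the array names and toy values below are illustrative.

```python
import numpy as np

def positional_error(target_centers, field_centers):
    """Absolute per-frame difference between the exposed-target and
    exposed-field centers, summarized as mean + 2 SD (the QA metric)."""
    d = np.abs(np.asarray(target_centers) - np.asarray(field_centers))
    return float(d.mean() + 2.0 * d.std())

# toy per-frame center positions in the Y (cranio-caudal) direction, mm
target = np.array([0.0, 1.0, 2.0, 3.0])
field = np.array([0.2, 1.0, 2.4, 3.0])
print(round(positional_error(target, field), 3))  # 0.482
```

Applying this to each of the 13 reproduced trajectories (with ~1156 video frames each) yields the 0.54-1.55 mm range reported in the abstract.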
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon
Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has recently been studied for US-guided intervention. However, existing techniques have limitations in either registration speed or performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As features for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer.
The correct corresponding image is then found among those candidates via real-time 2D registration based on a gradient-based similarity measure. Finally, if needed, they obtain the position information of the liver lesion using the 3D preoperative image to which the registered 2D preoperative slice belongs. Results: The proposed method was applied to 23 clinical datasets and quantitative evaluations were conducted. With the exception of one clinical dataset that included US images of extremely low quality, 22 datasets of various liver status were successfully applied in the evaluation. Experimental results showed that the registration error between the anatomical features of US and preoperative MR images is less than 3 mm on average. The lesion tracking error was also found to be less than 5 mm at maximum. Conclusions: A new system has been proposed for real-time registration between 2D US and successive multiple 3D preoperative MR/CT images of the liver and was applied for indirect lesion tracking for image-guided intervention. The system is fully automatic and robust even with images that had low quality due to patient status. Through visual examinations and quantitative evaluations, it was verified that the proposed system can provide high lesion tracking accuracy as well as high registration accuracy, at performance levels which were acceptable for various clinical applications.
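The candidate-slice selection step above relies on a gradient-based similarity measure; a minimal sketch is gradient correlation, which compares gradient fields rather than raw intensities (a reasonable choice across US and MR intensity characteristics). The measure, function names, and toy images are assumptions, not the authors' exact formulation.

```python
import numpy as np

def gradient_similarity(a, b):
    """Gradient-correlation similarity: correlate image gradient
    fields, which is more robust than raw intensities across
    modalities with different intensity characteristics."""
    gay, gax = np.gradient(a.astype(float))
    gby, gbx = np.gradient(b.astype(float))
    num = (gax * gbx + gay * gby).sum()
    den = np.sqrt((gax ** 2 + gay ** 2).sum() * (gbx ** 2 + gby ** 2).sum())
    return num / den if den > 0 else 0.0

def best_slice(live_us, candidates):
    """Pick the candidate preoperative slice most similar to the live frame."""
    return int(np.argmax([gradient_similarity(live_us, c) for c in candidates]))

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:32, 0:32]
live = np.exp(-((yy - 16) ** 2 + (xx - 12) ** 2) / 40.0)   # toy live frame
candidates = [rng.normal(size=(32, 32)),                    # unrelated slice
              live.copy(),                                  # correct slice
              np.roll(live, 8, axis=1)]                     # shifted slice
print(best_slice(live, candidates))  # 1: the matching candidate wins
```

In the proposed system this comparison runs per incoming US frame against the candidate slices selected from the 4D preoperative image.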
Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2013-03-01
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical-flow based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts by computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine [7] features were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10], and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 s vs. 38 s per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm.
The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
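The FOE-constrained idea above can be sketched for the pure-translation case: each flow vector must be collinear with the ray from the Focus of Expansion to its image point, giving one linear constraint per tracked point. This is a simplified illustration under a translation-only assumption, not the paper's full egomotion estimator.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares FOE for translational motion: collinearity of a
    flow vector (u, v) at (x, y) with the ray from the FOE (x0, y0)
    gives  v*x0 - u*y0 = v*x - u*y  for every tracked point."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# synthetic expansion field about a true FOE at (3, -2)
true_foe = np.array([3.0, -2.0])
pts = np.array([[10.0, 0.0], [0.0, 10.0], [5.0, 5.0], [-4.0, 7.0]])
depth_scale = np.array([0.2, 0.1, 0.3, 0.15])  # arbitrary per-point depths
flw = (pts - true_foe) * depth_scale[:, None]
print(np.round(estimate_foe(pts, flw), 6))  # recovers the true FOE
```

Constraining the egomotion solution to be consistent with a single FOE is what stabilizes the estimate against the noisy flow fields typical of endoscopic video.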
Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter
NASA Astrophysics Data System (ADS)
Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.
1991-06-01
We have developed software capable of three-dimensional tracking of objects in the brain volume, and of subsequently overlaying an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented visible-light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar x-ray fluoroscopy are presented.
NASA Astrophysics Data System (ADS)
Bauer, Daniel R.; Olafsson, Ragnar; Montilla, Leonardo G.; Witte, Russell S.
2010-02-01
Understanding the tumor microenvironment is critical to characterizing how cancers operate and predicting how they will eventually respond to treatment. The mouse window chamber model is an excellent tool for cancer research, because it enables high resolution tumor imaging and cross-validation using multiple modalities. We describe a novel multimodality imaging system that incorporates three dimensional (3D) photoacoustics with pulse echo ultrasound for imaging the tumor microenvironment and tracking tissue growth in mice. Three mice were implanted with a dorsal skin flap window chamber. PC-3 prostate tumor cells, expressing green fluorescent protein (GFP), were injected into the skin. The ensuing tumor invasion was mapped using photoacoustic and pulse echo imaging, as well as optical and fluorescent imaging for comparison and cross-validation. The photoacoustic imaging and spectroscopy system, consisting of a tunable (680-1000 nm) pulsed laser and 25 MHz ultrasound transducer, revealed near-infrared absorbing regions, primarily blood vessels. Pulse echo images, obtained simultaneously, provided details of the tumor microstructure and growth with 100-μm³ resolution. The tumor size in all three mice increased between three- and fivefold during 3+ weeks of imaging. Results were consistent with the optical and fluorescent images. Photoacoustic imaging revealed detailed maps of the tumor vasculature, whereas photoacoustic spectroscopy identified regions of oxygenated and deoxygenated blood vessels. The 3D photoacoustic and pulse echo imaging system provided complementary information to track the tumor microenvironment, evaluate new cancer therapies, and develop molecular imaging agents in vivo. Finally, these safe and noninvasive techniques are potentially applicable to human cancer imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, W; Damato, A; Viswanathan, A
2014-06-15
Purpose: To develop a novel active MR-tracking system that can provide accurate and rapid localization of brachytherapy catheters, and to assess its reliability and spatial accuracy in comparison to standard catheter digitization using MR images. Methods: An active MR tracker for brachytherapy was constructed by adding three printed-circuit micro-coils to the shaft of a commercial metallic stylet. A gel phantom with an embedded framework was built, into which fifteen 14-gauge catheters were placed, following either parallel or crossed paths. The tracker was inserted sequentially into each catheter, with MR tracking running continuously. Tracking was also performed during the tracker's removal from each catheter. Catheter trajectories measured from the insertion and removal procedures using the same micro-coil were compared, as well as trajectories obtained using different micro-coils. A 3D high-resolution MR image dataset of the phantom was acquired and imported into a treatment planning system (TPS) for catheter digitization. A comparison between MR-tracked positions and positions digitized from MR images in the TPS was performed. Results: MR tracking shows good consistency for varying catheter paths and for all micro-coils (mean difference ∼1.1 mm). The average distance between the MR-tracking trajectory and the catheter digitization from the MR images was 1.1 mm. Ambiguity in catheter assignment from images due to crossed paths was resolved by active tracking. When tracking was interleaved with imaging, real-time images were continuously acquired at the instantaneous tip positions and displayed on an external workstation. Conclusion: The active MR tracker may be used to provide an independent measurement of catheter location in the MR environment, potentially eliminating the need for subsequent CT. It may also be used to control real-time imaging of catheter placement.
This will enable MR-based brachytherapy planning of interstitial implants without ionizing radiation, with the potential to enable dosimetric guidance of catheter placement. We gratefully acknowledge support from the American Heart Association SDG 10SDG2610139, NIH 1R21CA158987-01A1, U41-RR019703, and R21 CA 167800, as well as BWH Department of Radiation Oncology post-doctoral fellowship support. Li Pan and Wesley Gilson are employees of Siemens Corporation, Corporate Technology. Ravi Seethamraju is an employee of Siemens Healthcare.
NASA Astrophysics Data System (ADS)
Dimiduk, D.; Caylor, M.; Williamson, D.; Larson, L.
1995-01-01
The High Altitude Balloon Experiment demonstration of Acquisition, Tracking, and Pointing (HABE-ATP) is a system built around a balloon-borne payload carried to a nominal 26-km altitude. The goal is laser tracking of thrusting theater and strategic missiles, followed by pointing of a surrogate laser weapon beam, with performance levels and a timeline traceable to operational laser weapon system requirements. This goal leads to an experiment system design that combines hardware from many technology areas: an optical telescope and IR sensors; an advanced angular inertial reference; a flexible digital control system with multiple levels of actuation; digital tracking processors incorporating real-time image analysis; and a pulsed, diode-pumped solid-state tracking laser. The system components have been selected to meet the overall experiment goals of tracking unmodified boosters at 50-200 km range. The ATP system on HABE must stabilize and control the relative line of sight between the platform and the unmodified target booster to 1 microrad accuracy. The angular pointing reference system supports both open-loop and closed-loop track modes; GPS provides an absolute position reference. The control system that positions the line of sight for the ATP system must sequence through accepting a state-vector handoff, closed-loop passive IR acquisition, passive IR intermediate fine track, active fine track, and finally aimpoint determination and maintenance modes. Line-of-sight stabilization to fine accuracy levels is accomplished by actuating wide-bandwidth fast steering mirrors (FSMs). These control loops off-load large-amplitude errors to the outer gimbal in order to remain within the limited angular throw of the FSMs. The SWIR acquisition and MWIR intermediate fine track sensors (both PtSi focal planes) image the signature of the rocket plume.
After Hard Body Handover (HBHO), active fine tracking is conducted with a visible focal plane viewing the laser-illuminated target rocket body. The track and fire control performance must be developed to the point that an aimpoint can be selected, maintained, and then track performance scored with a low-power 'surrogate' weapon beam. Extensive instrumentation monitors not only the optical sensors and the video data, but all aspects of each of the experiment subsystems such as the control system, the experiment flight vehicle, and the tracker. Because the system is balloon-borne and recoverable, it is expected to fly many times during its development program.
Object Acquisition and Tracking for Space-Based Surveillance
1991-11-27
on multiple image frames, and, accordingly, requires a smaller signal-to-noise ratio. It is sometimes referred to as track before detect, and can...smaller sensor optics. Both the traditional and track-before-detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
Edge-following algorithm for tracking geological features
NASA Technical Reports Server (NTRS)
Tietz, J. C.
1977-01-01
A sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from earth resources satellites. The technique eliminates expensive high-resolution cameras. The system might also be adaptable to monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or x-ray images.
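A circular-scan edge follower can be sketched on a synthetic vertical edge: at each step the tracker samples an arc ahead of the current point and moves to the sample whose interpolated intensity sits closest to the edge level. The sampling geometry, step size, and 0.5 edge level are illustrative assumptions, not the report's algorithm.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear interpolation at a sub-pixel (y, x) position."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

def follow_edge(img, start, heading, step=1.0, n_samples=9, n_steps=18):
    """Circular-scan edge follower: sample an arc ahead of the current
    point and move to the sample whose interpolated intensity is
    closest to the edge level (0.5 for a 0/1 image)."""
    y, x = start
    path = [(y, x)]
    for _ in range(n_steps):
        angles = heading + np.linspace(-np.pi / 3, np.pi / 3, n_samples)
        cand = [(y + step * np.sin(a), x + step * np.cos(a), a) for a in angles]
        errs = [abs(bilinear(img, cy, cx) - 0.5) for cy, cx, _ in cand]
        y, x, heading = cand[int(np.argmin(errs))]
        path.append((y, x))
    return np.array(path)

# synthetic scene: vertical coastline at x = 9.5 (water left, land right)
scene = np.zeros((30, 20))
scene[:, 10:] = 1.0
path = follow_edge(scene, start=(1.0, 9.5), heading=np.pi / 2)
print(np.round(path[-1], 3))  # ends near (19.0, 9.5): tracked down the edge
```

Restricting the scan to a forward arc is what makes the tracker sequential and cheap: it touches only a handful of pixels per step instead of processing full frames.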
Long-Term Tracking of a Specific Vehicle Using Airborne Optical Camera Systems
NASA Astrophysics Data System (ADS)
Kurz, F.; Rosenbaum, D.; Runge, H.; Cerra, D.; Mattyus, G.; Reinartz, P.
2016-06-01
In this paper we present two low cost, airborne sensor systems capable of long-term vehicle tracking. Based on the properties of the sensors, a method for automatic real-time, long-term tracking of individual vehicles is presented. This combines the detection and tracking of the vehicle in low frame rate image sequences and applies the lagged Cell Transmission Model (CTM) to handle longer tracking outages occurring in complex traffic situations, e.g. tunnels. The CTM model uses the traffic conditions in the proximities of the target vehicle and estimates its motion to predict the position where it reappears. The method is validated on an airborne image sequence acquired from a helicopter. Several reference vehicles are tracked within a range of 500m in a complex urban traffic situation. An artificial tracking outage of 240m is simulated, which is handled by the CTM. For this, all the vehicles in the close proximity are automatically detected and tracked to estimate the basic density-flow relations of the CTM model. Finally, the real and simulated trajectories of the reference vehicles in the outage are compared showing good correspondence also in congested traffic situations.
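A minimal Cell Transmission Model update, of the kind used above to carry the traffic state across an outage, moves flow between cells as the minimum of upstream demand and downstream supply. The triangular fundamental-diagram parameters below are arbitrary illustrative values; the paper uses a lagged CTM calibrated from the automatically tracked vehicles.

```python
import numpy as np

def ctm_step(rho, v_free, w_back, rho_jam, q_max, dt, dx):
    """One Cell Transmission Model update with a triangular fundamental
    diagram: flow across each cell boundary is the minimum of upstream
    demand and downstream supply."""
    demand = np.minimum(v_free * rho, q_max)
    supply = np.minimum(w_back * (rho_jam - rho), q_max)
    flow = np.minimum(demand[:-1], supply[1:])   # interior boundaries only
    new = rho.copy()
    new[:-1] -= flow * dt / dx                   # vehicles leaving each cell
    new[1:] += flow * dt / dx                    # vehicles entering downstream
    return new

rho = np.array([0.5, 0.1, 0.0, 0.0])             # initial density per cell
for _ in range(3):
    rho = ctm_step(rho, v_free=1.0, w_back=1.0, rho_jam=1.0,
                   q_max=0.5, dt=0.5, dx=1.0)
print(rho.sum())  # ≈ 0.6: vehicle count conserved on the closed segment
```

Propagating densities this way through an occluded stretch (e.g. a tunnel) yields the expected arrival position and time of the target vehicle at the far end.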
Precision targeting with a tracking adaptive optics scanning laser ophthalmoscope
NASA Astrophysics Data System (ADS)
Hammer, Daniel X.; Ferguson, R. Daniel; Bigelow, Chad E.; Iftimia, Nicusor V.; Ustun, Teoman E.; Noojin, Gary D.; Stolarski, David J.; Hodnett, Harvey M.; Imholte, Michelle L.; Kumru, Semih S.; McCall, Michelle N.; Toth, Cynthia A.; Rockwell, Benjamin A.
2006-02-01
Precise targeting of retinal structures including retinal pigment epithelial cells, feeder vessels, ganglion cells, photoreceptors, and other cells important for light transduction may enable earlier disease intervention with laser therapies and advanced methods for vision studies. A novel imaging system based upon scanning laser ophthalmoscopy (SLO) with adaptive optics (AO) and active image stabilization was designed, developed, and tested in humans and animals. An additional port allows delivery of aberration-corrected therapeutic/stimulus laser sources. The system design includes simultaneous presentation of non-AO, wide-field (~40 deg) and AO, high-magnification (1-2 deg) retinal scans easily positioned anywhere on the retina in a drag-and-drop manner. The AO optical design achieves an error of <0.45 waves (at 800 nm) over +/-6 deg on the retina. A MEMS-based deformable mirror (Boston Micromachines Inc.) is used for wave-front correction. The third-generation retinal tracking system achieves a bandwidth of greater than 1 kHz, allowing acquisition of stabilized AO images with an accuracy of ~10 μm. Normal adult human volunteers and animals with previously-placed lesions (cynomolgus monkeys) were tested to optimize the tracking instrumentation and to characterize AO imaging performance. Ultrafast laser pulses were delivered to monkeys to characterize the ability to precisely place lesions and stimulus beams. Other advanced features such as real-time image averaging, automatic high-resolution mosaic generation, and automatic blink detection and tracking re-lock were also tested. The system has the potential to become an important tool for clinicians and researchers for early detection and treatment of retinal diseases.
NASA Astrophysics Data System (ADS)
Guo, Bing; Documet, Jorge; Liu, Brent; King, Nelson; Shrestha, Rasu; Wang, Kevin; Huang, H. K.; Grant, Edward G.
2006-03-01
The paper describes the methodology for the clinical design and implementation of a Location Tracking and Verification System (LTVS) that has distinct benefits for the Imaging Department at the Healthcare Consultation Center II (HCCII), an outpatient imaging facility located on the USC Health Science Campus. A novel system for tracking and verification of patients and staff in a clinical environment using wireless and facial biometric technology to monitor and automatically identify patients and staff was developed in order to streamline patient workflow, protect against erroneous examinations and create a security zone to prevent and audit unauthorized access to patient healthcare data under the HIPAA mandate. This paper describes the system design and integration methodology based on initial clinical workflow studies within a clinical environment. An outpatient center was chosen as an initial first step for the development and implementation of this system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yang-Kyun; Sharp, Gregory C.; Gierga, David P.
2015-06-15
Purpose: Real-time kV projection streaming capability has recently become available for Elekta XVI version 5.0. This study aims to investigate the feasibility and accuracy of real-time fiducial marker tracking during CBCT acquisition with or without simultaneous VMAT delivery using a conventional Elekta linear accelerator. Methods: A client computer was connected to an on-board kV imaging system computer, which receives and processes projection images immediately after image acquisition. In-house marker tracking software based on FFT normalized cross-correlation was developed and installed on the client computer. Three gold fiducial markers 3 mm in length were implanted in a pelvis-shaped phantom 36 cm in width. The phantom was placed on a programmable motion platform oscillating in the anterior-posterior and superior-inferior directions simultaneously. The marker motion was tracked in real time for (1) a kV-only CBCT scan with the treatment beam off and (2) a kV CBCT scan during a 6-MV VMAT delivery. The exposure parameters per projection were 120 kVp and 1.6 mAs. Tracking accuracy was assessed by comparing superior-inferior positions between the programmed and tracked trajectories. Results: The projection images were successfully transferred to the client computer at a frequency of about 5 Hz. In the kV-only scan, highly accurate marker tracking was achieved over the entire range of cone-beam projection angles (detection rate / tracking error: 100.0% / 0.6±0.5 mm). In the kV-VMAT scan, MV scatter degraded image quality, particularly for lateral projections passing through the thickest part of the phantom (kV source angles ranging from 70° to 110° and from 250° to 290°), resulting in a reduced detection rate (90.5%). If the lateral projections are excluded, tracking performance was comparable to the kV-only case (detection rate / tracking error: 100.0% / 0.8±0.5 mm).
Conclusion: Our phantom study demonstrated a promising result for real-time motion tracking using a conventional Elekta linear accelerator. MV-scatter suppression is needed to improve tracking accuracy during MV delivery. This research is funded by a Motion Management Research Grant from Elekta.
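The FFT-based normalized cross-correlation (NCC) template matching described above can be sketched in a few lines. This is an illustrative Python sketch of generic NCC marker localization, not the authors' in-house software: for clarity it evaluates the correlation directly (a production tracker would compute the numerator with FFTs for speed), and the function names are ours.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Return the NCC map over all valid template placements."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    # All (th, tw) windows of the image, as a strided view
    windows = np.lib.stride_tricks.sliding_window_view(image, (th, tw))
    w = windows - windows.mean(axis=(-2, -1), keepdims=True)
    wnorm = np.sqrt((w ** 2).sum(axis=(-2, -1)))
    return (w * t).sum(axis=(-2, -1)) / (wnorm * tnorm + 1e-12)

def locate_marker(image, template):
    """Top-left corner (row, col) of the best-matching placement."""
    ncc = normalized_cross_correlation(image, template)
    return np.unravel_index(np.argmax(ncc), ncc.shape)
```

Because the NCC is normalized by local energy, the match score is robust to the slowly varying intensity changes that MV scatter adds to the projections.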
Chong, Kok-Keong; Wong, Chee-Woon; Siaw, Fei-Lu; Yew, Tiong-Keat; Ng, See-Seng; Liang, Meng-Suan; Lim, Yun-Seng; Lau, Sing-Liong
2009-01-01
A novel on-axis general sun-tracking formula has been integrated in the algorithm of an open-loop sun-tracking system in order to track the sun accurately and cost effectively. Sun-tracking errors due to installation defects of the 25 m2 prototype solar concentrator have been analyzed from recorded solar images with the use of a CCD camera. With the recorded data, misaligned angles from ideal azimuth-elevation axes have been determined and corrected by a straightforward changing of the parameters' values in the general formula of the tracking algorithm to improve the tracking accuracy to 2.99 mrad, which falls below the encoder resolution limit of 4.13 mrad. PMID:22408483
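The azimuth-elevation geometry underlying such open-loop trackers follows from standard spherical astronomy. The sketch below is our illustrative Python, not the paper's general on-axis formula: it computes solar elevation and azimuth (measured clockwise from north) from latitude, declination, and hour angle.

```python
import math

def sun_position(lat_deg, decl_deg, hour_angle_deg):
    """Solar elevation and azimuth in degrees from the standard
    spherical-astronomy relations used by open-loop sun trackers."""
    lat, dec, ha = (math.radians(x) for x in (lat_deg, decl_deg, hour_angle_deg))
    sin_el = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    el = math.asin(sin_el)
    cos_az = (math.sin(dec) - math.sin(lat) * sin_el) / (math.cos(lat) * math.cos(el))
    az = math.acos(max(-1.0, min(1.0, cos_az)))  # clamp against rounding
    if ha > 0:  # afternoon: sun is west of the meridian
        az = 2 * math.pi - az
    return math.degrees(el), math.degrees(az)
```

Misaligned axes shift these angles systematically, which is why the installation defects discussed above can be absorbed by adjusting the parameters of the tracking formula.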
Coded excitation ultrasonic needle tracking: An in vivo study.
Xia, Wenfeng; Ginsberg, Yuval; West, Simeon J; Nikitichev, Daniil I; Ourselin, Sebastien; David, Anna L; Desjardins, Adrien E
2016-07-01
Accurate and efficient guidance of medical devices to procedural targets lies at the heart of interventional procedures. Ultrasound imaging is commonly used for device guidance, but determining the location of the device tip can be challenging. Various methods have been proposed to track medical devices during ultrasound-guided procedures, but widespread clinical adoption has remained elusive. With ultrasonic tracking, the location of a medical device is determined by ultrasonic communication between the ultrasound imaging probe and a transducer integrated into the medical device. The signal-to-noise ratio (SNR) of the transducer data is an important determinant of the depth in tissue at which tracking can be performed. In this paper, the authors present a new generation of ultrasonic tracking in which coded excitation is used to improve the SNR without spatial averaging. A fiber optic hydrophone was integrated into the cannula of a 20 gauge insertion needle. This transducer received transmissions from the ultrasound imaging probe, and the data were processed to obtain a tracking image of the needle tip. Excitation using Barker or Golay codes was performed to improve the SNR, and conventional bipolar excitation was performed for comparison. The performance of the coded excitation ultrasonic tracking system was evaluated in an in vivo ovine model with insertions to the brachial plexus and the uterine cavity. Coded excitation significantly increased the SNRs of the tracking images, as compared with bipolar excitation. During an insertion to the brachial plexus, the SNR was increased by factors of 3.5 for Barker coding and 7.1 for Golay coding. During insertions into the uterine cavity, these factors ranged from 2.9 to 4.2 for Barker coding and 5.4 to 8.5 for Golay coding. The maximum SNR was 670, which was obtained with Golay coding during needle withdrawal from the brachial plexus. 
Range sidelobe artifacts were observed in tracking images obtained with Barker coded excitation, and they were visually absent with Golay coded excitation. The spatial tracking accuracy was unaffected by coded excitation. Coded excitation is a viable method for improving the SNR in ultrasonic tracking without compromising spatial accuracy. This method provided SNR increases that are consistent with theoretical expectations, even in the presence of physiological motion. With the ultrasonic tracking system in this study, the SNR increases will have direct clinical implications in a broad range of interventional procedures by improving visibility of medical devices at large depths.
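The sidelobe cancellation reported for Golay coding follows from the defining property of Golay complementary pairs: the sum of their aperiodic autocorrelations is an impulse. A minimal Python sketch (illustrative only; the recursive construction and function names are ours, not the paper's):

```python
import numpy as np

def golay_pair(n_bits):
    """Golay complementary pair of length 2**n_bits via the standard
    recursive construction (a, b) -> (a|b, a|-b)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_bits):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def summed_autocorrelation(a, b):
    """Sum of the aperiodic autocorrelations of the pair."""
    return np.correlate(a, a, mode="full") + np.correlate(b, b, mode="full")
```

Each code in the pair is transmitted separately and the two decoded responses are summed, which is why Golay coding removes range sidelobes while Barker coding, a single sequence with small but nonzero sidelobes, does not.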
Yang, Xiaochen; Clements, Logan W; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C; Dawant, Benoit M; Miga, Michael I
2017-07-01
Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereoimages from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached tracking rigid body that facilitates the recording of the position of the microscope via a commercial optical tracking system as it moves during the procedure. Point clouds, reconstructed under different microscope positions, are registered into the same space to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing our tracked microscope stereo-pair measure of mock vessel displacements to that of the measurement determined by the independent optically tracked stylus marking, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using tracked stereoscopic microscope as an alternative to laser range scanners to collect sufficient intraoperative information for brain shift correction.
Schwaab, Julia; Kurz, Christopher; Sarti, Cristina; Bongers, André; Schoenahl, Frédéric; Bert, Christoph; Debus, Jürgen; Parodi, Katia; Jenne, Jürgen Walter
2015-01-01
Target motion, particularly in the abdomen, due to respiration or patient movement is still a challenge in many diagnostic and therapeutic processes. Hence, methods to detect and compensate for this motion are required. Diagnostic ultrasound (US) represents a non-invasive and dose-free alternative to fluoroscopy, providing more information about internal target motion than a respiration belt or optical tracking. The goal of this project is to develop US-based motion tracking for real-time motion correction in radiation therapy and diagnostic imaging, notably in 4D positron emission tomography (PET). In this work, a workflow is established to enable the transformation of US tracking data to the coordinates of the treatment delivery or imaging system, even if the US probe is moving due to respiration. It is shown that the US tracking signal is as adequate for 4D PET image reconstruction as the clinically used respiration belt and provides additional opportunities in this regard. Furthermore, it is demonstrated that the US probe being within the PET field of view generally has no relevant influence on the image quality. The accuracy and precision of all the steps in the calibration workflow for US tracking-based 4D PET imaging are found to be in an acceptable range for clinical implementation. Finally, we show in vitro that US-based motion tracking in absolute room coordinates with a moving US transducer is feasible. PMID:26649277
NASA Astrophysics Data System (ADS)
Seeto, Wen Jun; Lipke, Elizabeth Ann
2016-03-01
Tracking of rolling cells via in vitro experiment is now commonly performed using customized computer programs. In most cases, two critical challenges continue to limit analysis of cell rolling data: long computation times due to the complexity of tracking algorithms and difficulty in accurately correlating a given cell with itself from one frame to the next, which is typically due to errors caused by cells that either come close in proximity to each other or come in contact with each other. In this paper, we have developed a sophisticated, yet simple and highly effective, rolling cell tracking system to address these two critical problems. This optical cell tracking analysis (OCTA) system first employs ImageJ for cell identification in each frame of a cell rolling video. A custom MATLAB code was written to use the geometric and positional information of all cells as the primary parameters for matching each individual cell with itself between consecutive frames and to avoid errors when tracking cells that come within close proximity to one another. Once the cells are matched, rolling velocity can be obtained for further analysis. The use of ImageJ for cell identification eliminates the need for high level MATLAB image processing knowledge. As a result, only fundamental MATLAB syntax is necessary for cell matching. OCTA has been implemented in the tracking of endothelial colony forming cell (ECFC) rolling under shear. The processing time needed to obtain tracked cell data from a 2 min ECFC rolling video recorded at 70 frames per second with a total of over 8000 frames is less than 6 min using a computer with an Intel® Core™ i7 CPU 2.80 GHz (8 CPUs). This cell tracking system benefits cell rolling analysis by substantially reducing the time required for post-acquisition data processing of high frame rate video recordings and preventing tracking errors when individual cells come in close proximity to one another.
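The frame-to-frame matching idea above, pairing each cell with itself in the next frame using geometric and positional information, can be sketched as a gated greedy nearest-neighbour search. This is illustrative Python only; the cell representation, gating thresholds, and function names are our assumptions, not the OCTA code:

```python
import numpy as np

def match_cells(prev, curr, max_dist=15.0, max_area_ratio=1.5):
    """Greedily match each cell in `prev` to its nearest unclaimed
    neighbour in `curr`, gated by distance and area similarity.
    Cells are dicts with 'pos' (x, y) and 'area'; returns index pairs."""
    matches, claimed = [], set()
    for i, p in enumerate(prev):
        best_j, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in claimed:
                continue
            d = float(np.hypot(p["pos"][0] - c["pos"][0],
                               p["pos"][1] - c["pos"][1]))
            ratio = max(p["area"], c["area"]) / min(p["area"], c["area"])
            if d < best_d and ratio <= max_area_ratio:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
            claimed.add(best_j)
    return matches
```

The distance gate and the claimed-set bookkeeping are what prevent two cells in close proximity from being swapped or double-assigned between frames; rolling velocity then follows from the matched displacements and the frame rate.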
An open-source framework for testing tracking devices using Lego Mindstorms
NASA Astrophysics Data System (ADS)
Jomier, Julien; Ibanez, Luis; Enquobahrie, Andinet; Pace, Danielle; Cleary, Kevin
2009-02-01
In this paper, we present an open-source framework for testing tracking devices in surgical navigation applications. At the core of image-guided intervention systems is the tracking interface that handles communication with the tracking device and gathers tracking information. Given that the correctness of tracking information is critical for protecting patient safety and for ensuring the successful execution of an intervention, the tracking software component needs to be thoroughly tested on a regular basis. Furthermore, with widespread use of extreme programming methodology that emphasizes continuous and incremental testing of application components, testing design becomes critical. While it is easy to automate most of the testing process, it is often more difficult to test components that require manual intervention such as tracking device. Our framework consists of a robotic arm built from a set of Lego Mindstorms and an open-source toolkit written in C++ to control the robot movements and assess the accuracy of the tracking devices. The application program interface (API) is cross-platform and runs on Windows, Linux and MacOS. We applied this framework for the continuous testing of the Image-Guided Surgery Toolkit (IGSTK), an open-source toolkit for image-guided surgery and shown that regression testing on tracking devices can be performed at low cost and improve significantly the quality of the software.
Larsson, Matilda; Heyde, Brecht; Kremer, Florence; Brodin, Lars-Åke; D'hooge, Jan
2015-02-01
Ultrasound speckle tracking for carotid strain assessment has in the past decade gained interest in studies of arterial stiffness and cardiovascular disease. The aim of this study was to validate and directly contrast carotid strain assessment by speckle tracking applied to clinical and high-frequency ultrasound images in vitro. Four polyvinyl alcohol phantoms mimicking the carotid artery were constructed with different mechanical properties and connected to a pump generating carotid flow profiles. Gray-scale ultrasound long- and short-axis images of the phantoms were obtained using a standard clinical ultrasound system, Vivid 7 (GE Healthcare, Horten, Norway), and a high-frequency ultrasound system, Vevo 2100 (FUJIFILM VisualSonics, Toronto, Canada), with linear-array transducers (12L/MS250). Radial, longitudinal and circumferential strains were estimated using an in-house speckle tracking algorithm and compared with reference strains acquired by sonomicrometry. Overall, the estimated strain corresponded well with the reference strain. The correlation between estimated peak strain in clinical ultrasound images and reference strain was 0.91 (p<0.001) for radial strain, 0.73 (p<0.001) for longitudinal strain and 0.90 (p<0.001) for circumferential strain; for high-frequency ultrasound images it was 0.95 (p<0.001) for radial strain, 0.93 (p<0.001) for longitudinal strain and 0.90 (p<0.001) for circumferential strain. A significantly larger bias and root-mean-square error were found for circumferential strain estimation on clinical ultrasound images compared with high-frequency ultrasound images, but no significant difference in bias and root-mean-square error was found for radial and longitudinal strain when comparing estimation on clinical and high-frequency ultrasound images. The agreement between sonomicrometry and speckle tracking demonstrates that carotid strain assessment by ultrasound speckle tracking is feasible. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
OpenCV and TYZX : video surveillance for tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Jim; Spencer, Andrew; Chu, Eric
2008-08-01
As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real-time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.
Güler, Özgür; Yaniv, Ziv
2012-01-01
Teaching the key technical aspects of image-guided interventions using a hands-on approach is a challenging task, primarily due to the high cost of, and lack of access to, imaging and tracking systems. We provide a software and data infrastructure which addresses both challenges. Our infrastructure allows students, patients, and clinicians to develop an understanding of the key technologies by using them, and possibly by developing additional components and integrating them into a simple navigation system which we provide. Our approach requires minimal hardware: LEGO blocks to construct a phantom, for which we provide CT scans, and a webcam which, when combined with our software, provides the functionality of a tracking system. A premise of this approach is that the tracking accuracy is sufficient for our purpose. We evaluate the accuracy provided by a consumer-grade webcam and show that it is sufficient for educational use. We provide an open-source implementation of all the components required for basic image-guided navigation as part of the Image-Guided Surgery Toolkit (IGSTK). It has long been known that in education there is no substitute for hands-on experience; to quote Sophocles, "One must learn by doing the thing; for though you think you know it, you have no certainty, until you try." Our work provides this missing capability in the context of image-guided navigation, enabling a wide audience to learn and experience the use of a navigation system.
Multi-mode Intravascular RF Coil for MRI-guided Interventions
Kurpad, Krishna N.; Unal, Orhan
2011-01-01
Purpose: To demonstrate the feasibility of using a single intravascular RF probe, connected to the external MRI system via a single coaxial cable, to perform active tip tracking, catheter visualization, and high-SNR intravascular imaging. Materials and Methods: A multi-mode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. Results: The multi-mode coil behaves as an inductively-coupled transmit coil. A forward-looking capability of 6 mm is measured. A greater than 3-fold increase in SNR compared to conventional imaging using an optimized external coil is demonstrated. Simultaneous active tip tracking and catheter visualization is demonstrated. Conclusions: It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high-SNR imaging using a single multi-mode intravascular RF coil that is connected to the external system via a single coaxial cable. PMID:21448969
The seam visual tracking method for large structures
NASA Astrophysics Data System (ADS)
Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong
2017-10-01
In this paper, a compact and flexible weld seam visual tracking method is proposed. First, because a fixed tracking height can cause interference between the vision device and the work-piece to be welded, a weld vision system with a compact structure and an adjustable tracking height is developed. Second, by analyzing the relative spatial pose among the camera, the laser and the work-piece to be welded, and applying the theory of relative geometric imaging, a mathematical model relating the image feature parameters to the three-dimensional trajectory of the assembly gap to be welded is established. Third, the visual imaging parameters of the line-structured light are optimized through experiments on the weld structure. Fourth, the line-structured light scatters in bright metal regions, and surface scratches on the work-piece also appear bright in the image; these disturbances seriously degrade computational efficiency. An algorithm based on the human visual attention mechanism is therefore used to extract the weld features efficiently and stably. Finally, experiments verify that the proposed compact and flexible weld tracking method achieves a tracking accuracy of 0.5 mm when tracking large structural parts, giving it broad prospects for industrial application.
NASA Astrophysics Data System (ADS)
Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.
2015-07-01
We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept-source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
Wei, Peng-Hu; Cong, Fei; Chen, Ge; Li, Ming-Chu; Yu, Xin-Guang; Bao, Yu-Hai
2017-02-01
Diffusion tensor imaging-based navigation is unable to resolve crossing fibers or to determine with accuracy the fanning, origin, and termination of fibers. It is important to improve the accuracy of localizing white matter fibers for improved surgical approaches. We propose a solution to this problem using navigation based on track density imaging extracted from high-definition fiber tractography (HDFT). A 28-year-old asymptomatic female patient with a left-lateral ventricle meningioma was enrolled in the present study. Language and visual tests, magnetic resonance imaging findings, both preoperative and postoperative HDFT, and the intraoperative navigation and surgery process are presented. Track density images were extracted from tracts derived using full q-space (514 directions) diffusion spectrum imaging (DSI) and integrated into a neuronavigation system. Navigation accuracy was verified via intraoperative records and postoperative DSI tractography, as well as a functional examination. DSI successfully represented the shape and range of the Meyer loop and arcuate fasciculus. Extracted track density images from the DSI were successfully integrated into the navigation system. The relationship between the operation channel and surrounding tracts was consistent with the postoperative findings, and the patient was functionally intact after the surgery. DSI-based TDI navigation allows for the visualization of anatomic features such as fanning and angling and helps to identify the range of a given tract. Moreover, our results show that our HDFT navigation method is a promising technique that preserves neural function. Copyright © 2016 Elsevier Inc. All rights reserved.
An object detection and tracking system for unmanned surface vehicles
NASA Astrophysics Data System (ADS)
Yang, Jian; Xiao, Yang; Fang, Zhiwen; Zhang, Naiwen; Wang, Li; Li, Tao
2017-10-01
Object detection and tracking are critical parts of unmanned surface vehicles (USVs) for achieving automatic obstacle avoidance. Off-the-shelf object detection methods have achieved impressive accuracy on public datasets, though they still meet bottlenecks in practice, such as high time consumption and low detection quality. In this paper, we propose a novel system for USVs, which is able to locate objects more accurately while being fast and stable simultaneously. Firstly, we employ Faster R-CNN to acquire several initial raw bounding boxes. Secondly, the image is segmented into a few superpixels. For each initial box, the superpixels inside are grouped into a whole according to a combination strategy, and a new box is thereafter generated as the circumscribed bounding box of the final superpixel group. Thirdly, we utilize KCF to track these objects; after several frames, Faster R-CNN is again used to re-detect objects inside the tracked boxes to prevent tracking failure as well as to remove empty boxes. Finally, we utilize Faster R-CNN to detect objects in the next image and refine the object boxes by repeating the second module of our system. The experimental results demonstrate that our system is fast, robust and accurate, and can be applied to USVs in practice.
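The superpixel-based box refinement in the second module can be sketched as follows, assuming a precomputed superpixel label map. This is illustrative Python; the grouping criterion used here, centroid containment, is our simplification of the paper's combination strategy:

```python
import numpy as np

def refine_box(box, labels):
    """Refine an initial (x0, y0, x1, y1) box: keep every superpixel
    whose centroid falls inside the box and return the circumscribed
    bounding box of their union. `labels` is an H x W superpixel map."""
    x0, y0, x1, y1 = box
    keep = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        cx, cy = xs.mean(), ys.mean()
        if x0 <= cx <= x1 and y0 <= cy <= y1:
            keep.append(lab)
    ys, xs = np.nonzero(np.isin(labels, keep))
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Snapping the detector's raw box to superpixel boundaries is what tightens loose detections, since superpixel edges tend to follow true object contours.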
Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging
NASA Astrophysics Data System (ADS)
Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.
2008-12-01
To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. 
In general, highly accurate real-time tracking of implanted markers using hybrid MV-kV imaging is achievable and the technique should be useful to improve the beam targeting accuracy of arc therapy.
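The MV-kV triangulation step, recovering a 3D marker position from rays back-projected from the two imagers, can be sketched as the midpoint of closest approach between the rays. This is a generic stereo triangulation sketch, not the authors' calibrated pipeline:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between two rays x = p + s*d,
    e.g. one back-projected from the MV imager and one from the kV imager."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for the ray parameters minimising |(p1 + s1*d1) - (p2 + s2*d2)|
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b  # zero only for parallel rays
    s1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    s2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + s1 * d1) + (p2 + s2 * d2))
```

The phantom calibration described above supplies exactly what this needs: the transformation from imaging pixels to the source positions and ray directions in 3D world coordinates.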
von Diezmann, Alex; Shechtman, Yoav; Moerner, W. E.
2017-01-01
Single-molecule super-resolution fluorescence microscopy and single-particle tracking are two imaging modalities that illuminate the properties of cells and materials on spatial scales down to tens of nanometers, or with dynamical information about nanoscale particle motion in the millisecond range, respectively. These methods generally use wide-field microscopes and two-dimensional camera detectors to localize molecules to much higher precision than the diffraction limit. Given the limited total photons available from each single-molecule label, both modalities require careful mathematical analysis and image processing. Much more information can be obtained about the system under study by extending to three-dimensional (3D) single-molecule localization: without this capability, visualization of structures or motions extending in the axial direction can easily be missed or confused, compromising scientific understanding. A variety of methods for obtaining both 3D super-resolution images and 3D tracking information have been devised, each with their own strengths and weaknesses. These include imaging of multiple focal planes, point-spread-function engineering, and interferometric detection. These methods may be compared based on their ability to provide accurate and precise position information of single-molecule emitters with limited photons. To successfully apply and further develop these methods, it is essential to consider many practical concerns, including the effects of optical aberrations, field-dependence in the imaging system, fluorophore labeling density, and registration between different color channels. Selected examples of 3D super-resolution imaging and tracking are described for illustration from a variety of biological contexts and with a variety of methods, demonstrating the power of 3D localization for understanding complex systems. PMID:28151646
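The core localization step common to these modalities, estimating an emitter's sub-pixel position from its pixelated point-spread function, can be illustrated with an intensity-weighted centroid. Real systems typically fit a Gaussian or an engineered PSF model instead; the sketch, its parameters, and its function names are our assumptions:

```python
import numpy as np

def psf_image(x0, y0, size=15, sigma=1.5, photons=1000.0):
    """Ideal (noise-free) pixelated Gaussian PSF centred at (x0, y0)."""
    ys, xs = np.mgrid[0:size, 0:size]
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return photons * g / g.sum()

def localize(img):
    """Sub-pixel emitter position as the intensity-weighted centroid."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total
```

With shot noise and background included, the achievable precision scales roughly with the PSF width divided by the square root of the collected photons, which is why the limited photon budget of a single fluorophore dominates the design trade-offs discussed above.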
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir
2017-01-01
Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659
Tracking and characterizing the head motion of unanaesthetized rats in positron emission tomography
Kyme, Andre; Meikle, Steven; Baldock, Clive; Fulton, Roger
2012-01-01
Positron emission tomography (PET) is an important in vivo molecular imaging technique for translational research. Imaging unanaesthetized rats using motion-compensated PET avoids the confounding impact of anaesthetic drugs and enables animals to be imaged during normal or evoked behaviour. However, there is little published data on the nature of rat head motion to inform the design of suitable marker-based motion-tracking set-ups for brain imaging—specifically, set-ups that afford close to uninterrupted tracking. We performed a systematic study of rat head motion parameters for unanaesthetized tube-bound and freely moving rats with a view to designing suitable motion-tracking set-ups in each case. For tube-bound rats, using a single appropriately placed binocular tracker, uninterrupted tracking was possible greater than 95 per cent of the time. For freely moving rats, simulations and measurements of a live subject indicated that two opposed binocular trackers are sufficient (less than 10% interruption to tracking) for a wide variety of behaviour types. We conclude that reliable tracking of head pose can be achieved with marker-based optical-motion-tracking systems for both tube-bound and freely moving rats undergoing PET studies without sedation. PMID:22718992
Hatt, Charles R.; Jain, Ameet K.; Parthasarathy, Vijay; Lang, Andrew; Raval, Amish N.
2014-01-01
Myocardial infarction (MI) is one of the leading causes of death in the world. Small animal studies have shown that stem-cell therapy offers dramatic functional improvement post-MI. An endomyocardial catheter injection approach to therapeutic agent delivery has been proposed to improve efficacy through increased cell retention. Accurate targeting is critical for reaching areas of greatest therapeutic potential while avoiding a life-threatening myocardial perforation. Multimodal image fusion has been proposed as a way to improve these procedures by augmenting traditional intra-operative imaging modalities with high resolution pre-procedural images. Previous approaches have suffered from a lack of real-time tissue imaging and dependence on X-ray imaging to track devices, leading to increased ionizing radiation dose. In this paper, we present a new image fusion system for catheter-based targeted delivery of therapeutic agents. The system registers real-time 3D echocardiography, magnetic resonance, X-ray, and electromagnetic sensor tracking within a single flexible framework. All system calibrations and registrations were validated and found to have target registration errors less than 5 mm in the worst case. Injection accuracy was validated in a motion enabled cardiac injection phantom, where targeting accuracy ranged from 0.57 to 3.81 mm. Clinical feasibility was demonstrated with in-vivo swine experiments, where injections were successfully made into targeted regions of the heart. PMID:23561056
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed and it is noted that tracking speed is in the 50-75 pixels/s range.
Operation of a Cartesian Robotic System in a Compact Microscope with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2006-01-01
A Compact Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard-sphere experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant but difficult to detect. The CMIS system is a machine vision system which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
Advanced autostereoscopic display for G-7 pilot project
NASA Astrophysics Data System (ADS)
Hattori, Tomohiko; Ishigaki, Takeo; Shimamoto, Kazuhiro; Sawaki, Akiko; Ishiguchi, Tsuneo; Kobayashi, Hiromi
1999-05-01
An advanced autostereoscopic display is described that permits the observation of a stereo pair by several persons simultaneously, without special glasses or any kind of head-tracking device worn by the viewers. The system is composed of a right-eye system, a left-eye system and a sophisticated head-tracking system. In each eye system, a transparent-type color liquid crystal imaging plate is used with a special backlight unit. The backlight unit consists of a monochrome 2D display and a large-format convex lens, and it distributes light only to the correct eye of each viewer. The right-eye perspective system is combined with the left-eye perspective system by a half mirror in order to function as a time-parallel stereoscopic system. The viewer's IR image is taken through and focused by the large-format convex lens and fed back to the backlight as a modulated binary half-face image. The autostereoscopic display employs the TTL method for accurate head tracking. The system was operated as a stereoscopic TV phone between the Department of Telemedicine at Duke University and the Department of Radiology at Nagoya University School of Medicine, using a high-speed digital line of GIBN. Applications are also described in this paper.
NASA Astrophysics Data System (ADS)
Duffy, M.; Richardson, T. J.; Craythorne, E.; Mallipeddi, R.; Coleman, A. J.
2014-02-01
A system has been developed to assess the feasibility of using motion tracking to enable pre-surgical margin mapping of basal cell carcinoma (BCC) in the clinic using optical coherence tomography (OCT). This system consists of a commercial OCT imaging system (the VivoSight 1500, MDL Ltd., Orpington, UK), which has been adapted to incorporate a webcam and a single-sensor electromagnetic positional tracking module (the Flock of Birds, Ascension Technology Corp, Vermont, USA). A supporting software interface has also been developed which allows positional data to be captured and projected onto a 2D dermoscopic image in real time. Initial results using a stationary test phantom are encouraging, with maximum errors in the projected map in the order of 1-2 mm. Initial clinical results were poor due to motion artefact, despite attempts to stabilise the patient. However, the authors present several suggested modifications that are expected to reduce the effects of motion artefact and improve the overall accuracy and clinical usability of the system.
Yang, Fan; Paindavoine, M
2003-01-01
This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare the performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of the hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for images of size 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
LabVIEW application for motion tracking using USB camera
NASA Astrophysics Data System (ADS)
Rob, R.; Tirian, G. O.; Panoiu, M.
2017-05-01
The technical state of the contact line and also the additional equipment in electric rail transport is very important for realizing the repairing and maintenance of the contact line. During its functioning, the pantograph motion must stay in standard limits. Present paper proposes a LabVIEW application which is able to track in real time the motion of a laboratory pantograph and also to acquire the tracking images. An USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system which simulates the real motion. The tracking parameters are the horizontally motion (zigzag) and the vertically motion which can be studied in separate diagrams. The LabVIEW application requires appropriate tool-kits for vision development. Therefore the paper describes the subroutines that are especially programmed for real-time image acquisition and also for data processing.
Sword, Charles K.
2000-01-01
The present invention relates to an ultrasonic scanner system and method for the imaging of a part system, the scanner comprising: a probe assembly spaced apart from the surface of the part including at least two tracking signals for emitting radiation and a transmitter for emitting ultrasonic waves onto a surface in order to induce at least a portion of the waves to be reflected from the part, at least one detector for receiving the radiation wherein the detector is positioned to receive the radiation from the tracking signals, an analyzer for recognizing a three-dimensional location of the tracking signals based on the emitted radiation, a differential converter for generating an output signal representative of the waveform of the reflected waves, and a device such as a computer for relating said tracking signal location with the output signal and projecting an image of the resulting data. The scanner and method are particularly useful to acquire ultrasonic inspection data by scanning the probe over a complex part surface in an arbitrary scanning pattern.
Computer-aided target tracking in motion analysis studies
NASA Astrophysics Data System (ADS)
Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.
1990-08-01
Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
Study of moving object detecting and tracking algorithm for video surveillance system
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhang, Rongfu
2010-10-01
This paper describes a specific process of moving-target detection and tracking in video surveillance. Obtaining a high-quality background is the key to achieving difference-based target detection in video surveillance. The paper builds a clean background with a block-segmentation method and detects moving targets by background differencing; after a series of treatments, a more complete object can be extracted from the original image and then located with its smallest bounding rectangle. In video surveillance systems, camera delay and other factors lead to tracking lag, so a Kalman filter model based on template matching is proposed. Using the predictive and estimation capability of the Kalman filter, the position where the center of the smallest bounding rectangle may appear at the next moment is predicted; template matching is then performed in a region centered on this predicted position, and by calculating the cross-correlation similarity between the current image and the reference image, the best matching center can be determined. Narrowing the search scope reduces the search time, thereby achieving fast tracking.
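The predict-search-update loop described above can be sketched on synthetic frames. Everything here (frame size, patch, noise covariances) is an illustrative assumption, not the paper's implementation; the point is the structure: predict with a constant-velocity Kalman model, search the template only near the prediction, then update with the match result as the measurement.

```python
import numpy as np

def make_frame(pos, size=64):
    """Synthetic frame: a 5x5 bright patch on a dark background."""
    f = np.zeros((size, size))
    y, x = pos
    f[y:y+5, x:x+5] = 1.0
    return f

def match_template(frame, tmpl, center, radius=4):
    """SSD template match restricted to a window around the Kalman prediction."""
    best, best_pos = np.inf, center
    cy, cx = center
    for y in range(max(cy - radius, 0), min(cy + radius + 1, frame.shape[0] - 5)):
        for x in range(max(cx - radius, 0), min(cx + radius + 1, frame.shape[1] - 5)):
            ssd = np.sum((frame[y:y+5, x:x+5] - tmpl)**2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# Constant-velocity Kalman filter (state: y, x, vy, vx).
F = np.eye(4); F[0, 2] = F[1, 3] = 1.0          # state transition
H = np.eye(2, 4)                                 # we observe position only
Q = np.eye(4) * 1e-3                             # process noise
R = np.eye(2) * 0.5                              # measurement noise
x_est = np.array([10., 10., 0., 0.]); P = np.eye(4)

tmpl = make_frame((0, 0))[0:5, 0:5]              # reference template
track = []
for t in range(10):
    true_pos = (10 + 2*t, 10 + t)                # target moves (2, 1) px/frame
    frame = make_frame(true_pos)
    # Predict, then search for the template only around the predicted position.
    x_est = F @ x_est; P = F @ P @ F.T + Q
    z = np.array(match_template(frame, tmpl, (int(x_est[0]), int(x_est[1]))), float)
    # Update with the matched center as the measurement.
    S = H @ P @ H.T + R; K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est); P = (np.eye(4) - K @ H) @ P
    track.append(tuple(z.astype(int)))
```

The narrowed search window around the prediction is exactly what gives the speed-up the abstract claims: matching cost scales with the window area, not the frame area.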
Fuzzy logic control for camera tracking system
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant
1992-01-01
A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
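A minimal sketch of the fuzzy control idea: triangular memberships over the target's horizontal pixel error in the camera image, singleton outputs per rule, and weighted-average defuzzification to a pan rate. The membership breakpoints, rule base, and output values are illustrative assumptions; the abstract does not specify the actual rule base or the fuzzy-chip implementation.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pan(err_px):
    """Map horizontal pixel error of the target to a pan rate (deg/s).

    Rules: error LEFT -> pan LEFT, ZERO -> HOLD, RIGHT -> pan RIGHT.
    Defuzzify by the weighted average of each rule's output singleton.
    """
    err_px = max(-150.0, min(150.0, err_px))     # clamp into the covered range
    mu = {
        'left':  tri(err_px, -200, -100, 0),
        'zero':  tri(err_px, -100, 0, 100),
        'right': tri(err_px, 0, 100, 200),
    }
    out = {'left': -5.0, 'zero': 0.0, 'right': 5.0}   # output singletons, deg/s
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0
```

The same structure, duplicated for the vertical axis, would yield the tilt command; overlapping memberships make the response vary smoothly with the error, which is the appeal of fuzzy control for a jitter-sensitive pan/tilt mount.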
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
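The PSO component can be illustrated with a bare global-best PSO minimizing a test function. The parameter values are conventional textbook choices, not the paper's, and the coupling to the snake model's energy function is omitted.

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best PSO: each velocity blends inertia, a pull toward the
    particle's own best point, and a pull toward the swarm's best point."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    w, c1, c2 = 0.72, 1.49, 1.49            # conventional inertia/attraction gains
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, fmin = pso(lambda p: np.sum((p - 1.2)**2))   # minimum at (1.2, 1.2)
```

In a PSO-Snake hybrid, `f` would be the snake's energy for a candidate contour configuration rather than a toy quadratic, with the swarm exploring contour placements.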
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Y; Mutic, S; Du, D
Purpose: To evaluate the feasibility of using the weighted hybrid iterative spiral k-space encoded estimation (WHISKEE) technique to improve the spatial resolution of tracking images for onboard MR image guided radiation therapy (MR-IGRT). Methods: MR tracking images of the abdomen and pelvis had been acquired from healthy volunteers using the ViewRay onboard MR-IGRT system (ViewRay Inc., Oakwood Village, OH) at a spatial resolution of 2.0 mm × 2.0 mm × 5.0 mm. The tracking MR images were acquired using the TrueFISP sequence. The temporal resolution had to be traded off to 2 frames per second (FPS) to achieve the 2.0 mm in-plane spatial resolution. All MR images were imported into the MATLAB software. K-space data were synthesized through the Fourier transform of the MR images. A mask was created to select k-space points that corresponded to the under-sampled spiral k-space trajectory with an acceleration (or undersampling) factor of 3. The mask was applied to the fully sampled k-space data to synthesize the undersampled k-space data. The WHISKEE method was applied to the synthesized undersampled k-space data to reconstruct tracking MR images at 6 FPS. As a comparison, the undersampled k-space data were also reconstructed using the zero-padding technique. The reconstructed images were compared to the original image. The relative reconstruction error was evaluated as the percentage of the norm of the difference image over the norm of the original image. Results: Compared to the zero-padding technique, the WHISKEE method was able to reconstruct MR images with better image quality. It significantly reduced the relative reconstruction error from 39.5% to 3.1% for the pelvis image and from 41.5% to 4.6% for the abdomen image at an acceleration factor of 3. Conclusion: We demonstrated that it was possible to use the WHISKEE method to expedite MR image acquisition for onboard MR-IGRT systems to achieve good spatial and temporal resolutions simultaneously. Y. Hu and O. Green receive travel reimbursement from ViewRay. S. Mutic has consulting and research agreements with ViewRay. Q. Zeng, R. Nana, J.L. Patrick, S. Shvartsman and J.F. Dempsey are ViewRay employees.
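The study's baseline pipeline, synthesizing k-space by Fourier transform, masking it to mimic undersampling, reconstructing by zero-filling, and scoring with a relative error norm, can be sketched as follows. The test image and the mask are illustrative stand-ins; the paper's actual spiral trajectory and the WHISKEE reconstruction itself are not reproduced here.

```python
import numpy as np

def relative_error(recon, orig):
    """Reconstruction error as in the abstract: ||recon - orig|| / ||orig||."""
    return np.linalg.norm(recon - orig) / np.linalg.norm(orig)

# Synthetic "tracking image": a smooth blob on a 64x64 grid.
y, x = np.indices((64, 64))
img = np.exp(-((x - 32)**2 + (y - 28)**2) / 60.0)

# Synthesize k-space by Fourier transform, DC centered.
kspace = np.fft.fftshift(np.fft.fft2(img))

# Binary mask keeping a fraction of k-space (a crude stand-in for the spiral
# trajectory): retain only the low-frequency center.
mask = np.zeros((64, 64))
mask[32 - 10 : 32 + 11, 32 - 10 : 32 + 11] = 1.0

# Zero-filling ("zero-padding") reconstruction of the undersampled data.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
err = relative_error(recon, img)
```

For this smooth test blob the low-frequency mask loses little energy, so the error is small; on real anatomy with sharp edges, zero-filling blurs and aliases, which is the gap an iterative method like WHISKEE is meant to close.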
Cadaveric in-situ testing of optical coherence tomography system-based skull base surgery guidance
NASA Astrophysics Data System (ADS)
Sun, Cuiru; Khan, Osaama H.; Siegler, Peter; Jivraj, Jamil; Wong, Ronnie; Yang, Victor X. D.
2015-03-01
Optical Coherence Tomography (OCT) has extensive potential for producing clinical impact in the field of neurological diseases. A neurosurgical OCT hand-held forward-viewing probe in Bayonet shape has been developed. In this study, we test the feasibility of integrating this imaging probe with modern navigation technology for guidance and monitoring of skull base surgery. Cadaver heads were used to simulate relevant surgical approaches for treatment of sellar, parasellar and skull base pathology. A high-resolution 3D CT scan was performed on the cadaver head to provide baseline data for navigation. The cadaver head was mounted on existing 3- or 4-point fixation systems. Tracking markers were attached to the OCT probe and the surgeon-probe-OCT interface was calibrated. 2D OCT images were shown in real time together with the optical tracking images to the surgeon during surgery. The intraoperative video and multimodality imaging data set, consisting of real-time OCT images and OCT probe location registered to the neurosurgical navigation, were assessed. The integration of intraoperative OCT imaging with navigation technology provides the surgeon with updated image information, which is important for dealing with tissue shifts and deformations during surgery. Preliminary results demonstrate that the clinical neurosurgical navigation system can provide gross anatomical localization of the hand-held OCT probe. The near-histological imaging resolution of intraoperative OCT can improve the identification of microstructural/morphological differences. The OCT imaging data, combined with neurosurgical navigation tracking, have the potential to improve image interpretation and the precision and accuracy of the therapeutic procedure.
NASA Astrophysics Data System (ADS)
Dubuque, Shaun; Coffman, Thayne; McCarley, Paul; Bovik, A. C.; Thomas, C. William
2009-05-01
Foveated imaging has been explored for compression and tele-presence, but gaps exist in the study of foveated imaging applied to acquisition and tracking systems. Results are presented from two sets of experiments comparing simple foveated and uniform resolution targeting (acquisition and tracking) algorithms. The first experiments measure acquisition performance when locating Gabor wavelet targets in noise, with fovea placement driven by a mutual information measure. The foveated approach is shown to have lower detection delay than a notional uniform resolution approach when using video that consumes equivalent bandwidth. The second experiments compare the accuracy of target position estimates from foveated and uniform resolution tracking algorithms. A technique is developed to select foveation parameters that minimize error in Kalman filter state estimates. Foveated tracking is shown to consistently outperform uniform resolution tracking on an abstract multiple target task when using video that consumes equivalent bandwidth. Performance is also compared to uniform resolution processing without bandwidth limitations. In both experiments, superior performance is achieved at a given bandwidth by foveated processing because limited resources are allocated intelligently to maximize operational performance. These findings indicate the potential for operational performance improvements over uniform resolution systems in both acquisition and tracking tasks.
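The core trade-off in the experiments above, full resolution at the fovea and reduced resolution in the periphery for a fixed bandwidth budget, can be sketched with a crude two-level foveation. The block size and foveal radius are arbitrary illustrative choices; real foveated sensors grade resolution continuously with eccentricity.

```python
import numpy as np

def foveate(img, fovea, full_radius=8, block=4):
    """Crude foveation: keep full resolution within `full_radius` of the fovea,
    replace the periphery with block-averaged (lower-resolution) values."""
    out = img.astype(float).copy()
    h, w = img.shape
    y, x = np.indices((h, w))
    dist = np.hypot(y - fovea[0], x - fovea[1])
    # Block-average the whole image, then paste it back only in the periphery.
    coarse = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    coarse_full = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    out[dist > full_radius] = coarse_full[dist > full_radius]
    return out

img = np.arange(256.0).reshape(16, 16)
fov = foveate(img, (8, 8))               # fovea at the image centre
```

At equal bandwidth, the pixels saved in the periphery can be spent at the fovea, which is why fovea placement (here fixed, in the paper driven by mutual information or Kalman error) dominates performance.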
Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype
NASA Astrophysics Data System (ADS)
Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille
2012-06-01
The domain of low-light imaging systems is progressing very fast, thanks to the evolution of detection and electron-multiplication technology such as the emCCD (electron multiplying CCD) or the ebCMOS (electron bombarded CMOS). We present an ebCMOS camera system that is able to track, every 2 ms, more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack-Hartmann) used in adaptive optics. The multiple-target-tracking algorithm designed and implemented on a rugged workstation is described. The results and performance of the system on identification and tracking are presented and discussed.
Image-guided surgery and therapy: current status and future directions
NASA Astrophysics Data System (ADS)
Peters, Terence M.
2001-05-01
Image-guided surgery and therapy is assuming an increasingly important role, particularly considering the current emphasis on minimally-invasive surgical procedures. Volumetric CT and MR images have been used now for some time in conjunction with stereotactic frames, to guide many neurosurgical procedures. With the development of systems that permit surgical instruments to be tracked in space, image-guided surgery now includes the use of frame-less procedures, and the application of the technology has spread beyond neurosurgery to include orthopedic applications and therapy of various soft-tissue organs such as the breast, prostate and heart. Since tracking systems allow image- guided surgery to be undertaken without frames, a great deal of effort has been spent on image-to-image and image-to- patient registration techniques, and upon the means of combining real-time intra-operative images with images acquired pre-operatively. As image-guided surgery systems have become increasingly sophisticated, the greatest challenges to their successful adoption in the operating room of the future relate to the interface between the user and the system. To date, little effort has been expended to ensure that the human factors issues relating to the use of such equipment in the operating room have been adequately addressed. Such systems will only be employed routinely in the OR when they are designed to be intuitive, unobtrusive, and provide simple access to the source of the images.
CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor
2004-05-01
Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image-guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual-view visualization, registration, real-time tracking, segmentation, etc.) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.
Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng
2017-02-15
Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the "integrated image" on semi-transparent glass. Via the registration process, the integrated image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications.
Person detection, tracking and following using stereo camera
NASA Astrophysics Data System (ADS)
Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping
2018-04-01
Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and can thus predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, by using a stereo 3D sparse reconstruction algorithm, not only is the position of the person in the scene determined, but the problem of scale ambiguity in the video tracker is also elegantly solved. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
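The discriminative correlation-filter idea underlying KCF can be sketched in its simplest (MOSSE-like, linear, single-channel) form: train a filter in the Fourier domain by per-frequency ridge regression so that it responds to the initial patch with a sharp Gaussian, then read the target's translation off the response peak in later frames. This illustrates the principle only; the actual KCF uses kernels, multi-channel HOG features, and online model updates.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Ridge regression in the Fourier domain: closed-form per frequency bin,
    solving for a filter H whose response to `patch` is `target_response`."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlate a new patch with the filter; the peak is the target location."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

# Train on a patch with the target at (12, 12); desired response is a sharp Gaussian.
y, x = np.indices((32, 32))
patch = np.exp(-((x - 12)**2 + (y - 12)**2) / 8.0)
desired = np.exp(-((x - 12)**2 + (y - 12)**2) / 2.0)
H = train_filter(patch, desired)

# A circularly shifted test patch: the response peak shifts with the target.
shifted = np.roll(np.roll(patch, 5, axis=0), 3, axis=1)   # target now at (17, 15)
peak = np.unravel_index(respond(H, shifted).argmax(), (32, 32))
```

Because correlation is shift-equivariant, the peak tracks the target's translation; all the work happens as elementwise operations in the Fourier domain, which is what makes this family of trackers fast enough for real-time robot following.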
NASA Astrophysics Data System (ADS)
Misra, S. K.; Mukherjee, P.; Ohoka, A.; Schwartz-Duval, A. S.; Tiwari, S.; Bhargava, R.; Pan, D.
2016-01-01
Simultaneous tracking of nanoparticles and encapsulated payload is of great importance and visualizing their activity is arduous. Here we use vibrational spectroscopy to study the in vitro tracking of co-localized lipid nanoparticles and encapsulated drug employing a model system derived from doxorubicin-encapsulated deuterated phospholipid (dodecyl phosphocholine-d38) single tailed phospholipid vesicles. Electronic supplementary information (ESI) available: Raman and confocal images of the Deuto-DOX-NPs in cells, materials and details of methods. See DOI: 10.1039/c5nr07975f
Zurauskas, Mantas; Bradu, Adrian; Ferguson, Daniel R; Hammer, Daniel X; Podoleanu, Adrian
2016-03-01
This paper presents a novel instrument for the biosciences, useful for studies of moving embryos. A dual sequential imaging/measurement channel is assembled via a closed-loop tracking architecture. The dual channel system can operate in two regimes: (i) single-point Doppler signal monitoring or (ii) fast 3-D swept source OCT imaging. The system is demonstrated for characterizing cardiac dynamics in Drosophila melanogaster larvae. Closed-loop tracking enables long-term in vivo monitoring of the larval heart without anaesthetic or physical restraint. Such an instrument can be used to measure subtle variations in cardiac behaviour otherwise obscured by the larvae's movements. A fruit fly larva was continuously tracked for remote monitoring, and a heartbeat trace of the freely moving larva was obtained by a low-coherence-interferometry-based Doppler sensing technique. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Active Multimodal Sensor System for Target Recognition and Tracking
Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen
2017-01-01
High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system. PMID:28657609
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Online geometrical calibration of a mobile C-arm using external sensors
NASA Astrophysics Data System (ADS)
Mitschke, Matthias M.; Navab, Nassir; Schuetz, Oliver
2000-04-01
3D tomographic reconstruction of high contrast objects such as contrast agent enhanced blood vessels or bones from x-ray images acquired by isocentric C-arm systems has recently gained interest. For tomographic reconstruction, a sequence of images is captured during the C-arm rotation around the patient, and the precise projection geometry has to be determined for each image. This is a difficult task, as C-arms usually do not provide accurate information about their projection geometry. Standard methods propose the use of an x-ray calibration phantom and an offline calibration, where the motion of the C-arm is assumed to be reproducible between the calibration and the patient run. However, mobile C-arms usually do not have this desirable property. Therefore, an online recovery of the projection geometry is necessary. Here, we study the use of external tracking systems such as Polaris or Optotrak from Northern Digital, Inc., for online calibration. In order to use the external tracking system for recovery of the x-ray projection geometry, two unknown transformations have to be estimated: the relation between the x-ray imaging system and the marker plate of the tracking system, and that between the world and sensor coordinate systems. We describe our attempt to solve this calibration problem. Experimental results on anatomical data are presented and visually compared with the results of estimating the projection geometry with an x-ray calibration phantom.
Optical design of laser zoom projective lens with variable total track
NASA Astrophysics Data System (ADS)
He, Yulan; Xiao, Xiangguo; Lu, Feng; Li, Yuan; Han, Kunye; Wang, Nanxi; Qiang, Hua
2017-02-01
In order to project laser command information to the proper distance, a laser zoom projective lens with a variable total track is designed for the carrier-based aircraft landing system. By choosing the zoom structure, designing the initial structure with the PW solution, and correcting and balancing the aberrations, a large variable total track with a 35× zoom is achieved. The image size is fixed at φ25 m, while the projection distance varies from 100 m to 3500 m. In the optical reverse design, the spot size is less than 8 μm and the MTF is near the diffraction limit, with an MTF value greater than 0.4 at 50 lp/mm.
Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.
Dzyubak, Oleksandr P; Ritman, Erik L
2011-01-01
The blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure that spans a range of structural scales, from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is simply making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern-day 3D imagers, manual tracking of the complex multiscale parameters from those large image data sets is almost impossible. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automation of an adaptive nonsupervised system for tracking tubular objects based on a multiscale framework and the use of a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
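The multiscale Hessian-based tubularity response described in this record can be sketched as follows (a 2D Frangi-style illustration in Python with NumPy/SciPy, not the authors' ITK-based 3D implementation; the scale set and the beta and c parameters are illustrative values for an intensity-normalized image):

```python
import numpy as np
from scipy import ndimage

def tubularity_2d(image, sigmas=(1.0, 2.0, 4.0), beta=0.5, c=0.5):
    """Multiscale Hessian-based tubularity (Frangi-style) response for a 2D image.

    Returns the maximum vesselness over the given scales for bright tubes
    on a dark background.
    """
    response = np.zeros_like(image, dtype=float)
    for sigma in sigmas:
        # Second-order Gaussian derivatives form the Hessian at this scale;
        # the sigma**2 factor normalizes responses across scales.
        hxx = sigma**2 * ndimage.gaussian_filter(image, sigma, order=(0, 2))
        hyy = sigma**2 * ndimage.gaussian_filter(image, sigma, order=(2, 0))
        hxy = sigma**2 * ndimage.gaussian_filter(image, sigma, order=(1, 1))
        # Eigenvalues of [[hxx, hxy], [hxy, hyy]].
        tmp = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy**2)
        mean = (hxx + hyy) / 2.0
        l1, l2 = mean + tmp, mean - tmp
        # Sort so that |l1| >= |l2| at every pixel.
        swap = np.abs(l1) < np.abs(l2)
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb = np.abs(l2) / (np.abs(l1) + 1e-12)  # line-vs-blob ratio
        s = np.sqrt(l1**2 + l2**2)              # overall structure strength
        v = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s**2 / (2 * c**2)))
        v[l1 > 0] = 0.0  # bright tubes require a strongly negative eigenvalue
        response = np.maximum(response, v)
    return response
```

On a synthetic image containing a one-pixel bright line, the response peaks along the line and stays near zero in the flat background.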
Tracked 3D ultrasound in radio-frequency liver ablation
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Fichtinger, Gabor; Taylor, Russell H.; Choti, Michael A.
2003-05-01
Recent studies have shown that radio frequency (RF) ablation is a simple, safe and potentially effective treatment for selected patients with liver metastases. Despite all recent therapeutic advancements, however, intra-procedural target localization and precise and consistent placement of the tissue ablator device are still unsolved problems. Various imaging modalities, including ultrasound (US) and computed tomography (CT) have been tried as guidance modalities. Transcutaneous US imaging, due to its real-time nature, may be beneficial in many cases, but unfortunately, fails to adequately visualize the tumor in many cases. Intraoperative or laparoscopic US, on the other hand, provides improved visualization and target imaging. This paper describes a system for computer-assisted RF ablation of liver tumors, combining navigational tracking of a conventional imaging ultrasound probe to produce 3D ultrasound imaging with a tracked RF ablation device supported by a passive mechanical arm and spatially registered to the ultrasound volume.
Single slice US-MRI registration for neurosurgical MRI-guided US
NASA Astrophysics Data System (ADS)
Pardasani, Utsav; Baxter, John S. H.; Peters, Terry M.; Khan, Ali R.
2016-03-01
Image-based ultrasound to magnetic resonance image (US-MRI) registration can be an invaluable tool in image-guided neuronavigation systems. State-of-the-art commercial and research systems utilize image-based registration to assist in functions such as brain-shift correction, image fusion, and probe calibration. Since traditional US-MRI registration techniques use reconstructed US volumes or a series of tracked US slices, the functionality of this approach can be compromised by the limitations of optical or magnetic tracking systems in the neurosurgical operating room. These drawbacks include ergonomic issues, line-of-sight/magnetic interference, and maintenance of the sterile field. For those seeking a US vendor-agnostic system, these issues are compounded with the challenge of instrumenting the probe without permanent modification and calibrating the probe face to the tracking tool. To address these challenges, this paper explores the feasibility of a real-time US-MRI volume registration in a small virtual craniotomy site using a single slice. We employ the Linear Correlation of Linear Combination (LC2) similarity metric in its patch-based form on data from MNI's Brain Images for Tumour Evaluation (BITE) dataset as a PyCUDA enabled Python module in Slicer. By retaining the original orientation information, we are able to improve on the poses using this approach. To further assist the challenge of US-MRI registration, we also present the BOXLC2 metric which demonstrates a speed improvement to LC2, while retaining a similar accuracy in this context.
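The patch-based LC2 similarity used in this record can be sketched roughly as follows (a simplified 2D NumPy illustration, not the authors' PyCUDA module; it follows the common formulation in which the US patch is fit as a linear combination of MRI intensity, MRI gradient magnitude, and a constant offset, and LC2 is the fraction of US variance explained by that fit):

```python
import numpy as np

def lc2_patch(us_patch, mri_patch):
    """Patch-wise LC2 similarity between an ultrasound and an MRI patch."""
    us = us_patch.ravel().astype(float)
    # Gradient magnitude of the MRI patch as the second regressor.
    gy, gx = np.gradient(mri_patch.astype(float))
    grad = np.sqrt(gx**2 + gy**2).ravel()
    # Design matrix: MRI intensity, MRI gradient magnitude, constant offset.
    A = np.column_stack([mri_patch.ravel().astype(float), grad, np.ones_like(us)])
    coeffs, *_ = np.linalg.lstsq(A, us, rcond=None)
    residual = us - A @ coeffs
    var_us = us.var()
    if var_us < 1e-12:  # a uniform US patch carries no information
        return 0.0
    return 1.0 - residual.var() / var_us
```

When the US patch really is a linear combination of the MRI intensity and gradient, the score approaches 1; for unrelated patches it stays low.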
Lynch, Adam E; Triajianto, Junian; Routledge, Edwin
2014-01-01
Direct visualisation of cells for the purpose of studying their motility has typically required expensive microscopy equipment. However, recent advances in digital sensors mean that it is now possible to image cells for a fraction of the price of a standard microscope. Along with low-cost imaging there has also been a large increase in the availability of high quality, open-source analysis programs. In this study we describe the development and performance of an expandable cell motility system employing inexpensive, commercially available digital USB microscopes to image various cell types using time-lapse and perform tracking assays in proof-of-concept experiments. With this system we were able to measure and record three separate assays simultaneously on one personal computer using identical microscopes, and obtained tracking results comparable in quality to those from other studies that used standard, more expensive, equipment. The microscopes used in our system were capable of a maximum magnification of 413.6×. Although resolution was lower than that of a standard inverted microscope we found this difference to be indistinguishable at the magnification chosen for cell tracking experiments (206.8×). In preliminary cell culture experiments using our system, velocities (mean µm/min ± SE) of 0.81 ± 0.01 (Biomphalaria glabrata hemocytes on uncoated plates), 1.17 ± 0.004 (MDA-MB-231 breast cancer cells), 1.24 ± 0.006 (SC5 mouse Sertoli cells) and 2.21 ± 0.01 (B. glabrata hemocytes on Poly-L-Lysine coated plates), were measured and are consistent with previous reports. We believe that this system, coupled with open-source analysis software, demonstrates that higher throughput time-lapse imaging of cells for the purpose of studying motility can be an affordable option for all researchers.
Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system
NASA Astrophysics Data System (ADS)
Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio
2017-03-01
It is important to investigate the eye-tracking gaze points of experts in order to assist trainees in understanding the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can be brought to the level achieved by experts in viewing CTC. We used an eye gaze point sensing system, Gazefinder (JVCKENWOOD Corporation, Tokyo, Japan), which detects the pupil point and corneal reflection point by dark-pupil eye tracking. This system provides gaze-point images and Excel data files. The subjects were radiological technologists experienced and inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined the observers' image interpretation process using gaze point data. Furthermore, we performed an eye-tracking frequency analysis using the Fast Fourier Transform (FFT). We were able to understand the difference in gaze points between experts and trainees by use of the frequency analysis. The result for the trainee showed a large amount of both high-frequency and low-frequency components. In contrast, both components for the expert were relatively low. Regarding the amount of eye movement in every 0.02 s, we found that the expert tended to interpret images slowly and calmly, whereas the trainee moved the eyes quickly and scanned wide areas. We can assess the difference in gaze points on CTC between experts and trainees by use of the eye gaze point sensing system and the frequency analysis. The potential improvements in CTC interpretation for trainees can be evaluated by using gaze point data.
Single-chip microcomputer for image processing in the photonic measuring system
NASA Astrophysics Data System (ADS)
Smoleva, Olga S.; Ljul, Natalia Y.
2002-04-01
The non-contact measuring system has been designed for rail-track parameter control on the Moscow Metro. It detects several significant parameters: rail-track width, rail-track height, gage, rail-slums, crosslevel, pickets, and car speed. The system consists of three subsystems: a non-contact system for rail-track width, height, and gage inspection; a non-contact system for rail-slums inspection; and a subsystem for crosslevel, speed, and picket detection. Data from the subsystems are transferred to a pre-processing unit. In order to process the data received from the subsystems, the ADSP-2185 single-chip signal processor is used, as it provides the required processing speed. The processed data are then sent to a PC, which processes them further and outputs them in readable form.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta
Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with the Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm.
The error values differed by less than 1 mm from the 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). Conclusions: The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for the visual illustration of the progress of the tracking state and for the quantification of positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
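The per-frame positional-error computation described in this record (Y-direction distance between the center of the exposed target and the center of the exposed field) can be sketched as follows (a hypothetical NumPy illustration; the mask inputs and the pixel_mm scale are assumptions, not details from the paper):

```python
import numpy as np

def centroid(weights):
    """Intensity-weighted centroid (row, col) of a 2D array."""
    ys, xs = np.indices(weights.shape)
    total = weights.sum()
    return (ys * weights).sum() / total, (xs * weights).sum() / total

def y_tracking_error(target_mask, field_mask, pixel_mm=1.0):
    """Absolute Y (cranio-caudal) distance between target and field centers, in mm."""
    ty, _ = centroid(target_mask.astype(float))
    fy, _ = centroid(field_mask.astype(float))
    return abs(ty - fy) * pixel_mm
```

Applied frame by frame to a video sequence, this yields the positional-error trace from which mean and standard deviation statistics can be summarized.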
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim, small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim, small targets. In general, the HVS involves at least three mechanisms: the contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to a number of drawbacks. A novel method that combines all three HVS mechanisms is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target, named the attention point, in order to further enhance the dim, small target. Finally, a Proportional-Integral-Derivative (PID) algorithm is introduced to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim, small targets.
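Two of the three HVS mechanisms described in this record can be sketched as follows (an illustrative Python outline, not the authors' code; the DOG scale pairs and the PID gains are assumed values):

```python
import numpy as np
from scipy import ndimage

def dog_response(image, sigma_pairs=((1, 2), (2, 4), (3, 6))):
    """Max response of a bank of Difference-of-Gaussians filters (contrast mechanism)."""
    image = image.astype(float)
    out = np.full(image.shape, -np.inf)
    for s_small, s_large in sigma_pairs:
        dog = ndimage.gaussian_filter(image, s_small) - ndimage.gaussian_filter(image, s_large)
        out = np.maximum(out, dog)
    return out

class PidPredictor:
    """PID-style predictor for the next attention point (eye-movement mechanism)."""
    def __init__(self, kp=0.8, ki=0.05, kd=0.3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = np.zeros(2)
        self.prev_error = np.zeros(2)
        self.estimate = None

    def update(self, measured_xy):
        measured = np.asarray(measured_xy, dtype=float)
        if self.estimate is None:            # initialize on the first detection
            self.estimate = measured.copy()
            return self.estimate.copy()
        error = measured - self.estimate
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.estimate += self.kp * error + self.ki * self.integral + self.kd * derivative
        return self.estimate.copy()
```

The DOG bank highlights a point-like target against slowly varying background, and the PID predictor follows the detected attention point from frame to frame with a small lag.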
NASA Astrophysics Data System (ADS)
Dunkerley, David A. P.; Funk, Tobias; Speidel, Michael A.
2016-03-01
Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full-FOV mode. The dose-area-product (DAP) and peak skin dose (PSD) for DEC versus full field-of-view (FOV) scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared-deviation between DEC-based 3D tracking coordinates and full-FOV 3D tracking coordinates was less than 0.1 mm. The 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance.
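The motion-vector ROI update described in this record (linear extrapolation from the two most recent 3D tracking results, followed by selection of the active focal spots around the predicted center) might look like this in outline (a hypothetical Python sketch; the grid-shape and radius parameters are assumptions, not SBDX specifics):

```python
def next_roi_center(prev_pos, curr_pos):
    """Predict the next ROI center by linear extrapolation of the two
    most recent tracking results (the motion-vector update)."""
    return tuple(c + (c - p) for p, c in zip(prev_pos, curr_pos))

def active_spots(grid_shape, center_rc, radius):
    """Indices of focal-spot positions within `radius` of the ROI center
    in the 2D focal-spot array; the remaining spots stay inactive."""
    rows, cols = grid_shape
    spots = []
    for r in range(rows):
        for c in range(cols):
            if (r - center_rc[0]) ** 2 + (c - center_rc[1]) ** 2 <= radius ** 2:
                spots.append((r, c))
    return spots
```

Because only the spots near the predicted catheter position are activated, the dose scales roughly with the fraction of active spots, as the DAP figures in the abstract reflect.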
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Pline, Alexander D.
1991-01-01
The Surface Tension Driven Convection Experiment (STDCE) is a Space Transportation System flight experiment to study both transient and steady thermocapillary fluid flows aboard the USML-1 Spacelab mission planned for 1992. One of the components of data collected during the experiment is a video record of the flow field. This qualitative data is then quantified using an all electronic, two-dimensional particle image velocimetry technique called particle displacement tracking (PDT) which uses a simple space domain particle tracking algorithm. The PDT system is successful in producing velocity vector fields from the raw video data. Application of the PDT technique to a sample data set yielded 1606 vectors in 30 seconds of processing time. A bottom viewing optical arrangement is used to image the illuminated plane, which causes keystone distortion in the final recorded image. A coordinate transformation was incorporated into the system software to correct this viewing angle distortion. PDT processing produced 1.8 percent false identifications, due to random particle locations. A highly successful routine for removing the false identifications was also incorporated, reducing the number of false identifications to 0.2 percent.
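The space-domain particle tracking step underlying PDT can be sketched as a greedy nearest-neighbor match between consecutive frames, with a displacement threshold to reject false identifications (an illustrative NumPy sketch, not the original STDCE software):

```python
import numpy as np

def match_particles(frame_a, frame_b, max_disp):
    """Greedy nearest-neighbor particle matching between two frames.

    Returns (index_a, index_b) pairs; candidate pairs farther apart than
    max_disp are rejected as likely false identifications.
    """
    a = np.asarray(frame_a, dtype=float)
    b = np.asarray(frame_b, dtype=float)
    # Pairwise distance matrix between particles in the two frames.
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    pairs = []
    # Process the most confident (closest) candidates first.
    for i in np.argsort(dists.min(axis=1)):
        j = int(np.argmin(dists[i]))
        if dists[i, j] <= max_disp:
            pairs.append((int(i), j))
            dists[:, j] = np.inf  # each particle in frame_b is matched at most once
    return pairs
```

Displacement vectors from the accepted pairs give the velocity field; unmatched or over-threshold candidates correspond to the false identifications that PDT's rejection routine removes.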
Development of a multitarget tracking system for paramecia
NASA Astrophysics Data System (ADS)
Yeh, Yu-Sing; Huang, Ke-Nung; Jen, Sun-Lon; Li, Yan-Chay; Young, Ming-Shing
2010-07-01
This investigation develops a multitarget tracking system for the motile protozoan, paramecium. The system can recognize, track, and record the orbits of swimming paramecia within a 4 mm diameter circular experimental pool. The proposed system is implemented using an optical microscope, a charge-coupled device camera, and a software tool, Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW). An algorithm for processing the images and analyzing the traces of the paramecia is developed in LabVIEW. It focuses on extracting meaningful data in an experiment and recording them to elucidate the behavior of paramecia. The algorithm can continue to track paramecia even if they are transposed or collide with each other. The experiment demonstrates that this multitarget tracking design can track more than five paramecia and simultaneously yield meaningful data from the moving paramecia at a maximum speed of 1.7 mm/s.
NASA Technical Reports Server (NTRS)
Agurok, Llya
2013-01-01
The Hyperspectral Imager-Tracker (HIT) is a technique for visualization and tracking of low-contrast, fast-moving objects. The HIT architecture is based on an innovative and only recently developed concept in imaging optics. This architecture will give the Light Prescriptions Innovators (LPI) HIT the ability to simultaneously collect spectral band images (the hyperspectral cube) and IR images, and to operate with high light-gathering power and high magnification for multiple fast-moving objects. Adaptive spectral filtering algorithms will efficiently increase the contrast of low-contrast scenes. The most hazardous parts of a space mission are the first stage of a launch and the last 10 kilometers of the landing trajectory. In general, a close watch on spacecraft operation is required at distances up to 70 km. Tracking at such distances is usually associated with the use of radar, but its milliradian angular resolution translates to 100-m spatial resolution at a 70-km distance. With sufficient power, radar can track a spacecraft as a whole object, but will not provide detail in the case of an accident, particularly for small debris in the one-meter range, which can only be achieved optically. It will be important to track the debris, which could disintegrate further into more debris, all the way to the ground. Such fragmentation could cause ballistic predictions, based on observations using high-resolution but narrow-field optics for only the first few seconds of the event, to be inaccurate. No existing optical imager architecture satisfies NASA requirements. The HIT was developed for space vehicle tracking, in-flight inspection, and, in the case of an accident, detailed recording of the event.
The system is a combination of five subsystems: (1) a roving fovea telescope with a wide 30° field of regard; (2) narrow, high-resolution fovea field optics; (3) a Coude optics system for telescope output beam stabilization; (4) a hyperspectral-multispectral imaging assembly; and (5) image analysis software with an effective adaptive spectral filtering algorithm for real-time contrast enhancement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Via, Riccardo, E-mail: riccardo.via@polimi.it; Fassi, Aurora; Fattori, Giovanni
Purpose: External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Methods: Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Results: Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz.
Conclusions: A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The device aims at improving state-of-the-art invasive procedures based on surgical implantation of radiopaque clips and repeated acquisition of X-ray images, with expected positive effects on treatment quality and patient outcome.
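The torsional-movement detection by template matching in the iris region, as described in this record, can be sketched as a rotational search that maximizes normalized cross-correlation against a reference iris patch (an illustrative Python/SciPy sketch, not the authors' implementation; the search range and angular step are assumed values):

```python
import numpy as np
from scipy import ndimage

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def torsion_angle(reference_iris, current_iris, search_deg=5.0, step=0.25):
    """Estimate ocular torsion (degrees) by rotating the reference iris patch
    and keeping the angle with the highest correlation to the current patch."""
    best_angle, best_score = 0.0, -np.inf
    for angle in np.arange(-search_deg, search_deg + step, step):
        rotated = ndimage.rotate(reference_iris, angle, reshape=False, mode='nearest')
        score = ncc(rotated, current_iris)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```

A finer step (or parabolic interpolation around the best angle) would be needed to approach the sub-degree uncertainties reported in the abstract.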
Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.
Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena
2014-11-01
A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware and software synchronous image acquisition with timestamp embedding in the captured images, a brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object at different positions, orientations, linear, and angular speeds. The system detects the position and orientation of an immobile object with maximum errors of 0.5 mm and 1.6° across the whole depth of field, and tracks a moving object at speeds up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient, immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system to measure brain shift and pulsatility with higher accuracy than other reported systems.
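The geometric core of any such stereo tracking system is recovering a 3D point from two calibrated views, which can be sketched as midpoint triangulation of the two back-projected camera rays. The following Python/NumPy fragment is a minimal illustration under our own assumptions (the function name and interface are ours, not the authors'; a real pipeline would also handle lens distortion and calibration):

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint triangulation: 3D point closest to two camera rays,
    each given by an origin o and a direction d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(o1 + t1 d1) - (o2 + t2 d2)|:
    # the residual must be orthogonal to both ray directions.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    # Midpoint between the two closest points on the rays.
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2.0
```

For exactly intersecting rays the midpoint coincides with the true point; with noisy detections it gives the least-squares compromise between the two views.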
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
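The matching stage that FLANN accelerates is, at bottom, nearest-neighbor search between descriptor sets with a ratio test to reject ambiguous matches. A brute-force sketch of that stage in Python/NumPy (FLANN approximates this same search; the function name, the 0.75 ratio, and the toy descriptors below are our own illustration, not the paper's code):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Lowe-style ratio-test matching: for each descriptor in desc_a,
    keep its nearest neighbour in desc_b only when that neighbour is
    clearly better than the second nearest."""
    desc_a = np.asarray(desc_a, float)
    desc_b = np.asarray(desc_b, float)
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # L2 to every candidate
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))      # unambiguous match
    return matches
```

The ratio test is what keeps repetitive texture from producing false correspondences; FLANN trades the exhaustive distance computation for approximate k-d tree or hierarchical search at large descriptor counts.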
Three-dimensional liver motion tracking using real-time two-dimensional MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brix, Lau, E-mail: lau.brix@stab.rm.dk; Ringgaard, Steffen; Sørensen, Thomas Sangild
2014-04-15
Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions.
Results: Axial, sagittal, and coronal 2D MRI series yielded 3D respiratory motion curves for all volunteers. The motion directionality and amplitude were very similar when measured directly as in-plane motion or estimated indirectly as through-plane motion. The mean peak-to-peak breathing amplitude was 1.6 mm (left-right), 11.0 mm (craniocaudal), and 2.5 mm (anterior-posterior). The position of the watermelon structure was estimated in 2D MRI images with a root-mean-square error of 0.52 mm (in-plane) and 0.87 mm (through-plane). Conclusions: A method for 3D tracking in 2D MRI series was developed and demonstrated for liver tracking in volunteers. The method would allow real-time 3D localization with integrated MR-Linac systems.
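Step (3) above, choosing the template and in-plane position that maximize the normalized cross correlation coefficient, can be sketched with an exhaustive 2D search. This Python/NumPy fragment is a simplified stand-in for illustration only (the `ncc` and `track` names and the brute-force scan are our own; a production implementation would use FFT-based correlation):

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation coefficient of two equal-size arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def track(image, templates):
    """Return (template_index, row, col) maximizing NCC over all templates
    in the library and all in-plane positions (exhaustive search).
    The winning template index encodes the through-plane position."""
    best = (-2.0, None, None, None)
    for k, tpl in enumerate(templates):
        th, tw = tpl.shape
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                score = ncc(image[r:r + th, c:c + tw], tpl)
                if score > best[0]:
                    best = (score, k, r, c)
    return best[1], best[2], best[3]
```

Because each library template corresponds to a known 3D offset of the tracked structure, the returned template index plus the in-plane (row, col) maximum together give the 3D position, exactly as the abstract describes.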
Suppression of fixed pattern noise for infrared image system
NASA Astrophysics Data System (ADS)
Park, Changhan; Han, Jungsoo; Bae, Kyung-Hoon
2008-04-01
In this paper, we propose suppression of fixed pattern noise (FPN) and compensation of soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image when non-uniformity compensation (NUC) is applied at a different temperature. Soft defects appear as glittering black and white points caused by the time-varying non-uniformity characteristics of the IR detector. This problem is important because it seriously degrades both object tracking and image quality. The signal processing architecture of the cooled staring IRFPA imaging system uses three tables of reference gain and offset values: low, normal, and high temperature. The proposed method operates two offset tables for each of these, covering six temperature ranges in total. The proposed soft defect compensation consists of three stages: (1) separate the image into sub-images, (2) determine the motion distribution of objects between sub-images, and (3) analyze the statistical characteristics of each stationary fixed pixel. Experimental results show that the proposed method suppresses, in real time, the FPN caused by changes in the temperature distribution of the observed image.
A post-processing system for automated rectification and registration of spaceborne SAR imagery
NASA Technical Reports Server (NTRS)
Curlander, John C.; Kwok, Ronald; Pang, Shirley S.
1987-01-01
An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.
Weighted feature selection criteria for visual servoing of a telerobot
NASA Technical Reports Server (NTRS)
Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.
1989-01-01
Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
The Track Imaging Cerenkov Experiment
NASA Technical Reports Server (NTRS)
Wissel, S. A.; Byrum, K.; Cunningham, J. D.; Drake, G.; Hays, E.; Horan, D.; Kieda, D.; Kovacs, E.; Magill, S.; Nodulman, L.;
2011-01-01
We describe a dedicated cosmic-ray telescope that explores a new method for detecting Cerenkov radiation from high-energy primary cosmic rays and the large particle air showers they induce upon entering the atmosphere. Using a camera comprising 16 multi-anode photomultiplier tubes for a total of 256 pixels, the Track Imaging Cerenkov Experiment (TrICE) resolves substructures in particle air showers with 0.086 deg resolution. Cerenkov radiation is imaged using a novel two-part optical system in which a Fresnel lens provides a wide-field optical trigger and a mirror system collects delayed light with four times the magnification. TrICE records well-resolved cosmic-ray air showers at rates between 0.01 and 0.1 Hz.
A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).
Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A
2013-01-01
The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Fang, Ting Yun; Finocchi, Rodolfo; Boctor, Emad M.
2017-03-01
Three-dimensional (3D) ultrasound imaging is becoming a standard mode for medical ultrasound diagnosis. Conventional 3D ultrasound imaging is mostly scanned either by using a two-dimensional matrix array or by motorizing a one-dimensional array in the elevation direction. However, the former system is not widely accessible due to its cost, and the latter has limited resolution and field-of-view in the elevation axis. Here, we propose a 3D ultrasound imaging system based on the synthetic tracked aperture approach, in which a robotic arm is used to provide accurate tracking and motion. While the ultrasound probe is moved by the robotic arm, each probe position is tracked and can be used to reconstruct a wider field-of-view, as there are no physical barriers that restrict the elevational scanning. At the same time, synthetic aperture beamforming provides better resolution in the elevation axis. To synthesize the elevational information, the single focal point is regarded as a virtual element, and forward and backward delay-and-sum are applied to the radio-frequency (RF) data collected through the volume. The concept is experimentally validated using a general ultrasound phantom, and elevational resolution improvements of 2.54 and 2.13 times were measured at target depths of 20 mm and 110 mm, respectively.
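The delay-and-sum idea behind synthetic aperture beamforming is simple to sketch: each tracked probe position is treated as a virtual element, and its RF line is delayed by the extra round-trip path to the focus before summation so that the echoes add coherently. The Python/NumPy fragment below is a heavily simplified 1D illustration under our own assumptions (function name, 1540 m/s sound speed, and 40 MHz sampling are ours; real forward/backward synthetic aperture reconstruction is considerably more involved):

```python
import numpy as np

def das_elevation(rf, elev_positions, focus, c=1540.0, fs=40e6):
    """Delay-and-sum across elevational probe positions: each position
    acts as a virtual element whose RF line is shifted by the round-trip
    path difference to the focus (x, z) before summation."""
    out = np.zeros(rf.shape[1])
    z = focus[1]
    for line, x in zip(rf, elev_positions):
        d = np.sqrt((x - focus[0]) ** 2 + z ** 2)        # path to focus
        delay = int(round((2 * (d - z) / c) * fs))        # extra samples vs on-axis
        out += np.roll(line, -delay)                      # align, then sum
    return out
```

Off-axis elements see a slightly longer path, so their echoes arrive later; removing that delay before summing is what sharpens the elevational focus relative to a single fixed-lens scan.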
An optical processor for object recognition and tracking
NASA Technical Reports Server (NTRS)
Sloan, J.; Udomkesmalee, S.
1987-01-01
The design and development of a miniaturized optical processor that performs real-time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and digital processing could result in a very advanced robot vision system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muralidhar, K Raja; Pangam, Suresh; Ponaganti, Srinivas
2016-06-15
Purpose: (1) Online verification of patient position during treatment using the Calypso electromagnetic localization and tracking system; (2) verification and comparison of positional accuracy between cone beam computed tomography (CBCT) and the Calypso system; (3) presenting the advantage of continuous localization in stereotactic radiosurgery treatments. Methods: Ten brain tumor cases were included in this study. Patients underwent computed tomography (CT) with a head mask; before scanning, the mask was cut over the forehead so that surface beacons could be placed on the skin. A slice thickness of 0.65 mm was used. The x, y, z coordinates of the beacons in the TPS were entered into the tracking station. A Varian TrueBeam accelerator equipped with an On-Board Imager was used to acquire CBCT images to localize the patient. Simultaneously, the surface beacons were used to localize and track the patient throughout the treatment. The localization values were compared between the two systems, with CBCT taken as the reference. Tracking was performed throughout the treatment with the Calypso system using an electromagnetic array, which remained in tracking position during imaging and treatment. Flattening-filter-free 6 MV photon beams with Volumetric Modulated Arc Therapy were used for the treatment. Patient movement was observed throughout treatments lasting 2 to 4 min. Results: The average variation between Calypso and CBCT localization was less than 0.5 mm; these variations were due to manual errors in placing the beacons on the patient. Intra-fraction motion of less than 0.05 cm was observed throughout the treatment with the help of continuous tracking. Conclusion: The Calypso target localization system, in combination with CBCT, is one of the finest tools for performing radiosurgery.
This non-radiographic tracking method is truly beneficial for treating patients confidently while observing real-time motion information of the patient.
Direction sensitive neutron detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlen, Steven; Fisher, Peter; Dujmic, Denis
2017-01-31
A neutron detector includes a pressure vessel, an electrically conductive field cage assembly within the pressure vessel and an imaging subsystem. A pressurized gas mixture of CF.sub.4, .sup.3He and .sup.4He at respective partial pressures is used. The field cage establishes a relatively large drift region of low field strength, in which ionization electrons generated by neutron-He interactions are directed toward a substantially smaller amplification region of substantially higher field strength in which the ionization electrons undergo avalanche multiplication resulting in scintillation of the CF.sub.4 along scintillation tracks. The imaging system generates two-dimensional images of the scintillation patterns and employs track-finding to identify tracks and deduce the rate and direction of incident neutrons. One or more photo-multiplier tubes record the time-profile of the scintillation tracks permitting the determination of the third coordinate.
Fuzzy logic particle tracking velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1993-01-01
Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques which typically require a sequence (greater than 2) of image frames for accurately tracking particles. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each, results in more than 200 velocity vectors in under 8 seconds of processing time.
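The two-frame pairing problem above can be sketched with a pair of fuzzy rules: a candidate match is favored when its displacement is small and when it agrees with the prevailing displacement of the neighborhood. The Python/NumPy fragment below is our own minimal illustration of that idea, not the paper's rule base (the triangular membership functions, the global mean prior, and all names are assumptions):

```python
import numpy as np

def tri(x, center, width):
    """Triangular fuzzy membership function on [center-width, center+width]."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fuzzy_track(frame1, frame2, max_disp=5.0):
    """Pair each particle in frame1 with the frame2 candidate whose
    displacement best satisfies two fuzzy rules, combined by fuzzy AND:
    'displacement is small' and 'displacement agrees with the prior'."""
    frame1 = np.asarray(frame1, float)
    frame2 = np.asarray(frame2, float)
    # Crude prior: mean of the nearest-neighbour displacement guesses.
    guesses = [frame2[np.argmin(np.linalg.norm(frame2 - p, axis=1))] - p
               for p in frame1]
    prior = np.mean(guesses, axis=0)
    tracks = []
    for p in frame1:
        disp = frame2 - p
        mag = np.linalg.norm(disp, axis=1)          # rule 1 input
        dev = np.linalg.norm(disp - prior, axis=1)  # rule 2 input
        score = np.minimum([tri(m, 0.0, 2 * max_disp) for m in mag],
                           [tri(d, 0.0, max_disp) for d in dev])
        j = int(np.argmax(score))
        if score[j] > 0:
            tracks.append((tuple(p), tuple(frame2[j])))
    return tracks
```

Using min as the fuzzy AND means a candidate must satisfy both rules reasonably well, which is what lets two frames suffice where plain nearest-neighbor pairing would need a longer image sequence to disambiguate.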
A new method for tracking organ motion on diagnostic ultrasound images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubota, Yoshiki, E-mail: y-kubota@gunma-u.ac.jp; Matsumura, Akihiko, E-mail: matchan.akihiko@gunma-u.ac.jp; Fukahori, Mai, E-mail: fukahori@nirs.go.jp
2014-09-15
Purpose: Respiratory-gated irradiation is effective in reducing the margins of a target in the case of abdominal organs, such as the liver, that change their position as a result of respiratory motion. However, existing technologies are incapable of directly measuring organ motion in real-time during radiation beam delivery. Hence, the authors proposed a novel quantitative organ motion tracking method involving the use of diagnostic ultrasound images; it is noninvasive and does not entail radiation exposure. In the present study, the authors have prospectively evaluated this proposed method. Methods: The method involved real-time processing of clinical ultrasound imaging data rather than organ monitoring; it comprised a three-dimensional ultrasound device, a respiratory sensing system, and two PCs for data storage and analysis. The study was designed to evaluate the effectiveness of the proposed method by tracking the gallbladder in one subject and a liver vein in another subject. To track a moving target organ, the method involved the control of a region of interest (ROI) that delineated the target. A tracking algorithm was used to control the ROI, and a large number of feature points and an error correction algorithm were used to achieve long-term tracking of the target. Tracking accuracy was assessed in terms of how well the ROI matched the center of the target. Results: The effectiveness of using a large number of feature points and the error correction algorithm in the proposed method was verified by comparing it with two simple tracking methods. The ROI could capture the center of the target for about 5 min in a cross-sectional image with changing position. Indeed, using the proposed method, it was possible to accurately track a target with a center deviation of 1.54 ± 0.9 mm. The computing time for one frame image using our proposed method was 8 ms.
It is expected that it would be possible to track any soft-tissue organ or tumor with large deformations and changing cross-sectional position using this method. Conclusions: The proposed method achieved real-time processing and continuous tracking of the target organ for about 5 min. It is expected that our method will enable more accurate radiation treatment than is the case using indirect observational methods, such as the respiratory sensor method, because of direct visualization of the tumor. Results show that this tracking system facilitates safe treatment in clinical practice.
NASA Astrophysics Data System (ADS)
Ma, Kevin C.; Forsyth, Sydney; Amezcua, Lilyana; Liu, Brent J.
2017-03-01
We have designed and developed a multiple sclerosis (MS) eFolder system for patient data storage, image viewing, and automatic lesion quantification to allow patient tracking. The web-based system aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient treatment and data analysis. The system quantifies lesion volumes and identifies and registers lesion locations to track shifts in lesion volume and count in a longitudinal study. We aim to evaluate the two most important features of the system, data mining and longitudinal lesion tracking, to demonstrate the MS eFolder's capability in improving clinical workflow efficiency and outcome analysis for research. In order to evaluate data mining capabilities, we collected radiological and neurological data from 72 patients, 36 Caucasian and 36 Hispanic, matched by gender, disease duration, and age. Data analysis on those patients based on ethnicity was performed, and the analysis results are displayed by the system's web-based user interface. The data mining module is able to successfully separate Hispanic and Caucasian patients and compare their disease profiles. For longitudinal lesion tracking, we collected 4 longitudinal cases and simulated different lesion growths over the following year. As a result, the eFolder is able to detect changes in lesion volume and identify the lesions with the greatest change. The data mining and lesion tracking evaluation results show the high potential of the eFolder's usefulness in patient care and informatics research for multiple sclerosis.
Seslija, Petar; Teeter, Matthew G; Yuan, Xunhua; Naudie, Douglas D R; Bourne, Robert B; Macdonald, Steven J; Peters, Terry M; Holdsworth, David W
2012-10-01
The ability to accurately measure joint kinematics is an important tool in studying both normal joint function and pathologies associated with injury and disease. The purpose of this study is to evaluate the efficacy, accuracy, precision, and clinical safety of measuring 3D joint motion using a conventional flat-panel radiography system prior to its application in an in vivo study. An automated, image-based tracking algorithm was implemented to measure the three-dimensional pose of a sparse object from a two-dimensional radiographic projection. The algorithm was tested to determine its efficiency and failure rate, defined as the number of image frames where automated tracking failed, or required user intervention. The accuracy and precision of measuring three-dimensional motion were assessed using a robotic controlled, tibiofemoral knee phantom programmed to mimic a subject with a total knee replacement performing a stair ascent activity. Accuracy was assessed by comparing the measurements of the single-plane radiographic tracking technique to those of an optical tracking system, and quantified by the measurement discrepancy between the two systems using the Bland-Altman technique. Precision was assessed through a series of repeated measurements of the tibiofemoral kinematics, and was quantified using the across-trial deviations of the repeated kinematic measurements. The safety of the imaging procedure was assessed by measuring the effective dose of ionizing radiation associated with the x-ray exposures, and analyzing its relative risk to a human subject. The automated tracking algorithm displayed a failure rate of 2% and achieved an average computational throughput of 8 image frames/s. Mean differences between the radiographic and optical measurements for translations and rotations were less than 0.08 mm and 0.07° in-plane, and 0.24 mm and 0.6° out-of-plane. 
The repeatability of kinematics measurements performed using the radiographic tracking technique was better than ±0.09 mm and 0.12° in-plane, and ±0.70 mm and ±0.07° out-of-plane. The effective dose associated with the imaging protocol used was 15 μSv for 10 s of radiographic cine acquisition. This study demonstrates the ability to accurately measure knee-joint kinematics using a single-plane radiographic measurement technique. The measurement technique can be easily implemented at most clinical centers equipped with a modern-day radiographic x-ray system. The dose of ionizing radiation associated with the image acquisition represents a minimal risk to any subjects undergoing the examination.
A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging.
Jiang, J; Hall, T J
2007-07-07
Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is column-based (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to row-based (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, reducing computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames/s) that exceed our previous methods.
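The neighbor-guided reduced search can be sketched in one dimension: instead of scanning all lags, the estimator cross-correlates only lags near the adjacent column's displacement estimate. The Python/NumPy fragment below is a simplified illustration under our own assumptions (the paper's algorithm works on 2D RF data in C++; the function name, window size, and search half-width here are arbitrary choices of ours):

```python
import numpy as np

def best_lag(pre, post, center, guide, search=2, window=8):
    """Estimate the displacement at sample `center` of an A-line by
    cross-correlating a window of `pre` against `post`, searching only
    lags near `guide` (the neighbouring column's estimate) -- the
    reduced search region that cuts the computational cost."""
    w = pre[center - window // 2: center + window // 2]
    best_c, best = -2.0, guide
    for lag in range(guide - search, guide + search + 1):
        lo = center + lag - window // 2
        seg = post[lo: lo + window]
        if lo < 0 or len(seg) != len(w):
            continue  # skip lags that run off the data
        c = np.corrcoef(w, seg)[0, 1]
        if c > best_c:
            best_c, best = c, lag
    return best
```

Because each column's estimate seeds its neighbor's search, estimation can propagate outward from the ROI center in independent left and right halves, which is what makes the algorithm naturally parallelizable.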
Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images
NASA Astrophysics Data System (ADS)
Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian
2015-12-01
With the broad attention of countries to sea transportation and trade safety, the requirements for efficiency and accuracy of moving ship tracking are becoming higher. We therefore propose a systematic design for on-board moving ship tracking based on an FPGA, which uses an Adaptive Inter-Frame Difference (AIFD) method to track ships moving at different speeds. Because the Frame Difference (FD) method is simple but computationally heavy, it is well suited to a parallel FPGA implementation. However, the Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship appears very small (depicted by only dozens of pixels) and moves slowly. With fixed FIs, the accuracy of FD for moving ship tracking is unsatisfactory and the computation is highly redundant, so we adapt FD using adaptive extraction of key frames for moving ship tracking. An FPGA development board of the Xilinx Kintex-7 series is used for simulation. The experiments show that, compared with the traditional FD method, the proposed method achieves higher moving ship tracking accuracy and can meet the requirement of real-time tracking at high image resolution.
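The adaptive-interval idea can be sketched simply: difference the current frame against progressively older frames until the change mask contains enough pixels to localize the slowly moving ship. This Python/NumPy fragment is our own minimal illustration of that principle, not the paper's FPGA design (function names, the pixel threshold, and the toy 2x2 "ship" below are all assumptions):

```python
import numpy as np

def frame_difference(f1, f2, thresh=0.1):
    """Binary motion mask from the absolute inter-frame difference."""
    return np.abs(f2.astype(float) - f1.astype(float)) > thresh

def adaptive_interval(frames, t, base=1, min_pixels=4, max_interval=8):
    """Grow the frame interval until the difference mask holds enough
    changed pixels: slow, small targets need longer intervals, while
    fast targets are caught at the base interval."""
    k = base
    while k < max_interval:
        if frame_difference(frames[t - k], frames[t]).sum() >= min_pixels:
            break
        k += 1
    return k
```

Keying the interval to observed change avoids both the redundant computation of differencing near-identical frames and the missed detections a fixed short interval produces for a ship covering only dozens of pixels.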
Nakamura, Mitsuhiro; Sawada, Akira; Mukumoto, Nobutaka; Takahashi, Kunio; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro
2013-09-06
The Vero4DRT (MHI-TM2000) is capable of performing X-ray image-based tracking (X-ray Tracking) that directly tracks the target or fiducial markers under continuous kV X-ray imaging. Previously, we have shown that irregular respiratory patterns increased X-ray Tracking errors. Thus, we assumed that audio instruction, which generally improves the periodicity of respiration, should reduce tracking errors. The purpose of this study was to assess the effect of audio instruction on X-ray Tracking errors. Anterior-posterior abdominal skin-surface displacements obtained from ten lung cancer patients under free breathing and simple audio instruction were used as an alternative to tumor motion in the superior-inferior direction. First, a sequential predictive model based on the Levinson-Durbin algorithm was created to estimate the future three-dimensional (3D) target position under continuous kV X-ray imaging while moving a steel ball target of 9.5 mm in diameter. After creating the predictive model, the future 3D target position was sequentially calculated from the current and past 3D target positions based on the predictive model every 70 ms under continuous kV X-ray imaging. Simultaneously, the system controller of the Vero4DRT calculated the corresponding pan and tilt rotational angles of the gimbaled X-ray head, which then adjusted its orientation to the target. The calculated and current rotational angles of the gimbaled X-ray head were recorded every 5 ms. The target position measured by the laser displacement gauge was synchronously recorded every 10 ms. Total tracking system errors (ET) were compared between free breathing and audio instruction. Audio instruction significantly improved breathing regularity (p < 0.01). The mean ± standard deviation of the 95th percentile of ET (E95T) was 1.7 ± 0.5 mm (range: 1.1-2.6 mm) under free breathing (E95T,FB) and 1.9 ± 0.5 mm (range: 1.2-2.7 mm) under audio instruction (E95T,AI).
E95T,AI was larger than E95T,FB for five patients; no significant difference was found between E95T,FB and E95T,AI (p = 0.21). Correlation analysis revealed that rapid respiratory velocity significantly increased E95T. Although audio instruction improved breathing regularity, it also increased the respiratory velocity, and therefore did not necessarily reduce tracking errors.
Adaptive coded aperture imaging in the infrared: towards a practical implementation
NASA Astrophysics Data System (ADS)
Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley
2008-08-01
An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.
PLUS: open-source toolkit for ultrasound-guided intervention systems.
Lasso, Andras; Heffter, Tamas; Rankin, Adam; Pinter, Csaba; Ungi, Tamas; Fichtinger, Gabor
2014-10-01
A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform, PLUS (Public software Library for Ultrasound), to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and it aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under BSD license and can be downloaded from http://www.plustoolkit.org.
Development of a real time multiple target, multi camera tracker for civil security applications
NASA Astrophysics Data System (ADS)
Åkerlund, Hans
2009-09-01
A surveillance system has been developed that can use multiple TV-cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS Networked Intelligent Video Surveillance. Persons in the images are tracked and displayed on a 3D map of the surveyed area.
Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections
NASA Astrophysics Data System (ADS)
Ziock, K. P.; Boehnen, C. B.; Ernst, J. M.; Fabris, L.; Hayward, J. P.; Karnowski, T. P.; Paquit, V. C.; Patlolla, D. R.; Trombino, D. G.
2016-01-01
Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. The complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.
The Trans-Visible Navigator: A See-Through Neuronavigation System Using Augmented Reality.
Watanabe, Eiju; Satoh, Makoto; Konno, Takehiko; Hirai, Masahiro; Yamaguchi, Takashi
2016-03-01
The neuronavigator has become indispensable for brain surgery and works in the manner of point-to-point navigation. Because the positional information is indicated on a personal computer (PC) monitor, surgeons are required to mentally rotate the magnetic resonance imaging/computed tomography scans to match the surgical field. In addition, they must frequently alternate their gaze between the surgical field and the PC monitor. To overcome these difficulties, we developed an augmented reality-based navigation system with whole-operation-room tracking. A tablet PC is used for visualization. The patient's head is captured by the back-face camera of the tablet. Three-dimensional images of intracranial structures are extracted from magnetic resonance imaging/computed tomography and are superimposed on the video image of the head. When viewed from various directions around the head, intracranial structures are displayed with corresponding angles as viewed from the camera direction, thus giving the surgeon the sensation of seeing through the head. Whole-operation-room tracking is realized using a VICON tracking system with 6 cameras. A phantom study showed a spatial resolution of about 1 mm. The present system was evaluated in 6 patients who underwent tumor resection surgery, and we showed that the system is useful for planning skin incisions as well as craniotomy and the localization of superficial tumors. The main advantage of the present system is that it achieves volumetric navigation in contrast to conventional point-to-point navigation. It extends augmented reality images directly onto real surgical images, thus helping the surgeon to integrate these 2 dimensions intuitively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging
Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.
2013-01-01
A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148
Hossack, John A; Sumanaweera, Thilaka S; Napel, Sandy; Ha, Jun S
2002-08-01
An approach for acquiring dimensionally accurate three-dimensional (3-D) ultrasound data from multiple 2-D image planes is presented. This is based on the use of a modified linear-phased array comprising a central imaging array that acquires multiple, essentially parallel, 2-D slices as the transducer is translated over the tissue of interest. Small, perpendicularly oriented, tracking arrays are integrally mounted on each end of the imaging transducer. As the transducer is translated in an elevational direction with respect to the central imaging array, the images obtained by the tracking arrays remain largely coplanar. The motion between successive tracking images is determined using a minimum sum of absolute difference (MSAD) image matching technique with subpixel matching resolution. An initial phantom scanning-based test of a prototype 8 MHz array indicates that linear dimensional accuracy of 4.6% (2 sigma) is achievable. This result compares favorably with those obtained using an assumed average velocity [31.5% (2 sigma) accuracy] and using an approach based on measuring image-to-image decorrelation [8.4% (2 sigma) accuracy]. The prototype array and imaging system were also tested in a clinical environment, and early results suggest that the approach has the potential to enable a low cost, rapid, screening method for detecting carotid artery stenosis. The average time for performing a screening test for carotid stenosis was reduced from an average of 45 minutes using 2-D duplex Doppler to 12 minutes using the new 3-D scanning approach.
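The minimum sum of absolute difference (MSAD) matching with subpixel resolution described above can be illustrated by an exhaustive SAD search over a shift window, refined by fitting a parabola through the minimum and its neighbors along each axis. This is a hedged sketch only: the window size, search range, and the synthetic speckle-like test frame are assumptions, not the authors' parameters.

```python
import numpy as np

def msad_shift(ref, cur, search=5):
    """Estimate the (dy, dx) shift of `cur` relative to `ref` by minimising
    the sum of absolute differences over a +/-`search` pixel window, then
    refine to subpixel precision with a 1-D parabolic fit per axis."""
    h, w = ref.shape
    s = search
    core = ref[s:h - s, s:w - s]  # interior block compared at every shift
    sad = np.empty((2 * s + 1, 2 * s + 1))
    for dy in range(-s, s + 1):
        for dx in range(-s, s + 1):
            win = cur[s + dy:h - s + dy, s + dx:w - s + dx]
            sad[dy + s, dx + s] = np.abs(core - win).sum()
    iy, ix = np.unravel_index(np.argmin(sad), sad.shape)

    def parabolic(v, i):
        # vertex of the parabola through (i-1, i, i+1); falls back to i
        if 0 < i < len(v) - 1:
            denom = v[i - 1] - 2 * v[i] + v[i + 1]
            if denom != 0:
                return i + 0.5 * (v[i - 1] - v[i + 1]) / denom
        return float(i)

    return parabolic(sad[:, ix], iy) - s, parabolic(sad[iy, :], ix) - s

# synthetic speckle-like frame shifted by (2, 3) pixels
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(np.roll(frame, 2, axis=0), 3, axis=1)
dy, dx = msad_shift(frame, shifted)
```

In the paper's setting, successive tracking-array images play the roles of `ref` and `cur`, and the accumulated shifts give the elevational translation of the probe.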
Li, Bin; Fu, Hong; Wen, Desheng; Lo, WaiLun
2018-05-19
Eye tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications. Various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system with accurate gaze-tracking performance and a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system 'Etracker' with a near-eye viewing device for human gaze tracking. We proposed a combined gaze-tracking algorithm. In this algorithm, the convolutional neural network is used to remove blinking images and predict coarse gaze position, and then a geometric model is defined for accurate human gaze tracking. Moreover, we proposed using the mean value of gazes to resolve pupil center changes caused by nystagmus in calibration algorithms, so that an individual user only needs to calibrate it the first time, which makes our system more convenient. The experiments on gaze data from 26 participants show that the eye center detection accuracy is 98% and Etracker can provide an average gaze accuracy of 0.53° at a rate of 30-60 Hz.
Ma, Kevin C; Fernandez, James R; Amezcua, Lilyana; Lerner, Alex; Shiroishi, Mark S; Liu, Brent J
2015-12-01
MRI has been used to identify multiple sclerosis (MS) lesions in brain and spinal cord visually. Integrating patient information into an electronic patient record system has become key for modern patient care in medicine in recent years. Clinically, it is also necessary to track patients' progress in longitudinal studies, in order to provide comprehensive understanding of disease progression and response to treatment. As the amount of required data increases, there exists a need for an efficient systematic solution to store and analyze MS patient data, disease profiles, and disease tracking for both clinical and research purposes. An imaging informatics based system, called MS eFolder, has been developed as an integrated patient record system for data storage and analysis of MS patients. The eFolder system, with a DICOM-based database, includes a module for lesion contouring by radiologists, an MS lesion quantification tool to quantify MS lesion volume in 3D, brain parenchyma fraction analysis, and provides quantitative analysis and tracking of volume changes in longitudinal studies. Patient data, including MR images, have been collected retrospectively at University of Southern California Medical Center (USC) and Los Angeles County Hospital (LAC). The MS eFolder utilizes web-based components, such as browser-based graphical user interface (GUI) and web-based database. The eFolder database stores patient clinical data (demographics, MS disease history, family history, etc.), MR imaging-related data found in DICOM headers, and lesion quantification results. Lesion quantification results are derived from radiologists' contours on brain MRI studies and quantified into 3-dimensional volumes and locations. Quantified results of white matter lesions are integrated into a structured report based on DICOM-SR protocol and templates. The user interface displays patient clinical information, original MR images, and structured reports of quantified results.
The GUI also includes a data mining tool to handle unique search queries for MS. System workflow and dataflow steps have been designed based on the IHE post-processing workflow profile, including workflow process tracking, MS lesion contouring and quantification of MR images at a post-processing workstation, and storage of quantitative results as DICOM-SR in a DICOM-based storage system. The web-based GUI is designed to display zero-footprint DICOM web-accessible data objects (WADO) and the SR objects. The MS eFolder system has been designed and developed as an integrated data storage and mining solution in both clinical and research environments, while providing unique features, such as quantitative lesion analysis and disease tracking over a longitudinal study. A comprehensive image and clinical data integrated database provided by MS eFolder provides a platform for treatment assessment, outcomes analysis and decision-support. The proposed system serves as a platform for future quantitative analysis derived automatically from CAD algorithms that can also be integrated within the system for individual disease tracking and future MS-related research. Ultimately the eFolder provides a decision-support infrastructure that can eventually be used as add-on value to the overall electronic medical record. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chamma, Emilie; Qiu, Jimmy; Lindvere-Teene, Liis; Blackmore, Kristina M.; Majeed, Safa; Weersink, Robert; Dickie, Colleen I.; Griffin, Anthony M.; Wunder, Jay S.; Ferguson, Peter C.; DaCosta, Ralph S.
2015-07-01
Standard clinical management of extremity soft tissue sarcomas includes surgery with radiation therapy. Wound complications (WCs) arising from treatment may occur due to bacterial infection and tissue breakdown. The ability to detect changes in these parameters during treatment may lead to earlier interventions that mitigate WCs. We describe the use of a new system composed of an autofluorescence imaging device and an optical three-dimensional tracking system to detect and coregister the presence of bacteria with radiation doses. The imaging device visualized erythema using white light and detected bacterial autofluorescence using 405-nm excitation light. Its position was tracked relative to the patient using IR reflective spheres and registration to the computed tomography coordinates. Image coregistration software was developed to spatially overlay radiation treatment plans and dose distributions on the white light and autofluorescence images of the surgical site. We describe the technology, its use in the operating room, and standard operating procedures, as well as demonstrate technical feasibility and safety intraoperatively. This new clinical tool may help identify patients at greater risk of developing WCs and investigate correlations between radiation dose, skin response, and changes in bacterial load as biomarkers associated with WCs.
A Novel Ship-Tracking Method for GF-4 Satellite Sequential Images.
Yao, Libo; Liu, Yong; He, You
2018-06-22
The geostationary remote sensing satellite has the capability of wide scanning, persistent observation and operational response, and has tremendous potential for maritime target surveillance. The GF-4 satellite is the first geostationary orbit (GEO) optical remote sensing satellite with medium resolution in China. In this paper, a novel ship-tracking method in GF-4 satellite sequential imagery is proposed. The algorithm has three stages. First, a local visual saliency map based on local peak signal-to-noise ratio (PSNR) is used to detect ships in a single frame of GF-4 satellite sequential images. Second, the accurate positioning of each potential target is achieved by a dynamic correction using the rational polynomial coefficients (RPCs) and automatic identification system (AIS) data of ships. Finally, an improved multiple hypotheses tracking (MHT) algorithm with amplitude information is used to track ships by further removing the false targets, and to estimate ships’ motion parameters. The algorithm has been tested using GF-4 sequential images and AIS data. The results of the experiment demonstrate that the algorithm achieves good tracking performance in GF-4 satellite sequential images and estimates the motion information of ships accurately.
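The first stage above, a saliency map that scores each pixel against its local background, can be sketched as a simple contrast-to-clutter ratio: a pixel is salient when it stands many local standard deviations above the mean of its surrounding box. The box size, threshold, and toy sea image below are hypothetical illustrations, not the paper's actual local-PSNR formulation.

```python
import numpy as np

def local_saliency_map(img, box=9):
    """Score each pixel by (value - local mean) / local std over a
    surrounding box; bright compact targets stand out from sea clutter.
    A rough stand-in for a local-PSNR saliency map."""
    h, w = img.shape
    r = box // 2
    pad = np.pad(img, r, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + box, x:x + box]
            out[y, x] = (img[y, x] - win.mean()) / (win.std() + 1e-6)
    return out

# toy sea image with one bright ship-like point target
rng = np.random.default_rng(1)
sea = rng.normal(0.2, 0.02, (40, 40))
sea[20, 20] += 1.0
sal = local_saliency_map(sea)
detections = np.argwhere(sal > 5.0)  # candidate targets for later stages
```

The candidates found here would then be refined geometrically (RPC/AIS correction) and filtered temporally (MHT) in the later stages of such a pipeline.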
White matter fiber tracking computation based on diffusion tensor imaging for clinical applications.
Dellani, Paulo R; Glaser, Martin; Wille, Paulo R; Vucurevic, Goran; Stadie, Axel; Bauermann, Thomas; Tropine, Andrei; Perneczky, Axel; von Wangenheim, Aldo; Stoeter, Peter
2007-03-01
Fiber tracking allows the in vivo reconstruction of human brain white matter fiber trajectories based on magnetic resonance diffusion tensor imaging (MR-DTI), but its application in the clinical routine is still in its infancy. In this study, we present a new software for fiber tracking, developed on top of a general-purpose DICOM (digital imaging and communications in medicine) framework, which can be easily integrated into existing picture archiving and communication system (PACS) of radiological institutions. Images combining anatomical information and the localization of different fiber tract trajectories can be encoded and exported in DICOM and Analyze formats, which are valuable resources in the clinical applications of this method. Fiber tracking was implemented based on existing line propagation algorithms, but it includes a heuristic for fiber crossings in the case of disk-shaped diffusion tensors. We successfully performed fiber tracking on MR-DTI data sets from 26 patients with different types of brain lesions affecting the corticospinal tracts. In all cases, the trajectories of the central spinal tract (pyramidal tract) were reconstructed and could be applied at the planning phase of the surgery as well as in intraoperative neuronavigation.
Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J.; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng
2017-01-01
Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the “integrated image” on semi-transparent glass. Via the registration process, the integrated image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications. PMID:28198442
Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C
2018-05-01
Nanoparticle tracking analysis (NTA) obtains particle size by analysis of particle diffusion through a time series of micrographs and particle count by a count of imaged particles. The number of observed particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test if image attributes, features captured within the image itself, can provide measurable guidelines to assess the accuracy for particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software analysis, it was found that a median pixel area, 4 pixels², results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to using a particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count. Published by Elsevier Inc.
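The image attribute used above, the median area of illuminated pixel regions, amounts to labeling connected blobs in a thresholded micrograph and taking the median of their pixel counts. A minimal sketch follows; the flood-fill labeling, 4-connectivity, and toy binary "micrograph" are assumptions for illustration, not the open-source tool the authors used.

```python
import numpy as np

def blob_areas(mask):
    """Label 4-connected blobs in a boolean mask with an iterative flood
    fill and return the area (pixel count) of each blob."""
    mask = mask.copy()  # consumed as blobs are visited
    h, w = mask.shape
    areas = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx]:
                continue
            stack, area = [(sy, sx)], 0
            mask[sy, sx] = False
            while stack:
                y, x = stack.pop()
                area += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                        mask[ny, nx] = False
                        stack.append((ny, nx))
            areas.append(area)
    return areas

# toy thresholded micrograph: three particles imaged as bright spots
img = np.zeros((20, 20), dtype=bool)
img[2:4, 2:4] = True      # 4-pixel spot
img[10:12, 5:7] = True    # 4-pixel spot
img[15:18, 15:18] = True  # 9-pixel spot
median_area = float(np.median(blob_areas(img)))
```

A median near the paper's reported 4 pixels² would, by this criterion, indicate camera settings suitable for counting; a much larger or broader area distribution would flag saturation or polydispersity.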
X-ray phase contrast tomography by tracking near field speckle
Wang, Hongchang; Berujon, Sebastien; Herzen, Julia; Atwood, Robert; Laundy, David; Hipp, Alexander; Sawhney, Kawal
2015-01-01
X-ray imaging techniques that capture variations in the x-ray phase can yield higher contrast images with lower x-ray dose than is possible with conventional absorption radiography. However, the extraction of phase information is often more difficult than the extraction of absorption information and requires a more sophisticated experimental arrangement. We here report a method for three-dimensional (3D) X-ray phase contrast computed tomography (CT) which gives quantitative volumetric information on the real part of the refractive index. The method is based on the recently developed X-ray speckle tracking technique in which the displacement of near field speckle is tracked using a digital image correlation algorithm. In addition to differential phase contrast projection images, the method allows the dark-field images to be simultaneously extracted. After reconstruction, compared to conventional absorption CT images, the 3D phase CT images show greatly enhanced contrast. This new imaging method has advantages compared to other X-ray imaging methods in simplicity of experimental arrangement, speed of measurement and relative insensitivity to beam movements. These features make the technique an attractive candidate for material imaging such as in-vivo imaging of biological systems containing soft tissue. PMID:25735237
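The core operation in the speckle tracking described above is measuring how a speckle pattern displaces between a reference and a sample image. A common way to do this, sketched here with a single global FFT cross-correlation rather than the windowed digital image correlation the method actually uses, is to locate the correlation peak between the two images; all names and the synthetic speckle data are illustrative.

```python
import numpy as np

def speckle_shift(ref, sample):
    """Estimate the integer (dy, dx) displacement of a speckle pattern via
    FFT-based circular cross-correlation: the peak of the correlation
    surface sits at the shift of `sample` relative to `ref`."""
    r = ref - ref.mean()
    s = sample - sample.mean()
    corr = np.fft.ifft2(np.fft.fft2(r).conj() * np.fft.fft2(s)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # map wrapped indices to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# synthetic near-field speckle, displaced by (4, -7) pixels
rng = np.random.default_rng(2)
speckle = rng.random((128, 128))
moved = np.roll(np.roll(speckle, 4, axis=0), -7, axis=1)
shift = speckle_shift(speckle, moved)
```

In the imaging method itself this is done per subwindow, and the resulting displacement field is proportional to the local gradient of the X-ray phase, which is then integrated and reconstructed tomographically.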
Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space
NASA Astrophysics Data System (ADS)
Jun, Chen; Wenjun, Hou; Qing, Sheng
Following a study of image segmentation, the CamShift target-tracking algorithm, and a stereo vision model of space, an improved algorithm based on frame differencing and a new spatial point-positioning model are proposed, and a binocular visual motion-tracking system was constructed to verify the improved algorithm and the new model. The problems of detecting and tracking the spatial position and pose of the hand have been solved.
Automated tracking of a figure skater by using PTZ cameras
NASA Astrophysics Data System (ADS)
Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi
2009-08-01
In this paper, a system for automated real-time tracking of a figure skater moving on an ice rink by using PTZ cameras is presented. This system is intended for support in training of skating, for example, as a tool for recording and evaluation of his/her motion performances. In the processing procedure of the system, an ice rink region is first extracted from a video image by a region-growing method, then one of the hole components in the obtained rink region is extracted as a skater region. If there exists no hole component, a skater region is estimated from horizontal and vertical intensity projections of the rink region. Each camera is automatically panned and/or tilted so as to keep the skater region on almost the center of the image, and also zoomed so as to keep the height of the skater region within an appropriate range. In the experiments using 5 practical video images of skating, it was shown that the extraction rate of the skater region was almost 90%, and tracking with camera control was successfully done for almost all of the cases used here.
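The camera control described above, panning/tilting toward the skater region's centroid and zooming to keep its height in range, can be sketched as a simple proportional update. The gains, thresholds, and units below are hypothetical placeholders, not the paper's controller.

```python
def ptz_update(bbox, frame_size, pan, tilt, zoom,
               gain=0.05, h_lo=0.25, h_hi=0.40):
    """Hypothetical proportional PTZ update: nudge pan/tilt toward the
    centroid of the skater bounding box (x, y, w, h) and step the zoom so
    the box height stays within a target fraction of the frame height."""
    x, y, w, h = bbox
    fw, fh = frame_size
    cx, cy = x + w / 2, y + h / 2
    pan += gain * (cx - fw / 2)   # positive error -> pan right
    tilt += gain * (cy - fh / 2)  # positive error -> tilt down
    frac = h / fh
    if frac < h_lo:
        zoom *= 1.05              # skater too small: zoom in
    elif frac > h_hi:
        zoom /= 1.05              # skater too large: zoom out
    return pan, tilt, zoom

# skater left of center and small in a 1280x720 frame
pan, tilt, zoom = ptz_update((500, 300, 60, 120), (1280, 720), 0.0, 0.0, 1.0)
```

Each video frame would feed the freshly extracted skater region back into such an update, closing the tracking loop.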
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ipsen, S; Bruder, R; Schweikard, A
Purpose: While MLC tracking has been successfully used for motion compensation of moving targets, current real-time target localization methods rely on correlation models with x-ray imaging or implanted electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging yields volumetric data in real-time (4D) without ionizing radiation. We report the first results of online 4D ultrasound-guided MLC tracking in a phantom. Methods: A real-time tracking framework was installed on a 4D ultrasound station (Vivid7 dimension, GE) and used to detect a 2mm spherical lead marker inside a water tank. The volumetric frame rate was 21.3Hz (47ms). The marker was rigidly attached to a motion stage programmed to reproduce nine tumor trajectories (five prostate, four lung). The 3D marker position from ultrasound was used for real-time MLC aperture adaption. The tracking system latency was measured and compensated by prediction for lung trajectories. To measure geometric accuracy, anterior and lateral conformal fields with 10cm circular aperture were delivered for each trajectory. The tracking error was measured as the difference between marker position and MLC aperture in continuous portal imaging. For dosimetric evaluation, 358° VMAT fields were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using a 3%/3 mm γ-test. Results: The tracking system latency was 170ms. The mean root-mean-square tracking error was 1.01mm (0.75mm prostate, 1.33mm lung). Tracking reduced the mean γ-failure rate from 13.9% to 4.6% for prostate and from 21.8% to 0.6% for lung with high-modulation VMAT plans and from 5% (prostate) and 18% (lung) to 0% with low modulation.
Conclusion: Real-time ultrasound tracking was successfully integrated with MLC tracking for the first time and showed similar accuracy and latency as other methods while holding the potential to measure target motion non-invasively. SI was supported by the Graduate School for Computing in Medicine and Life Science, German Excellence Initiative [grant DFG GSC 235/1].
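The 3%/3 mm γ-test used for the dosimetric comparison above can be sketched in one dimension. This is a minimal illustration of the gamma index, assuming a global dose criterion and a shared grid for reference and measurement; clinical gamma software handles 2D/3D grids, interpolation, and dose thresholds.

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma analysis (global 3%/3 mm by default).

    ref, meas: dose arrays on the same grid; positions: coordinates in mm.
    Returns the fraction of reference points with gamma <= 1.
    """
    ref = np.asarray(ref, float)
    meas = np.asarray(meas, float)
    pos = np.asarray(positions, float)
    dmax = ref.max()                             # global normalization
    gammas = []
    for r, p in zip(ref, pos):
        dd = (meas - r) / (dose_tol * dmax)      # dose-difference term
        dx = (pos - p) / dist_tol                # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return float(np.mean(np.array(gammas) <= 1.0))
```

Identical profiles pass everywhere; a point with a large dose error and no nearby agreeing point fails.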
Video guidance, landing, and imaging systems
NASA Technical Reports Server (NTRS)
Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Rice, R. B.; Moog, R. D.
1975-01-01
The adaptive potential of video guidance technology for earth orbital and interplanetary missions was explored. The application of video acquisition, pointing, tracking, and navigation technology was considered for three primary missions: planetary landing, earth resources satellite, and spacecraft rendezvous and docking. It was found that an imaging system can be mechanized to provide a spacecraft or satellite with a considerable amount of adaptability with respect to its environment. It also provides a level of autonomy essential to many future missions and enhances their data gathering ability. The feasibility of an autonomous video guidance system capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was successfully demonstrated in the laboratory. The techniques developed for acquisition, pointing, and tracking show promise for recognizing and tracking coastlines, rivers, and other constituents of interest. Routines were written and checked for rendezvous, docking, and station-keeping functions.
Prototype of a single probe Compton camera for laparoscopic surgery
NASA Astrophysics Data System (ADS)
Koyama, A.; Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Sakuma, I.
2017-02-01
Image-guided surgery (IGS) is performed using a real-time surgery navigation system with three-dimensional (3D) position tracking of surgical tools. IGS is fast becoming an important technology for high-precision laparoscopic surgeries, in which the field of view is limited. In particular, recent developments in intraoperative imaging using radioactive biomarkers may enable advanced IGS for supporting malignant tumor removal surgery. In this light, we developed a novel intraoperative probe comprising a Compton camera and a position tracking system for performing real-time radiation-guided surgery. A prototype probe consisting of Ce:Gd3Al2Ga3O12 (GAGG) crystals and silicon photomultipliers was fabricated, and its reconstruction algorithm was optimized to enable real-time position tracking. The results demonstrated the visualization capability of the radiation source, with an angular resolution measure (ARM) of ∼22.1°, and the effectiveness of the proposed system.
Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish.
Kim, Dal Hyung; Kim, Jungsoo; Marques, João C; Grama, Abhinav; Hildebrand, David G C; Gu, Wenchao; Li, Jennifer M; Robson, Drew N
2017-11-01
Calcium imaging with cellular resolution typically requires an animal to be tethered under a microscope, which substantially restricts the range of behaviors that can be studied. To expand the behavioral repertoire amenable to imaging, we have developed a tracking microscope that enables whole-brain calcium imaging with cellular resolution in freely swimming larval zebrafish. This microscope uses infrared imaging to track a target animal in a behavior arena. On the basis of the predicted trajectory of the animal, we applied optimal control theory to a motorized stage system to cancel brain motion in three dimensions. We combined this motion-cancellation system with differential illumination focal filtering, a variant of HiLo microscopy, which enabled us to image the brain of a freely swimming larval zebrafish for more than an hour. This work expands the repertoire of natural behaviors that can be studied with cellular-resolution calcium imaging to potentially include spatial navigation, social behavior, feeding and reward.
B-spline based image tracking by detection
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman
2016-05-01
Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. Standard background subtraction is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-spline based image tracking method is implemented. The method models the background and foreground using B-splines, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
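The background-subtraction step that the B-spline model refines can be sketched with a plain median background model; the B-spline background/foreground modeling itself is not reproduced here.

```python
import numpy as np

def detect_by_background_subtraction(frames, thresh=30.0):
    """Median-background subtraction followed by thresholding.

    frames: (T, H, W) grayscale stack. Returns a boolean foreground mask
    for the last frame. A plain median model stands in for the B-spline
    background/foreground model of the paper.
    """
    stack = np.asarray(frames, float)
    background = np.median(stack[:-1], axis=0)   # model from earlier frames
    return np.abs(stack[-1] - background) > thresh
```

A detector of this kind feeds the tracking-by-detection stage with candidate target regions.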
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-01-01
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, a step called star mapping. Software simulation and a night-sky experiment were performed to validate the efficiency and reliability of the proposed method. PMID:28825684
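The predict/update cycle of an extended Kalman filter, which the paper applies to the attitude quaternion and angular velocity, can be sketched generically. The demonstration below uses a toy two-state model (angle and rate) rather than the paper's quaternion state; the models f, h and their Jacobians F, H are supplied by the caller.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P: prior state and covariance; z: measurement.
    f/F: process model and its Jacobian; h/H: measurement model and Jacobian.
    Q, R: process and measurement noise covariances.
    """
    # Predict
    x_pred = f(x)
    Fk = F(x)
    P_pred = Fk @ P @ Fk.T + Q
    # Update
    Hk = H(x_pred)
    y = z - h(x_pred)                         # innovation
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_new)) - K @ Hk) @ P_pred
    return x_new, P_new
```

Feeding the filter consistent angle measurements lets both the angle and the rate estimates converge.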
Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Schneider, Armin; Feussner, Hubertus; Navabi, Nassir
2007-01-01
In abdominal surgery, a laparoscopic ultrasound transducer is commonly used to detect lesions such as metastases. The determination and visualization of the position and orientation of its flexible tip in relation to the patient or other surgical instruments can be of much help to (novice) surgeons utilizing the transducer intraoperatively. This difficult subject has recently attracted attention from the scientific community. Electromagnetic tracking systems can be applied to track the flexible tip; however, the magnetic field can be distorted by ferromagnetic material. This paper presents a new method based on optical tracking of the laparoscope and magneto-optic tracking of the transducer, which is able to automatically detect field distortions. This is used for a smooth augmentation of the B-scan images of the transducer directly on the camera images in real time.
NASA Astrophysics Data System (ADS)
Xie, Yaoqin; Xing, Lei; Gu, Jia; Liu, Wu
2013-06-01
Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between the two images. The identification of feature points on a planar x-ray image is realized by searching for points with high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated by using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for four patient data sets with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow.
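The two steps of the markerless approach, selecting high-gradient feature points and associating them by descriptor distance, can be sketched as follows. The gradient-based selection and the ratio-test matcher are illustrative stand-ins; the paper uses the SIFT descriptor itself.

```python
import numpy as np

def high_gradient_points(img, n=50):
    """Pick the n pixels with the largest intensity-gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    idx = np.argsort(mag.ravel())[-n:]
    return np.column_stack(np.unravel_index(idx, img.shape))

def match_descriptors(d1, d2, ratio=0.8):
    """Nearest-neighbour matching with a ratio test.

    d1, d2: (N, k) descriptor arrays. Returns (i, j) index pairs where the
    best match is clearly better than the second best.
    """
    pairs = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        order = np.argsort(dist)
        if len(order) > 1 and dist[order[0]] < ratio * dist[order[1]]:
            pairs.append((i, order[0]))
    return pairs
```

Matched pairs from the two orthogonal projections would then be triangulated to a 3D target position.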
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, B.
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery workflow.
Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.
Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko
2012-10-29
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians in its own laser scan image using an occupancy-grid-based method, and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN) data association. The tracking data are broadcast to the other robots through intercommunication and combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, an individual robot can recognize pedestrians that are invisible to itself but visible to another robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures.
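The covariance intersection fusion used to combine tracks from different robots can be sketched with numpy. CI fuses two estimates whose cross-correlation is unknown; the grid search over the weight is an illustrative choice, and real implementations minimize the trace or determinant with a proper optimizer.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=None):
    """Fuse two estimates with unknown cross-correlation via CI.

    If w is None, pick the weight that minimises the trace of the
    fused covariance by a coarse grid search (illustrative, not optimal).
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    if w is None:
        ws = np.linspace(0.01, 0.99, 99)
        w = min(ws, key=lambda a: np.trace(np.linalg.inv(a * I1 + (1 - a) * I2)))
    P = np.linalg.inv(w * I1 + (1 - w) * I2)          # fused covariance
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)         # fused state
    return x, P, w
```

Unlike a naive Kalman fusion, CI never becomes overconfident when the two robots' tracks share common information.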
Search Radar Track-Before-Detect Using the Hough Transform.
1995-03-01
This report presents an improved target detection scheme, applicable to search radars, using the Hough transform image processing technique. The system concept involves a track-before-detect processing method which allows previous data to help in target detection. The technique provides many advantages compared to ...
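The Hough-transform accumulation at the heart of track-before-detect can be sketched directly: threshold crossings from successive scans are treated as points (e.g. scan time versus range), and points lying on a common straight line, as produced by a constant-velocity target, vote for the same (θ, ρ) cell. This is an illustrative sketch, not the report's implementation.

```python
import numpy as np

def hough_accumulate(points, n_theta=180, rho_res=1.0):
    """Accumulate (x, y) detection points into a Hough (theta, rho) array.

    In track-before-detect, x can be scan time and y range; sub-threshold
    detections from a constant-velocity target pile up in one (theta, rho) cell.
    """
    pts = np.asarray(points, float)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).sum(axis=1).max() + 1          # safe bound on |rho|
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), int)
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)   # normal form of a line
        j = np.round((rhos + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), j] += 1
    return acc, thetas, rho_max
```

A peak in the accumulator that exceeds the clutter level declares a track and a detection simultaneously.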
FIMTrack: An open source tracking and locomotion analysis software for small animals.
Risse, Benjamin; Berh, Dimitri; Otto, Nils; Klämbt, Christian; Jiang, Xiaoyi
2017-05-01
Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system capable of extracting high-contrast images. This system, in combination with the associated tracking software FIMTrack, is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the algorithms used. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimulus-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.
Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.
Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo
2011-01-01
In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, so motion of the targeted area may cause side effects in normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of changes in illuminance in the tracking area we used an infrared light and USB cameras that are sensitive to infrared light. The motion of a patient was detected by tracking his/her ears and nose with the three USB cameras, where pattern matching between a predefined template image for each view and the acquired images was done by an exhaustive search method with general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was less than 0.7 mm, less than half that of our previous system.
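The exhaustive template search can be sketched with normalized cross-correlation, which gives some robustness to illumination changes. This CPU sketch mirrors the brute-force search only; the GPGPU parallelization described above is not reproduced.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalised cross-correlation template search.

    Returns ((row, col), score) of the best match, where (row, col) is the
    top-left corner of the template position.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

Running one search per view (ears and nose templates) per frame yields the tracked positions.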
Navigation with Electromagnetic Tracking for Interventional Radiology Procedures
Wood, Bradford J.; Zhang, Hui; Durrani, Amir; Glossop, Neil; Ranjan, Sohan; Lindisch, David; Levy, Eliott; Banovac, Filip; Borgert, Joern; Krueger, Sascha; Kruecker, Jochen; Viswanathan, Anand; Cleary, Kevin
2008-01-01
PURPOSE To assess the feasibility of the use of preprocedural imaging for guide wire, catheter, and needle navigation with electromagnetic tracking in phantom and animal models. MATERIALS AND METHODS An image-guided intervention software system was developed based on open-source software components. Catheters, needles, and guide wires were constructed with small position and orientation sensors in the tips. A tetrahedral-shaped weak electromagnetic field generator was placed in proximity to an abdominal vascular phantom or three pigs on the angiography table. Preprocedural computed tomographic (CT) images of the phantom or pig were loaded into custom-developed tracking, registration, navigation, and rendering software. Devices were manipulated within the phantom or pig with guidance from the previously acquired CT scan and simultaneous real-time angiography. Navigation within positron emission tomography (PET) and magnetic resonance (MR) volumetric datasets was also performed. External and endovascular fiducials were used for registration in the phantom, and registration error and tracking error were estimated. RESULTS The CT scan position of the devices within phantoms and pigs was accurately determined during angiography and biopsy procedures, with manageable error for some applications. Preprocedural CT depicted the anatomy in the region of the devices with real-time position updating and minimal registration error and tracking error (<5 mm). PET can also be used with this system to guide percutaneous biopsies to the most metabolically active region of a tumor. CONCLUSIONS Previously acquired CT, MR, or PET data can be accurately codisplayed during procedures with reconstructed imaging based on the position and orientation of catheters, guide wires, or needles. Multimodality interventions are feasible by allowing the real-time updated display of previously acquired functional or morphologic imaging during angiography, biopsy, and ablation. PMID:15802449
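The fiducial-based registration between tracker and image space described above can be sketched as a least-squares rigid fit (the Kabsch/Umeyama construction), with the fiducial registration error reported as an RMS residual. This is a generic sketch, not the specific software system of the paper.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (Kabsch/Umeyama, no scaling).

    Returns rotation R and translation t mapping src points onto dst.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0] * (src.shape[1] - 1) + [d]) @ U.T
    t = dc - R @ sc
    return R, t

def fre(src, dst, R, t):
    """Fiducial registration error: RMS distance after applying (R, t)."""
    res = (R @ np.asarray(src, float).T).T + t - dst
    return float(np.sqrt((res ** 2).sum(1).mean()))
```

With well-distributed fiducials, the FRE gives a rough indication of the target registration error quoted in such studies.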
Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S.
2014-01-01
This paper presents a surgical master-slave tele-operation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber optic force sensor utilizing the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into the MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render the force associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, where the slave robot resides in the scanner bore and the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot are demonstrated to validate the tracking performance of the integrated system: a position tracking error of 0.318 mm and a sine-wave force tracking error of 2.227 N. PMID:25126446
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking
NASA Astrophysics Data System (ADS)
Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu
2015-03-01
Image correlation matching is a tracking method that searches for the region most similar to a target template based on a correlation measure between two images. Because there is no need to segment the image and the computational cost is low, image correlation matching is a basic method of target tracking. This paper mainly studies a gray-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (sum of absolute differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems, yet target tracking often requires high real-time performance; based on this consideration, we put forward a paraboloidal fitting algorithm that is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between the two algorithms is small, less than 0.01 pixel. To study the influence of target rotation on matching precision, a camera rotation experiment was carried out. The CMOS detector of the camera was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm described above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was studied.
Gaussian noise and salt-and-pepper noise were added to the image respectively, the image was processed by mean and median filters, and image matching was then performed. The results show that when the noise is light, mean and median filtering achieve good results; but when the density of the salt-and-pepper noise exceeds 0.4, or the variance of the Gaussian noise exceeds 0.0015, the image matching result becomes wrong.
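The SAD search with sub-pixel refinement can be sketched as follows. A separable one-dimensional parabola fit around the SAD minimum stands in for the paraboloidal fit proposed in the paper.

```python
import numpy as np

def sad_match_subpixel(image, template):
    """SAD template search followed by 1-D parabolic sub-pixel fits.

    Returns the best-match (row, col) at sub-pixel precision.
    """
    ih, iw = image.shape
    th, tw = template.shape
    sad = np.empty((ih - th + 1, iw - tw + 1))
    for r in range(sad.shape[0]):
        for c in range(sad.shape[1]):
            sad[r, c] = np.abs(image[r:r + th, c:c + tw] - template).sum()
    r0, c0 = np.unravel_index(np.argmin(sad), sad.shape)

    def parabola_offset(fm, f0, fp):
        # Vertex of the parabola through three neighbouring SAD values.
        d = fm - 2 * f0 + fp
        return 0.0 if d == 0 else 0.5 * (fm - fp) / d

    dr = dc = 0.0
    if 0 < r0 < sad.shape[0] - 1:
        dr = parabola_offset(sad[r0 - 1, c0], sad[r0, c0], sad[r0 + 1, c0])
    if 0 < c0 < sad.shape[1] - 1:
        dc = parabola_offset(sad[r0, c0 - 1], sad[r0, c0], sad[r0, c0 + 1])
    return r0 + dr, c0 + dc
```

For a symmetric target the parabola offsets vanish and the integer-pixel minimum is returned.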
Li, Mengfei; Hansen, Christian; Rose, Georg
2017-09-01
Electromagnetic tracking systems (EMTS) have achieved a high level of acceptance in clinical settings, e.g., to support tracking of medical instruments in image-guided interventions. However, tracking errors caused by movable metallic medical instruments and electronic devices are a critical problem which prevents the wider application of EMTS in clinical applications. We introduce a method to dynamically reduce tracking errors caused by metallic objects in proximity to the magnetic sensor coil of the EMTS. We propose a method using ramp waveform excitation based on modeling the conductive distorter as a resistance-inductance circuit. Additionally, a fast data acquisition method is presented to speed up the refresh rate. With the current approach, the sensor's mean positioning error is estimated to be 3.4, 1.3 and 0.7 mm, corresponding to distances between the sensor and the center of the transmitter coil array of up to 200, 150 and 100 mm, respectively. The sensor pose error caused by different medical instruments placed in proximity was reduced by the proposed method to a level lower than 0.5 mm in position and [Formula: see text] in orientation. By applying the newly developed fast data acquisition method, we achieved a system refresh rate of up to approximately 12.7 frames per second. Our software-based approach can be integrated into existing medical EMTS seamlessly with no change in hardware. It improves the tracking accuracy of clinical EMTS when a metallic object is placed near the sensor coil and has the potential to improve the safety and outcome of image-guided interventions.
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
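The run-length idea of the patent, storing only the first and last pixel of each above-threshold segment in a scan line instead of every pixel, can be sketched as:

```python
def extract_segments(row, threshold):
    """Return (first, last) pixel indices of each run above threshold.

    Mirrors the idea of storing only segment endpoints instead of
    every above-threshold pixel in a scan line.
    """
    segments, start = [], None
    for i, v in enumerate(row):
        if v > threshold and start is None:
            start = i                        # segment begins
        elif v <= threshold and start is not None:
            segments.append((start, i - 1))  # segment ends
            start = None
    if start is not None:                    # segment runs to end of line
        segments.append((start, len(row) - 1))
    return segments
```

Tracking then reduces to comparing these endpoint pairs between successive frames, which is far cheaper than comparing full pixel masks.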
Real-time automatic fiducial marker tracking in low contrast cine-MV images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang
2013-01-15
Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images, so a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match an object shape that changes significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computational load because they all require an exhaustive search in the region of interest. The authors solve this problem by synergistic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel).
The standard deviations of the results from the six researchers are 2.3 and 2.6 pixels. The proposed framework takes about 128 ms to detect four markers in the first MV image and about 23 ms to track these markers in each of the subsequent images. Conclusions: The unified framework for tracking of multiple markers presented here can achieve marker detection accuracy similar to manual detection even in low-contrast cine-MV images. It can cope with shape deformations of fiducial markers at different gantry angles. The fast processing speed reduces the image processing portion of the system latency and can therefore improve the performance of real-time motion compensation.
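The mean-shift sequential tracking step described above can be illustrated with a minimal sketch (not the authors' implementation; the synthetic weight map below stands in for a feature-space back-projection of the marker appearance):

```python
def mean_shift(weights, start, radius=3, iters=20):
    """Move a window to the local centroid of 'weights' until convergence."""
    cy, cx = start
    h, w = len(weights), len(weights[0])
    for _ in range(iters):
        num_y = num_x = den = 0.0
        for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
                wgt = weights[y][x]
                num_y += y * wgt
                num_x += x * wgt
                den += wgt
        if den == 0.0:
            break                         # nothing to climb
        ny, nx = round(num_y / den), round(num_x / den)
        if (ny, nx) == (cy, cx):
            break                         # converged
        cy, cx = ny, nx
    return cy, cx

# Synthetic 20x20 weight map with a blob centred at (10, 12); in practice
# the weights would come from the feature-space model of the marker.
grid = [[max(0.0, 5.0 - ((y - 10) ** 2 + (x - 12) ** 2) ** 0.5)
         for x in range(20)] for y in range(20)]
print(mean_shift(grid, start=(7, 9)))     # climbs to the blob centre
```

Because each step only examines a small window around the previous estimate, the cost per frame is tiny compared with an exhaustive search over the region of interest.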
Real-time landmark-based unrestrained animal tracking system for motion-corrected PET/SPECT imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.S. Goddard; S.S. Gleason; M.J. Paulus
2003-08-01
Oak Ridge National Laboratory (ORNL) and Jefferson Lab are collaborating to develop a new high-resolution single photon emission computed tomography (SPECT) instrument to image unrestrained laboratory animals. This technology development will allow functional imaging studies to be performed on animals without the use of anesthetic agents. It could also have eventual clinical applications for performing functional imaging studies on patients who cannot remain still (Parkinson's patients, Alzheimer's patients, small children, etc.) during a PET or SPECT scan. A key component of this new device is the position tracking apparatus. The tracking apparatus is an integral part of the gantry and is designed to measure the spatial position of the animal at a rate of 10-15 frames per second with sub-millimeter accuracy. Initial work focuses on brain studies, where anesthetic agents or physical restraint can significantly impact physiologic processes.
Patel, Mohak; Leggett, Susan E; Landauer, Alexander K; Wong, Ian Y; Franck, Christian
2018-04-03
Spatiotemporal tracking of tracer particles or objects of interest can reveal localized behaviors in biological and physical systems. However, existing tracking algorithms are most effective for relatively low numbers of particles that undergo displacements smaller than their typical interparticle separation distance. Here, we demonstrate a single particle tracking algorithm to reconstruct large complex motion fields with large particle numbers, orders of magnitude larger than previously tractably resolvable, thus opening the door for attaining very high Nyquist spatial frequency motion recovery in the images. Our key innovations are feature vectors that encode nearest neighbor positions, a rigorous outlier removal scheme, and an iterative deformation warping scheme. We test this technique for its accuracy and computational efficacy using synthetically and experimentally generated 3D particle images, including non-affine deformation fields in soft materials, complex fluid flows, and cell-generated deformations. We augment this algorithm with additional particle information (e.g., color, size, or shape) to further enhance tracking accuracy for high gradient and large displacement fields. These applications demonstrate that this versatile technique can rapidly track unprecedented numbers of particles to resolve large and complex motion fields in 2D and 3D images, particularly when spatial correlations exist.
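The feature-vector idea, encoding each particle's nearest-neighbour offsets so that matches survive displacements larger than the interparticle spacing, can be sketched as follows (an illustrative toy, not the published algorithm):

```python
def neighbour_feature(p, points, k=2):
    """Sorted offsets to the k nearest neighbours of p."""
    others = sorted((q for q in points if q != p),
                    key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
    return sorted((q[0] - p[0], q[1] - p[1]) for q in others[:k])

def match(frame_a, frame_b, k=2):
    """Pair each particle in frame_a with the frame_b particle whose
    neighbourhood pattern is most similar; local patterns survive
    displacements much larger than the interparticle spacing."""
    pairs = {}
    for p in frame_a:
        fa = neighbour_feature(p, frame_a, k)
        pairs[p] = min(frame_b, key=lambda q: sum(
            (u[0] - w[0]) ** 2 + (u[1] - w[1]) ** 2
            for u, w in zip(fa, neighbour_feature(q, frame_b, k))))
    return pairs

# A rigid cluster translated by (+5, +3): each particle moves farther than
# its neighbour spacing, yet neighbourhood features find the right partners.
a = [(0, 0), (1, 0), (0, 2)]
b = [(p[0] + 5, p[1] + 3) for p in a]
print(match(a, b))
```

Position alone would mismatch every particle here; the relative-offset features disambiguate because local neighbourhoods deform far less than absolute positions move.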
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as those required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8-14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets.
The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of applications of homeland security, as well as serve the Army in tasks of improved situational awareness (SA) in defense and offensive operations, and as a sensor node in tactical Intelligence Surveillance Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640×480-pixel IR camera with improved image quality for longer range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
360-Degree Visual Detection and Target Tracking on an Autonomous Surface Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael T; Assad, Christopher; Kuwata, Yoshiaki; Howard, Andrew; Aghazarian, Hrand; Zhu, David; Lu, Thomas; Trebi-Ollennu, Ashitey; Huntsberger, Terry
2010-01-01
This paper describes perception and planning systems of an autonomous sea surface vehicle (ASV) whose goal is to detect and track other vessels at medium to long ranges and execute responses to determine whether the vessel is adversarial. The Jet Propulsion Laboratory (JPL) has developed a tightly integrated system called CARACaS (Control Architecture for Robotic Agent Command and Sensing) that blends the sensing, planning, and behavior autonomy necessary for such missions. Two patrol scenarios are addressed here: one in which the ASV patrols a large harbor region and checks for vessels near a fixed asset on each pass and one in which the ASV circles a fixed asset and intercepts approaching vessels. This paper focuses on the ASV's central perception and situation awareness system, dubbed Surface Autonomous Visual Analysis and Tracking (SAVAnT), which receives images from an omnidirectional camera head, identifies objects of interest in these images, and probabilistically tracks the objects' presence over time, even as they may exist outside of the vehicle's sensor range. The integrated CARACaS/SAVAnT system has been implemented on U.S. Navy experimental ASVs and tested in on-water field demonstrations.
Stereo Electro-optical Tracking System (SETS)
NASA Astrophysics Data System (ADS)
Koenig, E. W.
1984-09-01
The SETS is a remote, non-contacting, high-accuracy tracking system for the measurement of deflection of models in the National Transonic Facility at Langley Research Center. The system consists of four electronically scanned image dissector trackers which locate the position of Light Emitting Diodes embedded in the wing or body of aircraft models. Target location data is recorded on magnetic tape for later 3-D processing. Up to 63 targets per model may be tracked at typical rates of 1280 targets per second and to a precision of 0.02 mm at the target under the cold (-193 °C) environment of the NTF tunnel.
NASA Astrophysics Data System (ADS)
Gaudin, Damien; Moroni, Monica; Taddeucci, Jacopo; Scarlato, Piergiorgio; Shindler, Luca
2014-07-01
Image-based techniques enable high-resolution observation of the pyroclasts ejected during Strombolian explosions and allow inferences to be drawn about the dynamics of volcanic activity. However, data extraction from high-resolution videos is time consuming and operator dependent, while automatic analysis is often challenging due to the highly variable quality of images collected in the field. Here we present a new set of algorithms to automatically analyze image sequences of explosive eruptions: the pyroclast tracking velocimetry (PyTV) toolbox. First, a significant preprocessing step is used to remove the image background and to detect the pyroclasts. Then, pyroclast tracking is achieved with a new particle tracking velocimetry algorithm, featuring an original predictor of velocity based on the optical flow equation. Finally, postprocessing corrects the systematic errors of measurements. Four high-speed videos of Strombolian explosions from Yasur and Stromboli volcanoes, representing various observation conditions, have been used to test the efficiency of the PyTV against manual analysis. In all cases, >10⁶ pyroclasts have been successfully detected and tracked by PyTV, with a precision of 1 m/s for the velocity and 20% for the size of the pyroclasts. On each video, more than 1000 tracks are several meters long, enabling us to study pyroclast properties and trajectories. Compared to manual tracking, 3 to 100 times more pyroclasts are analyzed. PyTV, by providing time-constrained information, links physical properties and motion of individual pyroclasts. It is a powerful tool for the study of explosive volcanic activity, as well as an ideal complement for other geological and geophysical volcano observation systems.
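The optical-flow-based velocity predictor can be illustrated with a minimal least-squares solve of the optical flow equation Ix·u + Iy·v + It = 0 over a patch (a sketch only; PyTV's actual predictor is more elaborate):

```python
def flow_estimate(im0, im1):
    """Single least-squares (u, v) over the patch from finite differences."""
    h, w = len(im0), len(im0[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (im0[y][x + 1] - im0[y][x - 1]) / 2.0  # horizontal gradient
            iy = (im0[y + 1][x] - im0[y - 1][x]) / 2.0  # vertical gradient
            it = im1[y][x] - im0[y][x]                  # temporal difference
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        return 0.0, 0.0   # untextured patch: no unique flow
    u = (-sxt * syy + syt * sxy) / det
    v = (-syt * sxx + sxt * sxy) / det
    return u, v

# A textured pattern shifted one pixel to the right between frames.
im0 = [[x * y for x in range(6)] for y in range(6)]
im1 = [[(x - 1) * y for x in range(6)] for y in range(6)]
print(flow_estimate(im0, im1))   # recovers (1.0, 0.0)
```

A predictor like this gives each pyroclast a search seed for the next frame, which is what makes tracking fast pyroclasts tractable on noisy field imagery.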
Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.
Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong
2016-08-01
The quantitative measurement of the Atrioventricular Junction (AVJ) motion is an important index of ventricular function over one cardiac cycle, including systole and diastole. In this paper, a software tool that can conduct AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented by using the Insight Segmentation and Registration Toolkit (ITK), The Visualization Toolkit (VTK) and Qt. The software tool is written in C++ using the Visual Studio Community 2013 integrated development environment (IDE), containing both an editor and a Microsoft compiler. The software package has been successfully implemented. From the software engineering practice, it is concluded that ITK, VTK, and Qt are very handy software systems to implement automatic image analysis functions for CMR images such as the quantitative measurement of motion by visual tracking.
MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery work-flow.
Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J
2013-03-01
The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
24/7 security system: 60-FPS color EMCCD camera with integral human recognition
NASA Astrophysics Data System (ADS)
Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.
2007-04-01
An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J
2014-09-26
This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.
Stereoscopic Feature Tracking System for Retrieving Velocity of Surface Waters
NASA Astrophysics Data System (ADS)
Zuniga Zamalloa, C. C.; Landry, B. J.
2017-12-01
The present work is concerned with the surface velocity retrieval of flows using a stereoscopic setup and finding the correspondence in the images via feature tracking (FT). The feature tracking provides a key benefit of substantially reducing the level of user input. In contrast to other commonly used methods (e.g., normalized cross-correlation), FT does not require the user to prescribe interrogation window sizes and removes the need for masking when specularities are present. The results of the current FT methodology are comparable to those obtained via Large Scale Particle Image Velocimetry while requiring little to no user input which allowed for rapid, automated processing of imagery.
NASA Astrophysics Data System (ADS)
Mefleh, Fuad N.; Baker, G. Hamilton; Kwartowitz, David M.
2014-03-01
In our previous work we presented a novel image-guided surgery (IGS) system, Kit for Navigation by Image Focused Exploration (KNIFE) [1,2]. KNIFE has been demonstrated to be effective in guiding mock clinical procedures with the tip of an electromagnetically tracked catheter overlaid onto a pre-captured bi-plane fluoroscopic loop. Representation of the catheter in KNIFE differs greatly from what is captured by the fluoroscope, due to distortions and other properties of fluoroscopic images. When imaged by a fluoroscope, catheters can be visualized due to the inclusion of radiopaque materials (i.e. Bi, Ba, W) in the polymer blend [3]. However, in KNIFE catheter location is determined using a single tracking seed located in the catheter tip that is represented as a single point overlaid on pre-captured fluoroscopic images. To bridge the gap in catheter representation between KNIFE and traditional methods we constructed a catheter with five tracking seeds positioned along the distal 70 mm of the catheter. We have investigated the use of four spline interpolation methods for estimating true catheter shape and have assessed the error in their estimates. In this work we present a method for the evaluation of interpolation algorithms with respect to catheter shape determination.
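As an illustration of spline interpolation through tracked seed positions, here is a Catmull-Rom segment, one plausible choice among interpolation methods of this kind (the abstract does not name its four methods, and the seed coordinates below are hypothetical):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Point at parameter t in [0, 1] on the segment between p1 and p2."""
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t * t
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3))

# Five hypothetical seed positions (mm) along the distal catheter.
seeds = [(0.0, 0.0), (15.0, 3.0), (30.0, 8.0), (45.0, 10.0), (60.0, 9.0)]
mid = catmull_rom(*seeds[0:4], 0.5)  # interpolated point between seeds 2 and 3
```

A Catmull-Rom curve passes exactly through its interior control points, which matters here: the tracked seed positions are ground truth, and the spline only fills in the shape between them.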
Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera
NASA Astrophysics Data System (ADS)
Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.
2004-01-01
We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.
Stabilized display of coronary x-ray image sequences
NASA Astrophysics Data System (ADS)
Close, Robert A.; Whiting, James S.; Da, Xiaolin; Eigler, Neal L.
2004-05-01
Display stabilization is a technique by which a feature of interest in a cine image sequence is tracked and then shifted to remain approximately stationary on the display device. Prior simulations indicate that display stabilization with high playback rates (30 f/s) can significantly improve detectability of low-contrast features in coronary angiograms. Display stabilization may also help to improve the accuracy of intra-coronary device placement. We validated our automated tracking algorithm by comparing the inter-frame difference (jitter) between manual and automated tracking of 150 coronary x-ray image sequences acquired on a digital cardiovascular X-ray imaging system with CsI/a-Si flat panel detector. We find that the median (50%) inter-frame jitter between manual and automatic tracking is 1.41 pixels or less, indicating a jump no further than an adjacent pixel. This small jitter implies that automated tracking and manual tracking should yield similar improvements in the performance of most visual tasks. We hypothesize that cardiologists would perceive a benefit in viewing the stabilized display as an addition to the standard playback of cine recordings. A benefit of display stabilization was identified in 87 of 101 sequences (86%). The most common tasks cited were evaluation of stenosis and determination of stent and balloon positions. We conclude that display stabilization offers perceptible improvements in the performance of visual tasks by cardiologists.
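The stabilization step itself, shifting each frame so the tracked feature stays fixed on the display, reduces to a per-frame translation (minimal sketch; frames as 2D lists, with tracked positions supplied by the tracking algorithm):

```python
def stabilize(frame, tracked, anchor, fill=0):
    """Translate 'frame' so the tracked (y, x) feature lands on 'anchor'."""
    dy, dx = anchor[0] - tracked[0], anchor[1] - tracked[1]
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx          # source pixel before the shift
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

# A feature at (2, 2) is moved to the fixed display position (1, 1).
frame = [[0] * 3 for _ in range(3)]
frame[2][2] = 9
print(stabilize(frame, tracked=(2, 2), anchor=(1, 1)))
```

Applied per frame with the tracker's output, the feature of interest stays at the anchor while the rest of the scene moves around it, which is the display behaviour the study evaluates.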
NASA Astrophysics Data System (ADS)
Morishima, Shigeo; Nakamura, Satoshi
2004-12-01
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
Ultrasound Imaging in Radiation Therapy: From Interfractional to Intrafractional Guidance
Western, Craig; Hristov, Dimitre
2015-01-01
External beam radiation therapy (EBRT) is included in the treatment regimen of the majority of cancer patients. With the proliferation of hypofractionated radiotherapy treatment regimens, such as stereotactic body radiation therapy (SBRT), interfractional and intrafractional imaging technologies are becoming increasingly critical to ensure safe and effective treatment delivery. Ultrasound (US)-based image guidance systems offer real-time, markerless, volumetric imaging with excellent soft tissue contrast, overcoming the limitations of traditional X-ray or computed tomography (CT)-based guidance for abdominal and pelvic cancer sites, such as the liver and prostate. Interfractional US guidance systems have been commercially adopted for patient positioning but suffer from systematic positioning errors induced by probe pressure. More recently, several research groups have introduced concepts for intrafractional US guidance systems leveraging robotic probe placement technology and real-time soft tissue tracking software. This paper reviews various commercial and research-level US guidance systems used in radiation therapy, with an emphasis on hardware and software technologies that enable the deployment of US imaging within the radiotherapy environment and workflow. Previously unpublished material on tissue tracking systems and robotic probe manipulators under development by our group is also included. PMID:26180704
Servat, Juan J; Elia, Maxwell Dominic; Gong, Dan; Manes, R Peter; Black, Evan H; Levin, Flora
2014-12-01
To assess the feasibility of routine use of electromagnetic image guidance systems in orbital decompression. Six consecutive patients underwent stereotactic-guided three-wall orbital decompression using the novel Fusion ENT Navigation System (Medtronic), a portable and expandable electromagnetic guidance system with multi-instrument tracking capabilities. The system consists of the Medtronic LandmarX System software-enabled computer station, signal generator, field-generating magnet, head-mounted marker coil, and surgical tracking instruments. In preparation for use of the LandmarX/Fusion protocol, all patients underwent preoperative non-contrast CT scan from the superior aspect of the frontal sinuses to the inferior aspect of the maxillary sinuses that includes the nasal tip. The Fusion ENT Navigation System (Medtronic™) was used in 6 patients undergoing maximal 3-wall orbital decompression for Graves' orbitopathy after a minimum of six months of disease inactivity. Preoperative Hertel exophthalmometry measured more than 27 mm in all patients. The navigation system proved to be no more difficult technically than the traditional orbital decompression approach. Electromagnetic image guidance is a stereotactic surgical navigation system that provides additional intraoperative flexibility in orbital surgery. Electromagnetic image-guidance offers the ability to perform more aggressive orbital decompressions with reduced risk.
A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging
NASA Astrophysics Data System (ADS)
Jiang, J.; Hall, T. J.
2007-07-01
Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Then, displacement estimates from its adjacent columns provide good guidance for motion tracking in a significantly reduced search region to reduce computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separated tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows® system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames s⁻¹) that exceed our previous methods.
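The guided-search idea, seeding each column's displacement search with the neighbouring column's estimate to shrink the search region, can be sketched in one dimension (toy signals, not the published 2D speckle tracker):

```python
def best_shift(ref, tgt, center, half_range):
    """Shift minimizing the sum of squared differences near 'center'."""
    best, best_err = center, float("inf")
    for s in range(center - half_range, center + half_range + 1):
        err = sum((ref[i] - tgt[i + s]) ** 2
                  for i in range(len(ref)) if 0 <= i + s < len(tgt))
        if err < best_err:
            best, best_err = s, err
    return best

def track_columns(pre, post, half_range=1):
    """Per-column axial shift, each search seeded by the previous column."""
    shifts, guess = [], 0
    for ref, tgt in zip(pre, post):
        guess = best_shift(ref, tgt, guess, half_range)
        shifts.append(guess)
    return shifts

base = [0, 0, 9, 5, 7, 0, 0, 0, 0, 0]            # toy 1D speckle column
pre = [base] * 4
post = [[base[i - s] if 0 <= i - s < 10 else 0 for i in range(10)]
        for s in (0, 1, 2, 2)]                    # columns shifted 0, 1, 2, 2
print(track_columns(pre, post))                   # recovers [0, 1, 2, 2]
```

Because displacement varies smoothly across adjacent A-lines, a search window of ±1 sample around the neighbour's estimate suffices here, whereas an unseeded search would need to cover the full displacement range for every column.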
Real-time self-calibration of a tracked augmented reality display
NASA Astrophysics Data System (ADS)
Baum, Zachary; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor
2016-03-01
PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system. METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit. RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections. CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.
Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald
2016-08-01
Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is the limited field of view, which results in an acquisition insufficient to cover the whole geometry of the heart. This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to infer that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field of view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvement in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views.
Thermal bioaerosol cloud tracking with Bayesian classification
NASA Astrophysics Data System (ADS)
Smith, Christian W.; Dupuis, Julia R.; Schundler, Elizabeth C.; Marinelli, William J.
2017-05-01
The development of a wide-area bioaerosol early warning capability employing existing uncooled thermal imaging systems used for persistent perimeter surveillance is discussed. The capability exploits thermal imagers with other available data streams including meteorological data and employs a recursive Bayesian classifier to detect, track, and classify observed thermal objects with attributes consistent with a bioaerosol plume. Target detection is achieved based on similarity to a phenomenological model which predicts the scene-dependent thermal signature of bioaerosol plumes. Change detection in thermal sensor data is combined with local meteorological data to locate targets with the appropriate thermal characteristics. Target motion is tracked utilizing a Kalman filter and nearly constant velocity motion model for cloud state estimation. Track management is performed using a logic-based upkeep system, and data association is accomplished using a combinatorial optimization technique. Bioaerosol threat classification is determined using a recursive Bayesian classifier to quantify the threat probability of each tracked object. The classifier can accept additional inputs from visible imagers, acoustic sensors, and point biological sensors to improve classification confidence. This capability was successfully demonstrated for bioaerosol simulant releases during field testing at Dugway Proving Ground. Standoff detection at a range of 700 m was achieved for as little as 500 g of anthrax simulant. Developmental test results will be reviewed for a range of simulant releases, and future development and transition plans for the bioaerosol early warning platform will be discussed.
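The nearly-constant-velocity Kalman tracking step used for cloud state estimation can be illustrated with a minimal 1-D filter (state = position and velocity, position-only measurements). The process and measurement noise values are arbitrary placeholders, not those of the described system.

```python
class CVKalman1D:
    """Constant-velocity Kalman filter, 1-D for brevity.
    State x = [position, velocity]; measurement z = position."""

    def __init__(self, q=0.01, r=0.5):
        self.x = [0.0, 0.0]
        self.P = [[1.0, 0.0], [0.0, 1.0]]
        self.q, self.r = q, r  # process / measurement noise (placeholders)

    def step(self, z, dt=1.0):
        # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        px = self.x[0] + dt * self.x[1]
        pv = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update with position measurement z (H = [1, 0])
        k0 = p00 / (p00 + self.r)
        k1 = p10 / (p00 + self.r)
        resid = z - px
        self.x = [px + k0 * resid, pv + k1 * resid]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

Fed a ramp of positions, the filter's velocity estimate converges to the true slope, which is what lets a tracker coast through brief detection dropouts.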
Software components for medical image visualization and surgical planning
NASA Astrophysics Data System (ADS)
Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.
2001-05-01
Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy-to-understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above-mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool.
The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
Gabara, Grzegorz; Sawicki, Piotr
2018-01-01
The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011. PMID:29509679
Intra-coil interactions in split gradient coils in a hybrid MRI-LINAC system
NASA Astrophysics Data System (ADS)
Tang, Fangfang; Freschi, Fabio; Sanchez Lopez, Hector; Repetto, Maurizio; Liu, Feng; Crozier, Stuart
2016-04-01
An MRI-LINAC system combines a magnetic resonance imaging (MRI) system with a medical linear accelerator (LINAC) to provide image-guided radiotherapy for targeting tumors in real-time. In an MRI-LINAC system, a set of split gradient coils is employed to produce orthogonal gradient fields for spatial signal encoding. Owing to this unconventional gradient configuration, eddy currents induced by switching gradient coils on and off may be of particular concern. It is expected that strong intra-coil interactions in the set will be present due to the constrained return paths, leading to potential degradation of the gradient field linearity and image distortion. In this study, a series of gradient coils with different track widths have been designed and analyzed to investigate the electromagnetic interactions between coils in a split gradient set. A driving current, with frequencies from 100 Hz to 10 kHz, was applied to study the inductive coupling effects with respect to conductor geometry and operating frequency. It was found that the eddy currents induced in the un-energized coils (hereafter referred to as passive coils) positively correlated with track width and frequency. The magnetic field induced by the eddy currents in the passive coils with wide tracks was several times larger than that induced by eddy currents in the cold shield of the cryostat. The power loss in the passive coils increased with the track width. Therefore, intra-coil interactions should be included in the coil design and analysis process.
Markerless EPID image guided dynamic multi-leaf collimator tracking for lung tumors
NASA Astrophysics Data System (ADS)
Rottmann, J.; Keall, P.; Berbeco, R.
2013-06-01
Compensation of target motion during the delivery of radiotherapy has the potential to improve treatment accuracy, dose conformity and sparing of healthy tissue. We implement an online image guided therapy system based on soft tissue localization (STiL) of the target from electronic portal images and treatment aperture adaptation with a dynamic multi-leaf collimator (DMLC). The treatment aperture is moved synchronously and in real time with the tumor during the entire breathing cycle. The system is implemented and tested on a Varian TX clinical linear accelerator featuring an AS-1000 electronic portal imaging device (EPID) acquiring images at a frame rate of 12.86 Hz throughout the treatment. A position update cycle for the treatment aperture consists of four steps: in the first step at time t = t0 a frame is grabbed, in the second step the frame is processed with the STiL algorithm to get the tumor position at t = t0, in the third step the tumor position at t = t0 + δt is predicted to overcome system latencies and in the fourth step, the DMLC control software calculates the required leaf motions and applies them at time t = t0 + δt. The prediction model is trained before the start of the treatment with data representing the tumor motion. We analyze the system latency with a dynamic chest phantom (4D motion phantom, Washington University). We estimate the average planar position deviation between target and treatment aperture in a clinical setting by driving the phantom with several lung tumor trajectories (recorded from fiducial tracking during radiotherapy delivery to the lung). DMLC tracking for lung stereotactic body radiation therapy without fiducial markers was successfully demonstrated. The inherent system latency is found to be δt = (230 ± 11) ms for a MV portal image acquisition frame rate of 12.86 Hz. The root mean square deviation between tumor and aperture position is smaller than 1 mm.
We demonstrate the feasibility of real-time markerless DMLC tracking with a standard LINAC-mounted electronic portal imaging device (EPID).
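The latency-compensation step of predicting the tumor position δt ahead can be sketched, under the simplifying assumption of a locally linear trajectory, as a least-squares extrapolation over the most recent position samples. The window length k is an illustrative choice, not the paper's trained prediction model.

```python
def predict_ahead(times, positions, dt_ahead, k=5):
    """Fit a line to the last k (time, position) samples by least squares
    and extrapolate the position dt_ahead beyond the latest sample."""
    t, p = times[-k:], positions[-k:]
    n = len(t)
    mt, mp = sum(t) / n, sum(p) / n
    slope = (sum((ti - mt) * (pi - mp) for ti, pi in zip(t, p))
             / sum((ti - mt) ** 2 for ti in t))
    return mp + slope * (t[-1] + dt_ahead - mt)
```

With the paper's measured latency of roughly 0.23 s, the aperture would be steered to the position predicted that far ahead rather than the last observed one.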
Extracting 3D Semantic Information from Video Surveillance System Using Deep Learning
NASA Astrophysics Data System (ADS)
Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.
2018-04-01
At present, intelligent video analysis technology is widely used in many fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still has some unavoidable problems. Pixel-based target tracking cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's spatial coordinate system, converting the 2-D pixel coordinates of the target into 3-D coordinates. The experimental results show that our method can recover the real position changes of targets well, and can also accurately obtain the trajectory of the target in space.
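The 2-D-to-3-D conversion relies on camera calibration; for a target constrained to the ground plane, a common simplification (not necessarily the authors' exact formulation) is to invert a calibrated ground-to-image homography. The matrix below is a made-up example, not a real calibration.

```python
def invert3(m):
    """Invert a 3x3 matrix (row-major nested lists) via the adjugate."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]

def pixel_to_ground(H, u, v):
    """Map pixel (u, v) to ground-plane coordinates using the inverse of a
    ground-to-image homography H, in homogeneous coordinates."""
    Hi = invert3(H)
    x, y, w = (Hi[r][0] * u + Hi[r][1] * v + Hi[r][2] for r in range(3))
    return x / w, y / w
```

Tracking in ground-plane coordinates makes target positions comparable across cameras, which is what enables cross-scene tracking.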
Automated Track Recognition and Event Reconstruction in Nuclear Emulsion
NASA Technical Reports Server (NTRS)
Deines-Jones, P.; Cherry, M. L.; Dabrowska, A.; Holynski, R.; Jones, W. V.; Kolganova, E. D.; Kudzia, D.; Nilsen, B. S.; Olszewski, A.; Pozharova, E. A.;
1998-01-01
The major advantages of nuclear emulsion for detecting charged particles are its submicron position resolution and sensitivity to minimum ionizing particles. These must be balanced, however, against the difficult manual microscope measurement by skilled observers required for the analysis. We have developed an automated system to acquire and analyze the microscope images from emulsion chambers. Each emulsion plate is analyzed independently, allowing coincidence techniques to be used in order to reject background and estimate error rates. The system has been used to analyze a sample of high-multiplicity Pb-Pb interactions (charged particle multiplicities approx. 1100) produced by the 158 GeV/c per nucleon Pb-208 beam at CERN. Automatically reconstructed track lists agree with our best manual measurements to 3%. We describe the image analysis and track reconstruction techniques, and discuss the measurement and reconstruction uncertainties.
Gamma-ray tracking method for PET systems
Mihailescu, Lucian; Vetter, Kai M.
2010-06-08
Gamma-ray tracking methods for use with granular, position sensitive detectors identify the sequence of the interactions taking place in the detector and, hence, the position of the first interaction. The improved position resolution in finding the first interaction in the detection system determines a better definition of the direction of the gamma-ray photon, and hence, a superior source image resolution. A PET system using such a method will have increased efficiency and position resolution.
A complete system for 3D reconstruction of roots for phenotypic analysis.
Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J
2015-01-01
Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated as it is self-calibrating. The system starts with detection of root tips in root images from an image sequence generated by a turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points on the root boundary and the Bayes classification rule. The detected root tips are tracked in the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm which weights the data points by their eccentricity. The conics projected from the circular trajectory have a complex conjugate intersection, which is the image of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are used to reconstruct the 3D voxel model of the roots. We show results of real 3D reconstructions of roots which are detailed and realistic enough for phenotypic analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kodaira, S., E-mail: koda@nirs.go.jp; Kurano, M.; Hosogane, T.
A CR-39 plastic nuclear track detector was used for quality assurance of mixed oxide fuel pellets for next-generation nuclear power plants. Plutonium (Pu) spot sizes and concentrations in the pellets are significant parameters for safe use in the plants. We developed an automatic Pu detection system based on dense α-radiation tracks in the CR-39 detectors. This system would greatly reduce image processing time and improve measurement accuracy, and will be a powerful tool for rapid pellet quality assurance screening.
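Automated counting of dense α-radiation track spots is commonly approximated by thresholding followed by connected-component labelling. The sketch below assumes a grayscale image as nested lists and a fixed threshold, both illustrative simplifications rather than the authors' pipeline.

```python
def count_tracks(img, thresh):
    """Count 4-connected blobs of pixels >= thresh (candidate track spots)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] >= thresh and not seen[r][c]:
                count += 1                      # new blob found
                stack = [(r, c)]
                seen[r][c] = True
                while stack:                    # flood-fill the whole blob
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] >= thresh and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

A real system would additionally filter blobs by size and shape to separate α tracks from etching artifacts.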
NASA Astrophysics Data System (ADS)
Zhang, Wenzeng; Chen, Nian; Wang, Bin; Cao, Yipeng
2005-01-01
The rocket engine is a core component of aerospace transportation and thrust systems, whose research and development is very important in national defense, aviation and aerospace. A novel vision sensor is developed, which can be used for error detection in arc length control and seam tracking in precise pulsed TIG welding of the extending part of the rocket engine jet tube. The vision sensor has many advantages, such as high-quality imaging, compactness and multiple functions. The optical, mechanical and circuit designs of the vision sensor are described in detail. Utilizing the mirror image of the tungsten electrode in the weld pool, a novel method is proposed to detect the arc length and the seam tracking error of the tungsten electrode relative to the center line of the joint seam from a single weld image. A calculation model for the method is proposed according to the relation between the tungsten electrode, the weld pool, the mirror image of the tungsten electrode in the weld pool and the joint seam. New methodologies are given to detect the arc length and seam tracking error. Through analysis of the experimental results, a system error correction method based on a linear function is developed to improve the detection precision of the arc length and seam tracking error. Experimental results show that the final precision of the system reaches 0.1 mm in detecting the arc length and the seam tracking error of the tungsten electrode relative to the center line of the joint seam.
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
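Two ingredients of such an error-propagation analysis can be sketched directly: combining independent error components in quadrature (root-sum-square), and converting a rotational tracking error into a translational error over a lever arm (small-angle approximation). The function names and numbers are illustrative, not taken from the paper.

```python
import math

def combine_rms(*components):
    """Combine independent error components in quadrature (RSS)."""
    return math.sqrt(sum(e * e for e in components))

def rotation_lever_error(angle_deg, lever_mm):
    """Translational error (mm) produced by a rotational tracking error
    acting over a lever arm, using the small-angle approximation."""
    return math.radians(angle_deg) * lever_mm

# e.g. total error of a chain: calibration, registration, tracking, and a
# 0.1 degree rotational error over a 200 mm tracker-to-probe lever arm
total = combine_rms(0.5, 0.8, 0.3, rotation_lever_error(0.1, 200.0))
```

The lever-arm term is why the abstract notes that rotational tracking errors can multiply the overall error depending on the relative position of tracker and probe.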
Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne
2007-01-01
The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D to slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max).
Precision laser automatic tracking system.
Lucy, R F; Peters, C J; McGann, E J; Lang, K T
1966-04-01
A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/s², the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 × 10⁻¹⁰ W/m²; the ultimate nighttime sensitivity is approximately 3 × 10⁻¹⁴ W/m². Experimental tests were performed to evaluate both the dynamic characteristics of this system and the system sensitivity. Dynamic performance of the system was obtained using a small rocket covered with retroreflective material launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft. This aircraft was tracked out to a maximum range of 15 km, which confirmed the daytime sensitivity of the system measured by other means. The system has also been used to passively track stars and the Echo I satellite; it passively tracked a +7.5 magnitude star, and the signal-to-noise ratio in this experiment indicates that it should be possible to track a +12.5 magnitude star.
Human image tracking technique applied to remote collaborative environments
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Suzuki, Gen
1993-10-01
To support various kinds of collaborations over long distances by using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the necessity for automatic human tracking increases. Using the movement area of the human being or the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area follows the movement of the human head fairly well.
Real-Time Imaging System for the OpenPET
NASA Astrophysics Data System (ADS)
Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga
2012-02-01
The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real-time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of the list-mode data. In this study, we developed a system to control the data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated the performance in terms of the real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.
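The count-based intensity compensation, scaling the reconstructed image by the ratio of total counts to the (possibly limited) counts actually used in reconstruction, can be sketched as follows; the image representation as nested lists is an illustrative simplification.

```python
def compensate_intensity(image, used_counts, total_counts):
    """Scale a reconstructed image by total/used counts so that displayed
    intensity tracks the true count rate when events are dropped to keep
    reconstruction real-time."""
    scale = total_counts / used_counts
    return [[v * scale for v in row] for row in image]
```

If only half of the acquired events are reconstructed, every pixel is doubled, restoring the quantitative correspondence with the original count rate.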
A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1996-01-01
NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field in the hope of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere and other vibrations. The tracker consists of two elements, an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. For the ground-based application, reducing image motion due to atmospheric fluctuations requires an error determination rate of at least 100 Hz. It would be desirable to have an error determination rate of 1 kHz to assure that higher rate image motion is reduced and to increase the control system stability. Two algorithms that are typically used for tracking are presented. These algorithms are examined for their applicability to tracking sunspots, specifically their accuracy if only one column and one row of CCD pixels are used. To examine the accuracy of this method, two techniques are used. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second technique involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
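One classical way to estimate image motion from a single row (or column) of CCD pixels is the intensity-weighted centroid: the shift between frames is the displacement of the centroid. This is an illustrative estimator, not necessarily either of the two algorithms the paper evaluates; for a dark feature such as a sunspot the profile would first be inverted.

```python
def centroid(profile):
    """Intensity-weighted centroid (in pixels) of a 1-D pixel profile."""
    total = sum(profile)
    return sum(i * v for i, v in enumerate(profile)) / total

def shift_1d(prev_profile, curr_profile):
    """Image-motion estimate along one axis as the centroid displacement
    between two frames' profiles."""
    return centroid(curr_profile) - centroid(prev_profile)
```

Using only one row and one column keeps the per-frame cost low enough to support the kHz-rate error signals the abstract calls for.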
Remote gaze tracking system for 3D environments.
Congcong Liu; Herrup, Karl; Shi, Bertram E
2017-07-01
Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two-dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three-dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.
LEA Detection and Tracking Method for Color-Independent Visual-MIMO
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-01-01
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
Gao, Han; Li, Jingwen
2014-06-19
A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With the sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in the detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable to detect and track a moving target with SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640
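A toy 1-D version of particle-filter track-before-detect can illustrate the core idea: particles carry (position, velocity) hypotheses and are weighted by the raw image intensity at their predicted position, so a weak target accumulates evidence across frames instead of being thresholded away frame by frame. The scene parameters, the intensity-based likelihood proxy, and the multinomial resampling below are illustrative simplifications of the paper's SAR signal model.

```python
import random

random.seed(0)

N_PIXELS, N_PARTICLES, N_FRAMES = 100, 500, 20
TRUE_V = 1.5          # true target velocity (pixels/frame), an assumption

def frame_intensity(t):
    """Noisy 1-D 'image': a weak target on a unit-variance noise floor."""
    target_pos = 10.0 + TRUE_V * t
    img = [random.gauss(0.0, 1.0) for _ in range(N_PIXELS)]
    img[int(target_pos)] += 2.0   # low-amplitude target (low SNR)
    return img

# Each particle: [position, velocity], initialized uniformly.
particles = [[random.uniform(0, N_PIXELS), random.uniform(-3, 3)]
             for _ in range(N_PARTICLES)]

for t in range(N_FRAMES):
    img = frame_intensity(t)
    # Predict: propagate each particle with its velocity plus process noise.
    for p in particles:
        p[0] = (p[0] + p[1] + random.gauss(0.0, 0.3)) % N_PIXELS
        p[1] += random.gauss(0.0, 0.05)
    # Weight by image intensity at the predicted pixel (a crude stand-in
    # for the likelihood ratio of the paper's signal model).
    weights = [max(img[int(p[0])], 1e-3) for p in particles]
    # Resample (simple multinomial resampling for brevity; systematic
    # resampling would be the usual choice).
    particles = [list(p) for p in
                 random.choices(particles, weights=weights, k=N_PARTICLES)]

est_pos = sum(p[0] for p in particles) / N_PARTICLES
est_vel = sum(p[1] for p in particles) / N_PARTICLES
```

After a few frames the particle cloud concentrates where intensity is persistently high along a kinematically consistent path, which is the TBD advantage over per-frame thresholding.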
Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study.
Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Takai, Yoshihiro; Yoshizawa, Makoto
2015-05-01
To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors' proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. 
In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using an LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor targets. Compared with previous tracking methods, the authors' algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery and dose evaluation.
Moving target detection in flash mode against stroboscopic mode by active range-gated laser imaging
NASA Astrophysics Data System (ADS)
Zhang, Xuanyu; Wang, Xinwei; Sun, Liang; Fan, Songtao; Lei, Pingshun; Zhou, Yan; Liu, Yuliang
2018-01-01
Moving target detection is important for target tracking and remote surveillance applications of active range-gated laser imaging. This technique has two operation modes, distinguished by the number of laser pulses per frame: stroboscopic mode, which accumulates multiple laser pulses per frame, and flash mode, which uses a single laser pulse per frame. In this paper, we have established a range-gated laser imaging system in which two types of lasers with different frequencies were chosen for the two modes. An electric fan and a horizontal sliding track were selected as the moving targets to compare the motion blurring between the two modes. The system working in flash mode shows better performance against motion blurring than the stroboscopic mode. Furthermore, based on experiments and theoretical analysis, we showed that images acquired in stroboscopic mode have a higher signal-to-noise ratio than those acquired in flash mode, in both indoor and underwater environments.
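The SNR trade-off between the two modes follows from pulse accumulation: averaging N pulse returns of a static scene leaves the signal unchanged while shrinking the noise standard deviation by roughly sqrt(N). A quick simulation, with made-up signal and noise levels:

```python
import math
import random

random.seed(1)

SIGNAL, SIGMA, N_PULSES, N_TRIALS = 10.0, 2.0, 16, 4000

def measure(n_pulses):
    """Average of n_pulses noisy returns from a static target."""
    return sum(SIGNAL + random.gauss(0.0, SIGMA)
               for _ in range(n_pulses)) / n_pulses

flash = [measure(1) for _ in range(N_TRIALS)]          # single-shot mode
strobe = [measure(N_PULSES) for _ in range(N_TRIALS)]  # accumulation mode

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

ratio = std(flash) / std(strobe)   # expected near sqrt(16) = 4
```

For a moving target, of course, accumulation smears the image, which is exactly the motion-blur penalty the experiment above compares.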
NASA Technical Reports Server (NTRS)
2006-01-01
[figure removed for brevity, see original site] Context image for PIA02185 A Dust Devil Playground Dust Devil activity in this region between Brashear and Ross Craters is very common. Large regions of dust devil tracks surround the south polar region of Mars. Image information: VIS instrument. Latitude -55.2N, Longitude 244.2E. 17 meter/pixel resolution. Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
Masuoka, E.; Rose, J.; Quattromani, M.
1981-01-01
Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.
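The per-region descriptive statistics such a system computes can be sketched in a few lines; the "image" below is a small synthetic grayscale array (row-major list of lists), an illustrative assumption:

```python
def roi_stats(image, top, left, height, width):
    """Mean, min, and max over a rectangular region of interest."""
    pixels = [image[r][c]
              for r in range(top, top + height)
              for c in range(left, left + width)]
    return {
        "mean": sum(pixels) / len(pixels),
        "min": min(pixels),
        "max": max(pixels),
    }

# 16x16 synthetic grayscale image with values in 0..255.
img = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]
stats = roi_stats(img, top=0, left=0, height=2, width=2)
```

On 1981-era hardware the same loop would have run over a 256-by-240 frame buffer, but the computation is identical.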
NASA Astrophysics Data System (ADS)
Linte, Cristian A.; Rettmann, Maryam E.; Dilger, Ben; Gunawan, Mia S.; Arunachalam, Shivaram P.; Holmes, David R., III; Packer, Douglas L.; Robb, Richard A.
2012-02-01
The novel prototype system for advanced visualization for image-guided left atrial ablation therapy developed in our laboratory permits ready integration of multiple imaging modalities, surgical instrument tracking, interventional devices and electro-physiologic data. This technology allows subject-specific procedure planning and guidance using 3D dynamic, patient-specific models of the patient's heart, augmented with real-time intracardiac echocardiography (ICE). In order for the 2D ICE images to provide intuitive visualization for accurate catheter to surgical target navigation, the transducer must be tracked, so that the acquired images can be appropriately presented with respect to the patient-specific anatomy. Here we present the implementation of a previously developed ultrasound calibration technique for a magnetically tracked ICE transducer, along with a series of evaluation methods to ensure accurate imaging and faithful representation of the imaged structures. Using an engineering-designed phantom, target localization accuracy is assessed by comparing known target locations with their transformed locations inferred from the tracked US images. In addition, the 3D volume reconstruction accuracy is also estimated by comparing a truth volume to that reconstructed from sequential 2D US images. Clinically emulating validation studies are conducted using a patient-specific left atrial phantom. Target localization error of clinically-relevant surgical targets represented by nylon fiducials implanted within the endocardial wall of the phantom was assessed. Our studies have demonstrated 2.4 ± 0.8 mm target localization error in the engineering-designed evaluation phantoms, 94.8% ± 4.6% volume reconstruction accuracy, and 3.1 ± 1.2 mm target localization error in the left atrial-mimicking phantom. 
These results are consistent with those disseminated in the literature and also with the accuracy constraints imposed by the employed technology and the clinical application.
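The target localization error used in this kind of evaluation is essentially the Euclidean distance between known fiducial positions and their tracked estimates, averaged over targets; the coordinates below are made up for illustration:

```python
import math

def localization_error(truth, estimated):
    """Mean and max 3-D Euclidean error between paired point lists (mm)."""
    dists = [math.dist(t, e) for t, e in zip(truth, estimated)]
    return sum(dists) / len(dists), max(dists)

# Hypothetical fiducial positions (mm) and their tracked estimates.
truth = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 5.0)]
estimated = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0), (0.0, 10.0, 3.0)]
mean_err, max_err = localization_error(truth, estimated)
```

Reporting the mean together with a spread (as the abstract's "2.4 ± 0.8 mm" does) requires the per-target distances, which `dists` already provides.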
Aghayee, Samira; Winkowski, Daniel E; Bowen, Zachary; Marshall, Erin E; Harrington, Matt J; Kanold, Patrick O; Losert, Wolfgang
2017-01-01
The application of 2-photon laser scanning microscopy (TPLSM) techniques to measure the dynamics of cellular calcium signals in populations of neurons is an extremely powerful technique for characterizing neural activity within the central nervous system. The use of TPLSM on awake and behaving subjects promises new insights into how neural circuit elements cooperatively interact to form sensory perceptions and generate behavior. A major challenge in imaging such preparations is unavoidable animal and tissue movement, which leads to shifts in the imaging location (jitter). The presence of image motion can lead to artifacts, especially since quantification of TPLSM images involves analysis of fluctuations in fluorescence intensities for each neuron, determined from small regions of interest (ROIs). Here, we validate a new motion correction approach to compensate for motion of TPLSM images in the superficial layers of auditory cortex of awake mice. We use a nominally uniform fluorescent signal as a secondary signal to complement the dynamic signals from genetically encoded calcium indicators. We tested motion correction for single plane time lapse imaging as well as multiplane (i.e., volume) time lapse imaging of cortical tissue. Our procedure of motion correction relies on locating the brightest neurons and tracking their positions over time using established techniques of particle finding and tracking. We show that our tracking based approach provides subpixel resolution without compromising speed. Unlike most established methods, our algorithm also captures deformations of the field of view and thus can compensate e.g., for rotations. Object tracking based motion correction thus offers an alternative approach for motion correction, one that is well suited for real time spike inference analysis and feedback control, and for correcting for tissue distortions. PMID:28860973
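Subpixel localization of bright neurons, the core of this tracking-based motion correction, is commonly done with an intensity-weighted centroid; a sketch on a synthetic frame (the frame contents are illustrative, not from the paper):

```python
def subpixel_peak(frame):
    """Intensity-weighted centroid over the frame, as (row, col) floats.

    Assumes a dark background, so the whole-frame centroid sits on the
    bright spot; real pipelines would first crop around the brightest pixel.
    """
    total = sum(v for row in frame for v in row)
    r_c = sum(r * v for r, row in enumerate(frame) for v in row)
    c_c = sum(c * v for row in frame for c, v in enumerate(row))
    return r_c / total, c_c / total

# A 5x5 frame with a bright spot centred between pixels (2, 2) and (2, 3).
frame = [[0] * 5 for _ in range(5)]
frame[2][2] = 10
frame[2][3] = 10
row, col = subpixel_peak(frame)
```

Tracking these centroids frame to frame yields the displacement field used to undo jitter, including rotations when several spots are tracked jointly.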
Space Images for NASA JPL Android Version
NASA Technical Reports Server (NTRS)
Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice
2013-01-01
This software addresses the demand for easily accessible NASA JPL images and videos by providing a user friendly and simple graphical user interface that can be run via the Android platform from any location where Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.
Multithreaded hybrid feature tracking for markerless augmented reality.
Lee, Taehee; Höllerer, Tobias
2009-01-01
We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
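The synchronized multi-threaded pipeline described here can be sketched as stages connected by bounded queues, so capture and tracking run concurrently; the frame source and the stand-in tracking work below are illustrative assumptions:

```python
import queue
import threading

frames_in = queue.Queue(maxsize=4)   # bounded buffer between stages
results = []

def capture(n_frames):
    """Capture stage: stand-in for grabbing video frames."""
    for i in range(n_frames):
        frames_in.put(i)
    frames_in.put(None)              # sentinel: end of stream

def track():
    """Tracking stage: stand-in for per-frame optical-flow tracking."""
    while True:
        frame = frames_in.get()
        if frame is None:
            break
        results.append(frame * frame)

t_cap = threading.Thread(target=capture, args=(5,))
t_trk = threading.Thread(target=track)
t_cap.start(); t_trk.start()
t_cap.join(); t_trk.join()
```

A real system would add further stages (invariant-feature detection, rendering) as additional threads with their own queues, which is how the paper keeps slow detection off the per-frame critical path.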
A vision-based approach for tramway rail extraction
NASA Astrophysics Data System (ADS)
Zwemer, Matthijs H.; van de Wouw, Dennis W. J. M.; Jaspers, Egbert; Zinger, Sveta; de With, Peter H. N.
2015-03-01
The growing traffic density in cities fuels the desire for collision assessment systems on public transportation. For this application, video analysis is broadly accepted as a cornerstone. For trams, the localization of tramway tracks is an essential ingredient of such a system, in order to estimate a safety margin for crossing traffic participants. Tramway-track detection is a challenging task due to the urban environment with clutter, sharp curves and occlusions of the track. In this paper, we present a novel and generic system to detect the tramway track in advance of the tram position. The system incorporates an inverse perspective mapping and a-priori geometry knowledge of the rails to find possible track segments. The contribution of this paper involves the creation of a new track reconstruction algorithm which is based on graph theory. To this end, we define track segments as vertices in a graph, in which edges represent feasible connections. This graph is then converted to a max-cost arborescence graph, and the best path is selected according to its location and additional temporal information based on a maximum a-posteriori estimate. The proposed system clearly outperforms a railway-track detector. Furthermore, the system performance is validated on 3,600 manually annotated frames. The obtained results are promising, where straight tracks are found in more than 90% of the images and complete curves are still detected in 35% of the cases.
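The graph-based reconstruction step can be illustrated as a maximum-cost path search over a DAG whose vertices are candidate track segments and whose edge weights score how plausibly two segments connect; this is a simplification of the paper's max-cost arborescence plus MAP selection, and the toy graph and weights below are assumptions:

```python
def best_track(edges, source, sink):
    """Max-cost path in a DAG given edges {(u, v): cost}.

    Returns (total_cost, path). Memoized DFS; assumes the graph is acyclic,
    as segment graphs ordered by distance along the track are.
    """
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
    memo = {}

    def go(u):
        if u == sink:
            return 0.0, [sink]
        if u in memo:
            return memo[u]
        best = (float("-inf"), None)
        for v, w in adj.get(u, []):
            cost, path = go(v)
            if path is not None and w + cost > best[0]:
                best = (w + cost, [u] + path)
        memo[u] = best
        return best

    return go(source)

# Toy segment graph: two candidate connections from start to end.
edges = {("s", "a"): 2.0, ("s", "b"): 1.0, ("a", "t"): 1.0, ("b", "t"): 3.5}
cost, path = best_track(edges, "s", "t")
```

In the paper the per-edge scores would additionally fold in temporal consistency with the track found in previous frames.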
Heo, Dan; Lee, Chanjoo; Ku, Minhee; Haam, Seungjoo; Suh, Jin-Suck; Huh, Yong-Min; Park, Sahng Wook; Yang, Jaemoon
2015-08-21
The specific delivery of ribonucleic acid (RNA) interfering molecules to disease-related cells is still a critical blockade for in vivo systemic treatment. Here, this study suggests a robust delivery carrier for targeted delivery of RNA-interfering molecules using galactosylated magnetic nanovectors (gMNVs). gMNVs are an organic-inorganic polymeric nanomaterial composed of polycationics and magnetic nanocrystal for delivery of RNA-interfering molecules and tracking via magnetic resonance (MR) imaging. In particular, the surface of gMNVs was modified by galactosylgluconic groups for targeted delivering to asialoglycoprotein receptor (ASGPR) of hepatocytes. Moreover, the small interfering RNAs were used to regulate target proteins related with low-density lipoprotein level and in vivo MR imaging was conducted for tracking of nanovectors. The obtained results show that the prepared gMNVs demonstrate potential as a systemic theragnostic nanoplatform for RNA interference and MR imaging.
NASA Astrophysics Data System (ADS)
Andreozzi, Jacqueline M.; Zhang, Rongxiao; Glaser, Adam K.; Gladstone, David J.; Jarvis, Lesley A.; Pogue, Brian W.
2016-03-01
External beam radiotherapy utilizes high energy radiation to target cancer with dynamic, patient-specific treatment plans. The otherwise invisible radiation beam can be observed via the optical Cherenkov photons emitted from interaction between the high energy beam and tissue. Using a specialized camera-system, the Cherenkov emission can thus be used to track the radiation beam on the surface of the patient in real-time, even for complex cases such as volumetric modulated arc therapy (VMAT). Two patients undergoing VMAT of the head and neck were imaged and analyzed, and the viability of the system to provide clinical feedback was established.
Optical neural network system for pose determination of spinning satellites
NASA Technical Reports Server (NTRS)
Lee, Andrew; Casasent, David
1990-01-01
An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.
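The core optimization, minimizing a quadratic energy of neural activities by gradient descent, can be sketched on a tiny two-neuron example; the matrix Q, bias b, step size, and iteration count are illustrative assumptions, not the tracker's actual energy:

```python
# Minimize E(x) = 0.5 * x'Qx - b'x with activities clipped to [0, 1],
# mimicking Hopfield-style neural evolution by gradient descent.

Q = [[2.0, 0.5],
     [0.5, 2.0]]
b = [1.0, 2.0]

def grad(x):
    # dE/dx_i = sum_j Q[i][j] * x[j] - b[i]
    return [sum(Q[i][j] * x[j] for j in range(len(x))) - b[i]
            for i in range(len(x))]

x = [0.0, 0.0]
eta = 0.1                       # step size (assumption)
for _ in range(200):
    g = grad(x)
    x = [min(1.0, max(0.0, xi - eta * gi)) for xi, gi in zip(x, g)]

energy = 0.5 * sum(x[i] * Q[i][j] * x[j]
                   for i in range(2) for j in range(2)) \
         - sum(b[i] * x[i] for i in range(2))
```

In the tracker each activity would represent one candidate track, and Q would encode the mutual-exclusion and smoothness constraints between tracks.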
Spatially assisted down-track median filter for GPR image post-processing
Paglieroni, David W; Beer, N Reginald
2014-10-07
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
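The final detection step, identifying peaks in the energy of the post-processed frame, reduces in one dimension to picking local maxima above a threshold; the energy profile and threshold below are illustrative:

```python
def find_peaks(energy, threshold):
    """Indices of strict local maxima that exceed the threshold."""
    return [i for i in range(1, len(energy) - 1)
            if energy[i] > threshold
            and energy[i] > energy[i - 1]
            and energy[i] > energy[i + 1]]

# Hypothetical down-track energy profile with two buried objects.
energy = [0.1, 0.2, 0.9, 0.3, 0.2, 0.8, 1.5, 0.7, 0.1]
peaks = find_peaks(energy, threshold=0.5)
```

A 2-D image version applies the same comparison over a pixel neighborhood instead of two neighbors.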
Optical bullet-tracking algorithms for weapon localization in urban environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, R S; Breitfeller, E F
2006-03-31
Localization of the sources of small-arms fire, mortars, and rocket propelled grenades is an important problem in urban combat. Weapons of this type produce characteristic signatures, such as muzzle flashes, that are visible in the infrared. Indeed, several systems have been developed that exploit the infrared signature of muzzle flash to locate the positions of shooters. However, systems based on muzzle flash alone can have difficulty localizing weapons if the muzzle flash is obscured or suppressed. Moreover, optical clutter can be problematic to systems that rely on muzzle flash alone. Lawrence Livermore National Laboratory (LLNL) has developed a projectile tracking system that detects and localizes sources of small-arms fire, mortars and similar weapons using the thermal signature of the projectile rather than a muzzle flash. The thermal signature of a projectile, caused by friction as the projectile travels along its trajectory, cannot be concealed and is easily discriminated from optical clutter. The LLNL system was recently demonstrated at the MOUT facility of the Aberdeen Test Center [1]. In the live-fire demonstration, shooters armed with a variety of small-arms, including M-16s, AK-47s, handguns, mortars and rockets, were arranged at several positions in and around the facility. Experiments ranged from a single weapon firing a single shot to simultaneous fire of all weapons on full automatic. The LLNL projectile tracking system was demonstrated to localize multiple shooters at ranges up to 400m, far greater than previous demonstrations. Furthermore, the system was shown to be immune to optical clutter that is typical in urban combat. This paper describes the image processing and localization algorithms designed to exploit the thermal signature of projectiles for shooter localization. The paper begins with a description of the image processing that extracts projectile information from a sequence of infrared images.
Key to the processing is an adaptive spatio-temporal filter developed to suppress scene clutter. The filtered image sequence is further processed to produce a set of parameterized regions, which are classified using several discriminant functions. Regions that are classified as projectiles are passed to a data association algorithm that matches features from these regions with existing tracks, or initializes new tracks as needed. A Kalman filter is used to smooth and extrapolate existing tracks. Shooter locations are determined by a combinatorial least-squares solution over all bullet tracks. The solution also provides an error ellipse for each shooter, quantifying the uncertainty of shooter location. The paper concludes with examples from the live-fire exercise at the Aberdeen Test Center.
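The least-squares shooter localization can be illustrated by finding the point closest, in the least-squares sense, to a set of 2-D bullet-track lines (each given by a point on the track and a unit direction); the synthetic tracks below are chosen to intersect near (5, 5):

```python
import math

def closest_point_to_lines(lines):
    """Least-squares intersection of 2-D lines [(point, unit_direction), ...].

    Solves  sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) a_i,
    i.e. minimizes the sum of squared distances from x to each line.
    """
    m = [[0.0, 0.0], [0.0, 0.0]]
    rhs = [0.0, 0.0]
    for (ax, ay), (dx, dy) in lines:
        # Projection onto the line's normal space: I - d d^T
        p = [[1 - dx * dx, -dx * dy], [-dx * dy, 1 - dy * dy]]
        m[0][0] += p[0][0]; m[0][1] += p[0][1]
        m[1][0] += p[1][0]; m[1][1] += p[1][1]
        rhs[0] += p[0][0] * ax + p[0][1] * ay
        rhs[1] += p[1][0] * ax + p[1][1] * ay
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return ((m[1][1] * rhs[0] - m[0][1] * rhs[1]) / det,
            (m[0][0] * rhs[1] - m[1][0] * rhs[0]) / det)

s = math.sqrt(0.5)
lines = [((0.0, 0.0), (s, s)),      # track heading up-right through origin
         ((10.0, 0.0), (-s, s))]    # track heading up-left from (10, 0)
shooter = closest_point_to_lines(lines)
```

With noisy 3-D tracks the same normal-equation structure applies in three coordinates, and the residual covariance yields the error ellipse the paper mentions.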
Development of a Remotely Operated Autonomous Satellite Tracking System
2010-03-01
ability of Commercial-Off-The-Shelf (COTS) optical observation equipment to track and image Low Earth Orbiting (LEO) satellites. Using radar data in... SOR operates one of the world’s premier adaptive-optics telescopes capable of tracking low-earth orbiting satellites. The telescope has a 3.5-meter... student) published his thesis Initial Determination of Low Earth Orbits Using Commercial Telescopes. According to this document’s Problem Statement
Identifying Roads and Trails Under Canopy Using Lidar
2007-09-01
imaging systems to detect, track and locate operations in these dense canopy environments is severely limited. One possibility for “seeing through”... in detecting, tracking and locating illicit operations previously undetectable. The purpose of this thesis is to determine if roads and trails are
Computer analysis of arteriograms
NASA Technical Reports Server (NTRS)
Selzer, R. H.; Armstrong, J. H.; Beckenbach, E. B.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.; Sanmarco, M. E.
1977-01-01
A computer system has been developed to quantify the degree of atherosclerosis in the human femoral artery. The analysis involves first scanning and digitizing angiographic film, then tracking the outline of the arterial image and finally computing the relative amount of roughness or irregularity in the vessel wall. The image processing system and method are described.
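A roughness measure of the kind described, deviation of the tracked wall outline from a smoothed version of itself, can be sketched as an RMS residual against a moving average; the edge profiles and window size below are illustrative assumptions:

```python
import math

def roughness(edge, window=3):
    """RMS deviation of an edge profile from its moving average."""
    half = window // 2
    smooth = []
    for i in range(len(edge)):
        seg = edge[max(0, i - half):i + half + 1]
        smooth.append(sum(seg) / len(seg))
    return math.sqrt(sum((e - s) ** 2
                         for e, s in zip(edge, smooth)) / len(edge))

# Hypothetical tracked vessel-edge profiles (distance from centerline, px).
smooth_edge = [10.0] * 8
irregular_edge = [10.0, 11.0, 9.0, 12.0, 8.0, 11.5, 9.5, 10.0]
r_smooth = roughness(smooth_edge)
r_rough = roughness(irregular_edge)
```

A perfectly smooth wall scores zero, and atherosclerotic irregularity raises the score, which is the kind of quantification the system computes.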
Neural net target-tracking system using structured laser patterns
NASA Astrophysics Data System (ADS)
Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun
1996-06-01
In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. The neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match them with unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples are used for different directions of the robot end-effector in the system, an artificial neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot end-effector. Combining the two neural networks for recognizing the robot end-effector and estimating the motion with the preprocessing stage, the whole system keeps track of the robot end-effector effectively.
Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaoyong, E-mail: xiaoyong@ieee.org; Homma, Noriyasu, E-mail: homma@ieee.org; Ichiji, Kei, E-mail: ichiji@yoshizawa.ecei.tohoku.ac.jp
2015-05-15
Purpose: To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. Methods: A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. Results: For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors’ proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization.
Conclusions: In this paper, the authors presented a feasibility study of tracking the tumor boundary in EPID images by using an LSM-based algorithm. Experimental results on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor targets. Compared with previous tracking methods, the authors’ algorithm has the potential to improve tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery and dose evaluation.
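The two evaluation metrics above can be sketched directly. Below is a minimal NumPy sketch assuming binary masks for the tracked result and the ground truth; the function names are our own, and VOI is taken here as a Dice-style overlap, which may differ in detail from the authors' exact definition.

```python
import numpy as np

def centroid_localization_error(mask_a, mask_b, pixel_mm=1.0):
    """Distance (in mm) between the centroids of two binary masks."""
    ca = np.array(np.nonzero(mask_a)).mean(axis=1)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1)
    return float(np.linalg.norm(ca - cb) * pixel_mm)

def volume_overlap_index(mask_a, mask_b):
    """Dice-style overlap between tracked result and ground truth (0..1)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```

Identical masks give CLE 0 and VOI 1; a one-pixel shift of a small square yields a CLE of exactly one pixel.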
Deshmukh, Nishikant P; Kang, Hyun Jae; Billings, Seth D; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M
2014-01-01
A system for real-time ultrasound (US) elastography will advance interventions for the diagnosis and treatment of cancer by advancing methods such as thermal monitoring of tissue ablation. A multi-stream graphics processing unit (GPU) based accelerated normalized cross-correlation (NCC) elastography, with a maximum frame rate of 78 frames per second, is presented in this paper. A study of NCC window size is undertaken to determine the effect on frame rate and the quality of output elastography images. This paper also presents a novel system for Online Tracked Ultrasound Elastography (O-TRuE), which extends prior work on an offline method. By tracking the US probe with an electromagnetic (EM) tracker, the system selects in-plane radio frequency (RF) data frames for generating high quality elastograms. A novel method for evaluating the quality of an elastography output stream is presented, suggesting that O-TRuE generates more stable elastograms than those generated by untracked, free-hand palpation. Since EM tracking cannot be used in all systems, an integration of real-time elastography and the da Vinci Surgical System is presented and evaluated for elastography stream quality based on our metric. The da Vinci surgical robot is outfitted with a laparoscopic US probe, and palpation motions are autonomously generated by customized software. It is found that a stable output stream can be achieved, which is affected by both the frequency and amplitude of palpation. The GPU framework is validated using data from in-vivo pig liver ablation; the generated elastography images identify the ablated region, outlined more clearly than in the corresponding B-mode US images. PMID:25541954
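The core of NCC elastography is finding, for each RF window, the axial lag that maximizes normalized cross-correlation between pre- and post-compression signals. A minimal 1-D sketch with our own function names (the GPU version parallelizes this search over many windows and frames):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def ncc_displacement(pre, post, start, win, max_lag):
    """Axial shift (in samples) of one RF window, by exhaustive NCC search."""
    ref = pre[start:start + win]
    best_lag, best_rho = 0, -2.0
    for lag in range(-max_lag, max_lag + 1):
        s = start + lag
        if s < 0 or s + win > len(post):
            continue
        rho = ncc(ref, post[s:s + win])
        if rho > best_rho:
            best_rho, best_lag = rho, lag
    return best_lag
```

Subpixel refinement (e.g. parabolic peak interpolation) and 2-D windows are the usual next steps; they are omitted here.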
Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots
Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko
2012-01-01
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data are broadcast to the other robots through intercommunication and combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can recognize pedestrians that are invisible to them but tracked by other robots. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171
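The covariance intersection fusion used above combines two track estimates whose cross-correlation is unknown: the fused information is a convex combination, P⁻¹ = ωP₁⁻¹ + (1−ω)P₂⁻¹, with ω commonly chosen to minimize the trace of the fused covariance. A minimal sketch (grid search over ω; function name is our own):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Consistent fusion of two estimates with unknown cross-correlation.
    omega is picked by grid search to minimize trace of the fused covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

For two complementary diagonal covariances, diag(1, 4) and diag(4, 1), the optimum lands at ω = 0.5 and the fused covariance is diag(1.6, 1.6).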
A preliminary experiment definition for video landmark acquisition and tracking
NASA Technical Reports Server (NTRS)
Schappell, R. T.; Tietz, J. C.; Hulstrom, R. L.; Cunningham, R. A.; Reel, G. M.
1976-01-01
Six scientific objectives/experiments were derived which consisted of agriculture/forestry/range resources, land use, geology/mineral resources, water resources, marine resources and environmental surveys. Computer calculations were then made of the spectral radiance signature of each of 25 candidate targets as seen by a satellite sensor system. An imaging system capable of recognizing, acquiring and tracking specific generic type surface features was defined. A preliminary experiment definition and design of a video Landmark Acquisition and Tracking system is given. This device will search a 10-mile swath while orbiting the earth, looking for land/water interfaces such as coastlines and rivers.
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in video sequences received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.
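The first two parts of such a set (compensating the sensor's ego-motion, then detecting movers) can be illustrated under a pure-translation camera model: estimate the global shift by phase correlation, warp the previous frame, and threshold the frame difference. This is only a sketch under that translational assumption; the paper's algorithms handle more general geometric transformations.

```python
import numpy as np

def global_shift(prev, curr):
    """Integer camera-motion estimate via phase correlation."""
    F = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    if dy > prev.shape[0] // 2:   # map peak location to signed shifts
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return int(dy), int(dx)

def moving_mask(prev, curr, thresh=0.5):
    """Compensate global motion, then frame-difference to flag movers."""
    dy, dx = global_shift(prev, curr)
    comp = np.roll(prev, (-dy, -dx), axis=(0, 1))
    return np.abs(curr - comp) > thresh
```

On a textured background shifted by a few pixels with one bright patch added, the mask flags only the patch.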
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-08
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROIs). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with a satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour.
These results suggest performing an image-quality assessment before segmentation and combining different methods to achieve optimal segmentation with the on-board MR-IGRT system.
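The fastest method in the comparison, intensity thresholding, can be sketched as a binary threshold followed by keeping the largest connected component (a common post-processing step; the exact pipeline here is our assumption), with the Dice coefficient as the evaluation metric:

```python
import numpy as np
from scipy import ndimage

def threshold_segment(img, thresh):
    """Binary threshold, then keep only the largest connected component."""
    mask = img > thresh
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

def dice(a, b):
    """Dice coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

A bright organ-like block is recovered exactly while a single bright speck elsewhere is discarded as a smaller component.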
Xu, Tong; Ducote, Justin L.; Wong, Jerry T.; Molloi, Sabee
2011-01-01
Dual-energy chest radiography has the potential to provide better diagnosis of lung disease by removing the bone signal from the image. Dynamic dual-energy radiography is now possible with the introduction of digital flat-panel detectors. The purpose of this study is to evaluate the feasibility of using dynamic dual-energy chest radiography for functional lung imaging and tumor motion assessment. The dual-energy system used in this study can acquire up to 15 frames of dual-energy images per second. A swine animal model was mechanically ventilated and imaged using the dual-energy system. Sequences of soft-tissue images were obtained using dual-energy subtraction. Time-subtracted soft-tissue images were shown to be able to provide information on regional ventilation. Motion tracking of a lung anatomic feature (a branch of pulmonary artery) was performed based on an image cross-correlation algorithm. The tracking precision was found to be better than 1 mm. An adaptive correlation model was established between the above tracked motion and an external surrogate signal (temperature within the tracheal tube). This model is used to predict lung feature motion using the continuous surrogate signal and low frame rate dual-energy images (0.1 to 3.0 frames per second). The average RMS error of the prediction was (1.1 ± 0.3) mm. Dynamic dual-energy imaging was shown to be potentially useful for functional lung imaging such as regional ventilation and kinetic studies. It can also be used for lung tumor motion assessment and prediction during radiation therapy. PMID:21285477
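Dual-energy soft-tissue imaging rests on weighted logarithmic subtraction: S = ln I_high − w·ln I_low, where choosing w = μ_bone(high)/μ_bone(low) cancels the bone signal. A toy sketch with hypothetical attenuation coefficients (the values below are illustrative, not measured):

```python
import numpy as np

# toy attenuation coefficients per cm (hypothetical values for illustration)
MU = {"soft": {"low": 0.25, "high": 0.20},
      "bone": {"low": 0.90, "high": 0.45}}

def attenuate(t_soft, t_bone, energy):
    """Beer-Lambert transmission through soft tissue + bone thickness maps."""
    return np.exp(-(MU["soft"][energy] * t_soft + MU["bone"][energy] * t_bone))

def dual_energy_soft_tissue(i_low, i_high, w):
    """Weighted log subtraction; w = mu_bone(high)/mu_bone(low) cancels bone."""
    return np.log(i_high) - w * np.log(i_low)
```

With a uniform soft-tissue layer and a bone block, the subtracted image is flat: the bone term drops out exactly.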
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, SK; Armour, E; Su, L
Purpose: Ultrasound tracking of target motion relies on the visibility of vascular and/or anatomical landmarks. However, this is challenging when the target is located far from vascular structures or in organs that lack ultrasound landmark structure, as in the case of pancreas cancer. The purpose of this study is to evaluate the visibility, artifacts and distortions of fusion coils and solid gold markers in ultrasound, CT, CBCT and kV images, to identify markers suitable for real-time ultrasound tracking of tumor motion in SBRT pancreas treatment. Methods: Two fusion coils (1 mm × 5 mm and 1 mm × 10 mm) and a solid gold marker (0.8 mm × 10 mm) were embedded in a tissue-like ultrasound phantom. The phantom (5 cm × 12 cm × 20 cm) was prepared using water, gelatin and psyllium-hydrophilic-mucilloid fiber. Psyllium-hydrophilic mucilloid acts as a scattering medium to produce echo texture that simulates the sonographic appearance of human tissue in ultrasound images while maintaining an electron density close to that of water in CT images. Ultrasound images were acquired using a 3D-ultrasound system with markers embedded at 5, 10 and 15 mm depth from the phantom surface. CT images were acquired using a Philips Big Bore CT, while CBCT and kV images were acquired with the XVI system (Elekta). Visual analysis was performed to compare the visibility of the markers, and visibility scores (1 to 3) were assigned. Results: All markers embedded at various depths are clearly visible (score of 3) in ultrasound images. Good visibility of all markers is observed in CT, CBCT and kV images. The degree of artifact produced by the markers in CT and CBCT images is indistinguishable. No distortion is observed in images from any modality. Conclusion: All markers are visible in images across all modalities in this homogeneous tissue-like phantom. Human subject data are necessary to confirm the marker type suitable for real-time ultrasound tracking of tumor motion in SBRT pancreas treatment.
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery
Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng
2016-01-01
Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
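The DCT half of the hybrid descriptor can be sketched as taking the low-frequency block of the 2-D DCT of a contrast-normalized blob patch. The patch size, coefficient count, and normalization below are our own illustrative choices, not the paper's exact parameters:

```python
import numpy as np
from scipy.fft import dct

def dct_descriptor(patch, k=8):
    """2-D DCT of a contrast-normalized patch; keep the k x k low-frequency block."""
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    coeffs = dct(dct(p, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()
```

The mean/std normalization makes the descriptor invariant to affine intensity changes, which helps under the gain drift typical of thermal imagery; in the paper this vector would be concatenated with HOG features before the SVM.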
Dunkerley, David A. P.; Slagowski, Jordan M.; Funk, Tobias; Speidel, Michael A.
2017-01-01
Abstract. Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3-D catheter tracking. This work proposes a method of dose-reduced 3-D catheter tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. This is achieved through the selective deactivation of focal spot positions not needed for the catheter tracking task. The technique was retrospectively evaluated with SBDX detector data recorded during a phantom study. DEC imaging of a catheter tip at isocenter required 340 active focal spots per frame versus 4473 spots in full field-of-view (FOV) mode. The dose-area product (DAP) and peak skin dose (PSD) for DEC versus full FOV scanning were calculated using an SBDX Monte Carlo simulation code. The average DAP was reduced to 7.8% of the full FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full FOV value. The root-mean-squared-deviation between DEC-based 3-D tracking coordinates and full FOV 3-D tracking coordinates was less than 0.1 mm. The 3-D distance between the tracked tip and the sheath centerline averaged 0.75 mm. DEC is a feasible method for dose reduction during SBDX 3-D catheter tracking. PMID:28439521
Calibration of 3D ultrasound to an electromagnetic tracking system
NASA Astrophysics Data System (ADS)
Lang, Andrew; Parthasarathy, Vijay; Jain, Ameet
2011-03-01
The use of electromagnetic (EM) tracking is an important guidance tool that can be used to aid procedures requiring accurate localization such as needle injections or catheter guidance. Using EM tracking, the information from different modalities can be easily combined using pre-procedural calibration information. These calibrations are performed individually, per modality, allowing different imaging systems to be mixed and matched according to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography probe to EM tracking is developed. The complete calibration framework includes three required steps: data acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM-tracked needle must be acquired, with the positions of the needles in each volume subsequently extracted by segmentation. The calibration transformation is determined through a registration between the segmented points and the recorded EM needle positions. Additionally, the speed of sound is compensated for, since calibration is performed in water, which has a different speed than is assumed by the US machine. A statistical validation framework has also been developed to provide further information related to the accuracy and consistency of the calibration. Further validation of the calibration showed an accuracy of 1.39 mm.
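The registration between segmented needle points and recorded EM positions can be computed with the standard closed-form least-squares rigid registration (Arun/Kabsch SVD); that the authors used exactly this solver is our assumption.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t.
    src, dst: (N, 3) corresponding point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Given at least three non-collinear correspondences, the known transform is recovered exactly in the noise-free case.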
NASA Astrophysics Data System (ADS)
Wang, Long-tao; Jiang, Ning; Lv, Ming-shan
2015-10-01
With the emergence of anti-ship missiles with infrared imaging guidance, traditional single jamming measures, because of flaws in their jamming mechanism, technical limitations, or unsuitable use, greatly reduce the survival probability of a warship in a future naval battle. Integrated jamming combining infrared weakening with smoke screens can not only jam the search and tracking of an infrared imaging guidance system but is also feasible to deploy in combination, and it produces the best jamming effect. This conclusion has important practical meaning for raising the antimissile capability of surface ships. With the development of guidance technology, infrared guidance has progressed from point-source homing to infrared imaging guidance. Infrared imaging guidance can use two-dimensional infrared image information of the target to achieve precise tracking; it offers higher guidance precision, better concealment, and stronger anti-interference ability, and it can target the key parts of a ship. Traditional single infrared smoke-screen jamming or infrared decoy flares cannot impose effective interference. Therefore, researching measures and means to effectively counter the threat of infrared imaging guided weapons, and thereby improving the antimissile ability of surface ships, is an urgent problem.
Modular multiple sensors information management for computer-integrated surgery.
Vaccarella, Alberto; Enquobahrie, Andinet; Ferrigno, Giancarlo; Momi, Elena De
2012-09-01
In the past 20 years, technological advancements have modified the concept of modern operating rooms (ORs) with the introduction of computer-integrated surgery (CIS) systems, which promise to enhance the outcomes, safety and standardization of surgical procedures. With CIS, different types of sensor (mainly position-sensing devices, force sensors and intra-operative imaging devices) are widely used. Recently, the need for a combined use of different sensors raised issues related to synchronization and spatial consistency of data from different sources of information. In this study, we propose a centralized, multi-sensor management software architecture for a distributed CIS system, which addresses sensor information consistency in both space and time. The software was developed as a data server module in a client-server architecture, using two open-source software libraries: Image-Guided Surgery Toolkit (IGSTK) and OpenCV. The ROBOCAST project (FP7 ICT 215190), which aims at integrating robotic and navigation devices and technologies in order to improve the outcome of the surgical intervention, was used as the benchmark. An experimental protocol was designed in order to prove the feasibility of a centralized module for data acquisition and to test the application latency when dealing with optical and electromagnetic tracking systems and ultrasound (US) imaging devices. Our results show that a centralized approach is suitable for minimizing synchronization errors; latency in the client-server communication was estimated to be 2 ms (median value) for tracking systems and 40 ms (median value) for US images. The proposed centralized approach proved to be adequate for neurosurgery requirements. Latency introduced by the proposed architecture does not affect tracking system performance in terms of frame rate and limits US images frame rate at 25 fps, which is acceptable for providing visual feedback to the surgeon in the OR. Copyright © 2012 John Wiley & Sons, Ltd.
Object tracking using plenoptic image sequences
NASA Astrophysics Data System (ADS)
Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung
2017-05-01
Object tracking is a very important problem in computer vision research. Among the difficulties of object tracking, the partial occlusion problem is one of the most serious and challenging. To address the problem, we propose novel approaches to object tracking on plenoptic image sequences. Our approaches take advantage of the refocusing capability that plenoptic images provide: they take as input sequences of focal stacks constructed from plenoptic image sequences. The proposed image selection algorithms select, from the sequence of focal stacks, the sequence of optimal images that maximizes tracking accuracy. A focus measure approach and a confidence measure approach were proposed for image selection, and both were validated by experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches were satisfactory compared to conventional 2D object tracking algorithms.
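A focus-measure approach to image selection picks, per frame, the focal-stack slice where the target region is sharpest; a common focus measure is the variance of the Laplacian. A minimal sketch (the ROI convention and function names are our own, not the paper's):

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour Laplacian (higher = sharper)."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def select_sharpest(focal_stack, roi):
    """Index of the focal-stack slice whose ROI is sharpest."""
    y0, y1, x0, x1 = roi
    return int(np.argmax([laplacian_variance(s[y0:y1, x0:x1])
                          for s in focal_stack]))
```

Blurring suppresses high-frequency content, so a blurred copy of a slice always scores below the in-focus one.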
Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru
2011-06-01
Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy to kidney tumours is currently very difficult, due to the unclearly observed tumour area and renal motion induced by human respiration. In this research, we propose new methods by which to track the indistinct tumour area and to compensate for the respiratory tumour motion in US-guided HIFU treatment. For tracking indistinct tumour areas, we detect the US speckle change created by HIFU irradiation. In other words, HIFU thermal ablation can coagulate tissue in the tumour area, and an intraoperatively created coagulated lesion (CL) is used as a spatial landmark for US visual tracking. Specifically, the condensation algorithm was applied for robust, real-time CL speckle-pattern tracking in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method by using a newly devised phantom model that enables both visual tracking and a thermal response to HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled by the captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, the RMS error of which was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. By using the CL tracking method, target motion compensation can be realized in the US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.
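The condensation (particle filter) cycle behind such tracking is resample → drift/diffuse → re-weight by an observation likelihood. A minimal 1-D sketch; the real tracker weights particles by speckle-pattern similarity in the US image, so the Gaussian likelihood below is purely illustrative:

```python
import numpy as np

def condensation_step(particles, weights, observe, motion_std, rng):
    """One CONDENSATION cycle: resample -> diffuse -> re-weight."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(0.0, motion_std, size=len(particles))
    weights = np.array([observe(p) for p in particles])
    weights = weights / weights.sum()
    return particles, weights
```

Starting from particles spread uniformly over the search range, a few iterations concentrate the weighted mean on the target position.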
NASA Astrophysics Data System (ADS)
Cheong, M. K.; Bahiki, M. R.; Azrad, S.
2016-10-01
The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of the Optitrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms based on image sensors. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance of the UAV was better with obstacles with dull surfaces than with shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for short-range collision avoidance.
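Depth recovery from the tracked laser spot follows the common triangulation relation Z = f·B/d, with f the focal length in pixels, B the stereo baseline, and d the horizontal pixel disparity of the spot between the two cameras. A minimal sketch (function names are our own):

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth (m) from horizontal pixel disparity of the tracked laser spot."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("spot must have positive disparity")
    return focal_px * baseline_m / disparity

def depth_to_disparity(z_m, focal_px, baseline_m):
    """Inverse relation: expected disparity at a given depth."""
    return focal_px * baseline_m / z_m
```

For example, with a 1000 px focal length and an 8 cm baseline, a 20 px disparity corresponds to a 4 m obstacle distance; disparity grows as the obstacle approaches, which is why short range favours this method.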
Yang, Haw; Welsher, Kevin
2016-11-15
A system and method for non-invasively tracking a particle in a sample is disclosed. The system includes a 2-photon or confocal laser scanning microscope (LSM) and a particle-holding device coupled to a stage with X-Y and Z position control. The system also includes a tracking module having a tracking excitation laser and X-Y and Z radiation-gathering components configured to detect deviations of the particle in the X-Y and Z directions. The system also includes a processor coupled to the X-Y and Z radiation-gathering components, configured to generate control signals that drive the stage X-Y and Z position controls to track the movement of the particle. The system may also include a synchronization module configured to generate LSM pixels stamped with stage position, and a processing module configured to generate a 3D image showing the 3D trajectory of a particle using the LSM pixels stamped with stage position.
Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2014-01-01
Stereovision is an important intraoperative imaging technique that captures the exposed parenchymal surface noninvasively during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the reconstructed cortical surface immediately after dural opening in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained following spatial mapping inversion to produce the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases – 10 of which involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and 8 others, 4 of which involved stereo image acquisitions at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7–2.1 mm relative to those determined with the tracked stylus probe.
The agreement in feature displacement tracking was also comparable to tracked probe data (difference in displacement magnitude was <1 mm on average). The average magnitude of cortical surface displacement was 7.9 ± 5.7 mm (range 0.3–24.4 mm) in all patient cases, with the displacement component along gravity being 5.2 ± 6.0 mm relative to the lateral movement of 2.4 ± 1.6 mm. Thus, our technique appears to be sufficiently accurate and computationally efficient (typically ~15 s) for applications in the OR. PMID:25077845
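The core of the registration step described above is dense motion estimation between two projection images. As an illustration only (not the authors' pipeline), a single-window Lucas-Kanade sketch in NumPy recovers a global sub-pixel shift between two frames; the synthetic Gaussian test image and all names are assumptions:

```python
import numpy as np

def lk_translation(i1, i2):
    """Estimate a single (dx, dy) translation between two images by
    solving the Lucas-Kanade normal equations over the whole frame."""
    iy, ix = np.gradient(i1)          # spatial gradients (axis 0 = y)
    it = i2 - i1                      # temporal difference
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)      # (dx, dy) in pixels

# synthetic test: a smooth Gaussian bump shifted by a known sub-pixel amount
y, x = np.mgrid[0:64, 0:64].astype(float)
bump = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
dx, dy = lk_translation(bump(32, 32), bump(32.3, 32.2))
```

A dense optical-flow field, as used in the paper, applies the same normal equations per pixel neighborhood rather than once globally.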
Automatic tracking of labeled red blood cells in microchannels.
Pinho, Diana; Lima, Rui; Pereira, Ana I; Gayubo, Fernando
2013-09-01
The current study proposes an automatic method for the segmentation and tracking of red blood cells flowing through a 100-μm glass capillary. The original images were obtained by means of a confocal system and then processed in MATLAB using the Image Processing Toolbox. The measurements obtained with the proposed automatic method were compared with the results determined by a manual tracking method. The comparison was performed by using both linear regressions and Bland-Altman analysis. The results have shown a good agreement between the two methods. Therefore, the proposed automatic method is a powerful way to provide rapid and accurate measurements for in vitro blood experiments in microchannels. Copyright © 2012 John Wiley & Sons, Ltd.
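The automatic-vs-manual comparison above relies on Bland-Altman analysis. A minimal sketch of that statistic, with made-up paired measurements (not data from the study):

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias (mean difference) and 95% limits of agreement
    between two paired measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# paired automatic vs. manual velocity measurements (invented numbers)
auto   = [1.02, 0.98, 1.10, 1.05, 0.95, 1.00]
manual = [1.00, 1.00, 1.08, 1.07, 0.93, 1.01]
bias, (lo, hi) = bland_altman(auto, manual)
```

Good agreement corresponds to a bias near zero and narrow limits that bracket it.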
NASA Astrophysics Data System (ADS)
Li, Xiaoliang; Luo, Lei; Li, Pengwei; Yu, Qingkui
2018-03-01
The image sensor in a satellite optical communication system may generate noise due to space irradiation damage, leading to deviations in the determination of the light spot centroid. Based on irradiation test data of CMOS devices, simulated defect spots of different sizes were used to calculate the centroid deviation with the grey-level centroid algorithm, and the impact on the tracking and pointing accuracy of the system was analyzed. The results show that both the number and the positions of irradiation-induced defect pixels contribute to spot centroid deviation, and that larger spots show smaller deviations. Finally, considering space radiation damage, suggestions are made for constraints on spot size selection.
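The grey-level centroid algorithm, and the deviation a radiation-induced hot pixel introduces, can be sketched as follows; the defect location and amplitude are invented for illustration:

```python
import numpy as np

def grey_centroid(img):
    """Grey-level (intensity-weighted) centroid of a spot image."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return np.array([(xx * img).sum() / s, (yy * img).sum() / s])

# symmetric Gaussian spot centred exactly at (16, 16)
y, x = np.mgrid[0:33, 0:33].astype(float)
spot = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 18.0)
c_clean = grey_centroid(spot)

# the same spot with one bright radiation-damaged ("hot") pixel
damaged = spot.copy()
damaged[4, 28] += 5.0        # hypothetical defect position and amplitude
c_defect = grey_centroid(damaged)
deviation = np.linalg.norm(c_defect - c_clean)
```

A larger spot has a larger total intensity in the denominator, so the same defect pixel pulls the centroid less, matching the trend reported above.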
Kashiha, Mohammad Amin; Green, Angela R; Sales, Tatiana Glogerley; Bahr, Claudia; Berckmans, Daniel; Gates, Richard S
2014-10-01
Image processing systems have been widely used in monitoring livestock for many applications, including identification, tracking, behavior analysis, occupancy rates, and activity calculations. The primary goal of this work was to quantify image processing performance when monitoring laying hens by comparing length of stay in each compartment as detected by the image processing system with the actual occurrences registered by human observations. In this work, an image processing system was implemented and evaluated for use in an environmental animal preference chamber to detect hen navigation between 4 compartments of the chamber. One camera was installed above each compartment to produce top-view images of the whole compartment. An ellipse-fitting model was applied to captured images to detect whether the hen was present in a compartment. During a choice-test study, mean ± SD success detection rates of 95.9 ± 2.6% were achieved when considering total duration of compartment occupancy. These results suggest that the image processing system is currently suitable for determining the response measures for assessing environmental choices. Moreover, the image processing system offered a comprehensive analysis of occupancy while substantially reducing data processing time compared with the time-intensive alternative of manual video analysis. The above technique was used to monitor ammonia aversion in the chamber. As a preliminary pilot study, different levels of ammonia were applied to different compartments while hens were allowed to navigate between compartments. Using the automated monitor tool to assess occupancy, a negative trend of compartment occupancy with ammonia level was revealed, though further examination is needed. ©2014 Poultry Science Association Inc.
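The occupancy decision can be approximated by fitting an ellipse to a segmented blob via second-order image moments; this is a simplified stand-in for the paper's ellipse-fitting model, with a synthetic top-view mask:

```python
import numpy as np

def blob_ellipse(mask):
    """Fit an ellipse to a binary blob via second-order image moments;
    returns centre, approximate axis lengths, and pixel area."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()
    cov = np.cov(np.vstack([xs, ys]))
    evals, _ = np.linalg.eigh(cov)        # ascending eigenvalues
    minor, major = 2.0 * np.sqrt(evals)   # axis lengths from variances
    return (cx, cy), (major, minor), area

def hen_present(mask, min_area=50):
    """Declare a compartment occupied if a large enough blob exists."""
    return mask.sum() >= min_area

# synthetic elliptical "hen" blob in a 100 x 100 compartment image
y, x = np.mgrid[0:100, 0:100]
mask = ((x - 40) / 20.0) ** 2 + ((y - 60) / 10.0) ** 2 <= 1.0
(cx, cy), (major, minor), area = blob_ellipse(mask)
```

The `min_area` threshold is a hypothetical parameter; in practice it would be tuned to the camera geometry and hen size.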
A real-time single sperm tracking, laser trapping, and ratiometric fluorescent imaging system
NASA Astrophysics Data System (ADS)
Shi, Linda Z.; Botvinick, Elliot L.; Nascimento, Jaclyn; Chandsawangbhuwana, Charlie; Berns, Michael W.
2006-08-01
Sperm cells from a domestic dog were treated with oxacarbocyanine DiOC II(3), a ratiometrically-encoded membrane potential fluorescent probe in order to monitor the mitochondria stored in an individual sperm's midpiece. This dye normally emits a red fluorescence near 610 nm as well as a green fluorescence near 515 nm. The ratio of red to green fluorescence provides a substantially accurate and precise measurement of sperm midpiece membrane potential. A two-level computer system has been developed to quantify the motility and energetics of sperm using video rate tracking, automated laser trapping (done by the upper-level system) and fluorescent imaging (done by the lower-level system). The communication between these two systems is achieved by a networked gigabit TCP/IP cat5e crossover connection. This allows for the curvilinear velocity (VCL) and ratio of the red to green fluorescent images of individual sperm to be written to the hard drive at video rates. This two-level automatic system has increased experimental throughput over our previous single-level system (Mei et al., 2005) by an order of magnitude.
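The ratiometric measurement above reduces, per pixel, to dividing the red channel by the green channel while guarding against dim denominators. A hedged sketch (the ROI is assumed already segmented, and the threshold is an assumption):

```python
import numpy as np

def ratiometric_image(red, green, eps=1e-6):
    """Pixel-wise red/green ratio; masks dim pixels so noise in the
    denominator does not blow up the ratio."""
    red, green = np.asarray(red, float), np.asarray(green, float)
    valid = green > 10 * eps
    ratio = np.zeros_like(red)
    ratio[valid] = red[valid] / green[valid]
    return ratio

# toy 2x2 midpiece ROI intensities (invented values)
red   = np.array([[50.0, 0.2], [30.0, 60.0]])
green = np.array([[25.0, 0.0], [30.0, 20.0]])
ratio = ratiometric_image(red, green)
```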
Saenz, Daniel L.; Yan, Yue; Christensen, Neil; Henzler, Margaret A.; Forrest, Lisa J.; Bayouth, John E.
2015-01-01
ViewRay is a novel MR‐guided radiotherapy system capable of imaging in near real‐time at four frames per second during treatment using 0.35T field strength. It allows for improved gating techniques and adaptive radiotherapy. Three cobalt‐60 sources (∼15,000 Curies) permit multiple‐beam, intensity‐modulated radiation therapy. The primary aim of this study is to assess the imaging stability, accuracy, and automatic segmentation algorithm capability to track motion in simulated and in vivo targets. Magnetic resonance imaging (MRI) characteristics of the system were assessed using the American College of Radiology (ACR)‐recommended phantom and accreditation protocol. Images of the ACR phantom were acquired using a head coil following the ACR scanning instructions. ACR recommended T1‐ and T2‐weighted sequences were evaluated. Nine measurements were performed over a period of seven months, on just over a monthly basis, to establish consistency. A silicon dielectric gel target was attached to the motor via a rod. 40 mm total amplitude was used with cycles of 3 to 9 s in length in a sinusoidal trajectory. Trajectories of six moving clinical targets in four canine patients were quantified and tracked. ACR phantom images were analyzed, and the results were compared with the ACR acceptance levels. Measured slice thickness accuracies were within the acceptance limits. In the 0.35 T system, the image intensity uniformity was also within the ACR acceptance limit. Over the range of cycle lengths, representing a wide range of breathing rates in patients imaged at four frames/s, excellent agreement was observed between the expected and measured target trajectories. In vivo canine targets, including the gross target volume (GTV), as well as other abdominal soft tissue structures, were visualized with inherent MR contrast, allowing for preliminary results of target tracking. PACS number: 87.61.Tg PMID:26699552
Saenz, Daniel L; Yan, Yue; Christensen, Neil; Henzler, Margaret A; Forrest, Lisa J; Bayouth, John E; Paliwal, Bhudatt R
2015-11-08
ViewRay is a novel MR-guided radiotherapy system capable of imaging in near real-time at four frames per second during treatment using 0.35T field strength. It allows for improved gating techniques and adaptive radiotherapy. Three cobalt-60 sources (~ 15,000 Curies) permit multiple-beam, intensity-modulated radiation therapy. The primary aim of this study is to assess the imaging stability, accuracy, and automatic segmentation algorithm capability to track motion in simulated and in vivo targets. Magnetic resonance imaging (MRI) characteristics of the system were assessed using the American College of Radiology (ACR)-recommended phantom and accreditation protocol. Images of the ACR phantom were acquired using a head coil following the ACR scanning instructions. ACR recommended T1- and T2-weighted sequences were evaluated. Nine measurements were performed over a period of seven months, on just over a monthly basis, to establish consistency. A silicon dielectric gel target was attached to the motor via a rod. 40 mm total amplitude was used with cycles of 3 to 9 s in length in a sinusoidal trajectory. Trajectories of six moving clinical targets in four canine patients were quantified and tracked. ACR phantom images were analyzed, and the results were compared with the ACR acceptance levels. Measured slice thickness accuracies were within the acceptance limits. In the 0.35 T system, the image intensity uniformity was also within the ACR acceptance limit. Over the range of cycle lengths, representing a wide range of breathing rates in patients imaged at four frames/s, excellent agreement was observed between the expected and measured target trajectories. In vivo canine targets, including the gross target volume (GTV), as well as other abdominal soft tissue structures, were visualized with inherent MR contrast, allowing for preliminary results of target tracking.
Advanced sensor-simulation capability
NASA Astrophysics Data System (ADS)
Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.
1990-09-01
This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.
Lagrangian 3D tracking of fluorescent microscopic objects in motion
NASA Astrophysics Data System (ADS)
Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.
2017-05-01
We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtain time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing, determining the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on the refocusing of the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of Hertz. This device is particularly well adapted to obtain trajectories of motile micro-organisms in microfluidic devices with or without flow.
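The tracking principle, measuring the object's offset in the frame and commanding the stage to cancel it, can be sketched as a proportional servo loop; the gain, motion model, and noise level are illustrative assumptions, not the instrument's actual parameters:

```python
import numpy as np

def track_step(obj_pos, stage_pos, frame_center, gain=0.8):
    """One servo iteration: measure the object's offset from the frame
    centre and command the stage to cancel it (proportional control)."""
    error = (obj_pos - stage_pos) - frame_center   # offset seen in the image
    return stage_pos + gain * error                # new stage position

# simulate a micro-swimmer drifting while the stage follows it
rng = np.random.default_rng(0)
frame_center = np.zeros(2)
obj, stage = np.zeros(2), np.zeros(2)
offsets = []
for _ in range(200):
    obj = obj + np.array([0.3, -0.2]) + 0.05 * rng.normal(size=2)  # swim + jitter
    stage = track_step(obj, stage, frame_center)
    offsets.append(np.linalg.norm(obj - stage))
```

With pure proportional control a constant swimming speed leaves a small steady-state lag; the real device's refocusing (z) loop follows the same feedback logic with focus quality as the error signal.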
Lagrangian 3D tracking of fluorescent microscopic objects in motion.
Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E
2017-05-01
We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtain time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing, determining the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on the refocusing of the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of Hertz. This device is particularly well adapted to obtain trajectories of motile micro-organisms in microfluidic devices with or without flow.
Airborne optical tracking control system design study
NASA Astrophysics Data System (ADS)
1992-09-01
The Kestrel LOS Tracking Program involves the development of a computer and algorithms for use in passive tracking of airborne targets from a high altitude balloon platform. The computer receives track error signals from a video tracker connected to one of the imaging sensors. In addition, an on-board IRU (gyro), accelerometers, a magnetometer, and a two-axis inclinometer provide inputs which are used for initial acquisition and coarse and fine tracking. Signals received by the control processor from the video tracker, IRU, accelerometers, magnetometer, and inclinometer are used to generate drive signals for the payload azimuth drive, the Gimballed Mirror System (GMS), and the Fast Steering Mirror (FSM). The hardware to be procured under the LOS tracking activity comprises the Controls Processor (CP), the IRU, and the FSM. The performance specifications for the GMS and the payload canister azimuth drive were established by the LOS tracking design team in an effort to achieve a tracking jitter of less than 3 micro-rad, 1 sigma, for one axis.
In vivo cell tracking and quantification method in adult zebrafish
NASA Astrophysics Data System (ADS)
Zhang, Li; Alt, Clemens; Li, Pulin; White, Richard M.; Zon, Leonard I.; Wei, Xunbin; Lin, Charles P.
2012-03-01
Zebrafish have become a powerful vertebrate model organism for drug discovery, cancer and stem cell research. A recently developed transparent adult zebrafish, a double-pigmentation mutant called casper, provides unparalleled imaging power for in vivo longitudinal analysis of biological processes at an anatomic resolution not readily achievable in murine or other systems. In this paper we introduce an optical method for simultaneous visualization and cell quantification, which combines laser scanning confocal microscopy (LSCM) with in vivo flow cytometry (IVFC). The system is designed specifically for non-invasive tracking of both stationary and circulating cells in the adult casper zebrafish, under physiological conditions in the same fish over time. The confocal imaging part of this system serves the dual purposes of imaging fish tissue microstructure and acting as a 3D navigation tool to locate a suitable vessel for circulating cell counting. The multi-color, multi-channel instrument allows the detection of multiple cell populations or different tissues or organs simultaneously. We demonstrate initial testing of this novel instrument by imaging vasculature and tracking circulating cells in CD41:GFP/Gata1:DsRed transgenic casper fish whose thrombocytes/erythrocytes express the green and red fluorescent proteins. Circulating fluorescent cell incidents were recorded and counted repeatedly over time and in different types of vessels. Application opportunities in cancer and stem cell research are discussed.
Automatic Intra-Operative Stitching of Non-Overlapping Cone-Beam CT Acquisitions
Fotouhi, Javad; Fuerst, Bernhard; Unberath, Mathias; Reichenstein, Stefan; Lee, Sing Chun; Johnson, Alex A.; Osgood, Greg M.; Armand, Mehran; Navab, Nassir
2018-01-01
Purpose Cone-Beam Computed Tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and non-overlapping CBCT volumes to enable 3D measurements on large anatomical structures. Methods A CBCT-capable mobile C-arm is augmented with a Red-Green-Blue-Depth (RGBD) camera. An off-line co-calibration of the two imaging modalities results in co-registered video, infrared, and X-ray views of the surgical scene. Then, automatic stitching of multiple small, non-overlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. Results On an animal cadaver, we show stitching errors as low as 0.33 mm, 0.91 mm, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. Conclusions The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures. PMID:29569728
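Relative pose recovery from tracked points, as in the marker-based variant above, can be illustrated with the standard Kabsch/SVD rigid fit; this is a generic sketch, not the authors' implementation, and the marker constellation is synthetic:

```python
import numpy as np

def rigid_pose(p, q):
    """Recover rotation R and translation t with q ~= R @ p + t
    (Kabsch least-squares fit over corresponding 3D points)."""
    pc = p.mean(axis=1, keepdims=True)
    qc = q.mean(axis=1, keepdims=True)
    u, _, vt = np.linalg.svd((q - qc) @ (p - pc).T)   # cross-covariance SVD
    d = np.sign(np.linalg.det(u @ vt))                # guard against reflections
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    return r, (qc - r @ pc).ravel()

# synthetic marker constellation observed before and after C-arm motion
rng = np.random.default_rng(3)
p = rng.normal(size=(3, 6))                 # 6 markers, 3D coordinates
theta = 0.3
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
q = r_true @ p + t_true[:, None]
r_est, t_est = rigid_pose(p, q)
```

Chaining such poses between acquisitions is what allows the separate CBCT volumes to be placed in a common frame for stitching.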
[Design of longitudinal auto-tracking of the detector on X-ray in digital radiography].
Yu, Xiaomin; Jiang, Tianhao; Liu, Zhihong; Zhao, Xu
2018-04-01
An algorithm is designed to implement longitudinal auto-tracking of the detector to the X-ray beam in a digital radiography (DR) system with a manual collimator. In this study, when the longitudinal length of the field of view (LFOV) on the detector coincides with the longitudinal effective imaging size of the detector, the collimator half-open angle ( Ψ ), the maximum centric distance ( e max ) between the center of the X-ray field of view and the projection center of the focal spot, and the detector moving distance for auto-tracking can be calculated automatically. When LFOV is smaller than the longitudinal effective imaging size of the detector because Ψ is reduced, e max can still be used to calculate the detector moving distance. Using this auto-tracking algorithm in DR with a manual collimator, test results show that the X-ray projection is totally covered by the effective imaging area of the detector, even though the center of the field of view is not aligned with the center of the effective imaging area of the detector. As a simple and low-cost design, the algorithm can be used for longitudinal auto-tracking of the detector in manual-collimator DR systems.
Novel imaging closed loop control strategy for heliostats
NASA Astrophysics Data System (ADS)
Bern, Gregor; Schöttl, Peter; Heimsath, Anna; Nitz, Peter
2017-06-01
Central Receiver Systems use up to thousands of heliostats to concentrate solar radiation. The precise control of heliostat aiming points is crucial not only for efficiency but also for reliable plant operation. Besides the calibration of open loop control systems, closed loop tracking strategies are developed to address a precise and efficient aiming strategy. The need for cost reductions in the heliostat field intensifies the motivation for economic closed loop control systems. This work introduces an approach for a closed loop heliostat tracking strategy using image analysis and signal modulation. The approach aims at the extraction of heliostat focal spot position within the receiver domain by means of a centralized remote vision system decoupled from the rough conditions close to the focal area. Taking an image sequence of the receiver while modulating a signal on different heliostats, their aiming points are retrieved. The work describes the methodology and shows first results from simulations and practical tests performed in small scale, motivating further investigation and deployment.
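The modulation-based identification described above can be sketched as a lock-in correlation of each receiver pixel's time series with the known code; the image sizes, flux levels, modulation depth, and code are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 64, 32, 32
code = rng.choice([-1.0, 1.0], size=T)      # modulation applied to one heliostat

frames = np.full((T, H, W), 10.0)           # steady flux from all heliostats
frames += 0.5 * rng.normal(size=(T, H, W))  # camera noise
spot = (slice(18, 22), slice(10, 14))       # modulated heliostat's focal spot
frames[(slice(None),) + spot] += 2.0 * (1 + 0.2 * code)[:, None, None]

# lock-in: remove the temporal mean, then correlate every pixel's
# time series with the known modulation code
demod = np.tensordot(code, frames - frames.mean(axis=0), axes=(0, 0)) / T
iy, ix = np.unravel_index(np.argmax(demod), demod.shape)
```

Only pixels belonging to the modulated heliostat's spot correlate with the code, so its aim point stands out even though all spots overlap in steady flux.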
Effective real-time vehicle tracking using discriminative sparse coding on local patches
NASA Astrophysics Data System (ADS)
Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei
2016-01-01
A visual tracking framework that provides an object detector and tracker, which focuses on effective and efficient visual tracking in surveillance of real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which is different from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, which makes our method more robust to complex realistic settings with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experiment results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and change in viewpoint or scale.
Object acquisition and tracking for space-based surveillance
NASA Astrophysics Data System (ADS)
1991-11-01
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
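The track-before-detect idea, integrating frames along candidate velocity hypotheses before thresholding, can be sketched as a shift-and-add search; the scene, target amplitude, and hypothesis grid are invented, with the target set too faint to detect reliably in any single frame:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 16, 64
amp, sigma = 2.0, 1.0                 # per-frame peak below the noise maxima
vx, vy = 2, 1                         # true target velocity (pixels/frame)
x0, y0 = 5, 8                         # true starting position

frames = sigma * rng.normal(size=(T, N, N))
for t in range(T):
    frames[t, y0 + vy * t, x0 + vx * t] += amp

def shift_and_add(frames, hvx, hvy):
    """Integrate frames along one velocity hypothesis before detecting."""
    acc = np.zeros_like(frames[0])
    for t, f in enumerate(frames):
        acc += np.roll(f, shift=(-hvy * t, -hvx * t), axis=(0, 1))
    return acc

# search a small grid of velocity hypotheses for the strongest stacked peak
best = None
for hvx in range(-3, 4):
    for hvy in range(-3, 4):
        acc = shift_and_add(frames, hvx, hvy)
        peak = acc.max()
        if best is None or peak > best[0]:
            best = (peak, hvx, hvy, np.unravel_index(acc.argmax(), acc.shape))
peak, hvx, hvy, (py, px) = best
```

Stacking along the correct hypothesis grows the target linearly with T while noise grows only as sqrt(T), which is exactly the SNR advantage the report describes.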
Object acquisition and tracking for space-based surveillance. Final report, Dec 88-May 90
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-11-27
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase I) and N00014-89-C-0015 (Phase II). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time-dependent, object-dependent, and data-dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
Interferometric scattering (iSCAT) microscopy: studies of biological membrane dynamics
NASA Astrophysics Data System (ADS)
Reina, Francesco; Galiani, Silvia; Shrestha, Dilip; Sezgin, Erdinc; Lagerholm, B. Christoffer; Cole, Daniel; Kukura, Philipp; Eggeling, Christian
2018-02-01
The study of the organization and dynamics of molecules in model and cellular membranes is an important topic in contemporary biophysics. Imaging and single particle tracking in this particular field, however, proves particularly demanding, as it requires simultaneously high spatio-temporal resolution and high signal-to-noise ratios. A remedy to this challenge might be Interferometric Scattering (iSCAT) microscopy, due to its fast sampling rates, label-free imaging capabilities and, most importantly, tuneable signal level output. Here we report our recent advances in the imaging and molecular tracking on phase-separated model membrane systems and live-cell membranes using this technique.
Intra-coil interactions in split gradient coils in a hybrid MRI-LINAC system.
Tang, Fangfang; Freschi, Fabio; Sanchez Lopez, Hector; Repetto, Maurizio; Liu, Feng; Crozier, Stuart
2016-04-01
An MRI-LINAC system combines a magnetic resonance imaging (MRI) system with a medical linear accelerator (LINAC) to provide image-guided radiotherapy for targeting tumors in real-time. In an MRI-LINAC system, a set of split gradient coils is employed to produce orthogonal gradient fields for spatial signal encoding. Owing to this unconventional gradient configuration, eddy currents induced by switching gradient coils on and off may be of particular concern. It is expected that strong intra-coil interactions in the set will be present due to the constrained return paths, leading to potential degradation of the gradient field linearity and image distortion. In this study, a series of gradient coils with different track widths have been designed and analyzed to investigate the electromagnetic interactions between coils in a split gradient set. A driving current, with frequencies from 100 Hz to 10 kHz, was applied to study the inductive coupling effects with respect to conductor geometry and operating frequency. It was found that the eddy currents induced in the un-energized coils (hereafter referred to as passive coils) positively correlated with track width and frequency. The magnetic field induced by the eddy currents in the passive coils with wide tracks was several times larger than that induced by eddy currents in the cold shield of the cryostat. The power loss in the passive coils increased with the track width. Therefore, intra-coil interactions should be included in the coil design and analysis process. Copyright © 2016 Elsevier Inc. All rights reserved.
A micro-fluidic treadmill for observing suspended plankton in the lab
NASA Astrophysics Data System (ADS)
Jaffe, J. S.; Laxton, B.; Garwood, J. C.; Franks, P. J. S.; Roberts, P. L.
2016-02-01
A significant obstacle to laboratory studies of interactions between small (mm-scale) organisms and their fluid environment is our ability to obtain high-resolution images while allowing freedom of motion. This is because as the organisms sink, they will often move out of the field of view of the observation system. One solution to this problem is to impose a water circulation pattern that preserves their location relative to the camera system while imaging the organisms away from the glass walls. To accomplish this we have designed and created a plankton treadmill. Our computer-controlled system consists of a digital video camera attached to a macro lens or microscope and a micro-fluidic pump whose flow is regulated to maintain a suspended organism's position relative to the field of view. Organisms are detected and tracked in real time in the video frames, allowing a control algorithm to compensate for any vertical movement by adjusting the flow. The flow control can be manually adjusted using on-screen controls, semi-automatically adjusted to allow the user to select a particular organism to be tracked, or fully automatic through the use of classification and tracking algorithms. Experiments with a simple cm-sized cuvette and a number of organisms that are both positively and negatively buoyant have demonstrated the success of the system in permitting longer observation times than would be possible in the absence of a controlled-flow environment. The subjects were observed using a new dual-view, holographic imaging system that provides 3-dimensional microscopic observations with relatively isotropic resolution. We will present the system design, construction, the control algorithm, and some images obtained with the holographic system, demonstrating its effectiveness. Small particles seeded into the flow clearly show the 3D flow fields around the subjects as they freely sink or swim.
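The flow-regulation loop described above can be sketched as a PI controller rejecting an organism's constant settling velocity; the gains, units, and sink rate are illustrative assumptions, not the instrument's parameters. The integral term is what drives the steady-state position error to zero under a constant disturbance:

```python
def pi_flow_controller(kp=0.5, ki=0.2):
    """Closed-loop flow control: adjust the pump so an organism sinking
    at an unknown constant rate stays at the target position (PI control,
    which rejects a constant disturbance with zero steady-state error)."""
    integ = 0.0
    def step(position_error):
        nonlocal integ
        integ += position_error
        return kp * position_error + ki * integ   # commanded upward flow
    return step

sink_rate = 0.4          # organism's settling velocity (arbitrary units/frame)
pos, flow = 0.0, 0.0
ctrl = pi_flow_controller()
history = []
for _ in range(300):
    pos += flow - sink_rate        # net vertical motion in the cuvette
    flow = ctrl(-pos)              # error: deviation from target position 0
    history.append(pos)
```

A purely proportional controller would leave a constant offset proportional to the sink rate; the integrator accumulates that offset and cancels it.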
LANDSAT-4 MSS Geometric Correction: Methods and Results
NASA Technical Reports Server (NTRS)
Brooks, J.; Kimmer, E.; Su, J.
1984-01-01
An automated image registration system such as that developed for LANDSAT-4 can produce all of the information needed to verify and calibrate the software and to evaluate system performance. The on-line MSS archive generation process which upgrades systematic correction data to geodetic correction data is described, as well as the control point library build subsystem which generates control point chips and support data for on-line upgrade of correction data. The system performance was evaluated for both temporal and geodetic registration. For temporal registration, 90% errors were computed to be .36 IFOV (instantaneous field of view = 82.7 meters) cross track and .29 IFOV along track. Also, for actual production runs monitored, the 90% errors were .29 IFOV cross track and .25 IFOV along track. The system specification is .3 IFOV, 90% of the time, both cross and along track. For geodetic registration performance, the model bias was measured by designating control points in the geodetically corrected imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin
Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors first built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient.
The resulting average Dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results and enables automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on improving method reliability, analyzing patient motion patterns to provide more information for patient-specific prediction of structure displacements, and assessing motion effects on dosimetry for better H&N motion management in radiation therapy.
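The linked statistical shapes described above are built with principal component analysis over clinically approved training contours. A minimal sketch of that step (hypothetical function names; landmark vectors assumed flattened and pre-aligned, e.g. by Procrustes analysis) might look like:

```python
import numpy as np

def build_shape_model(contours, var_keep=0.95):
    """Build a PCA statistical shape model from aligned training contours.

    contours: (n_samples, n_points*2) array of flattened (x, y) landmarks,
    assumed already aligned across the training set.
    """
    mean_shape = contours.mean(axis=0)
    centered = contours - mean_shape
    # Eigen-decomposition of the landmark covariance via SVD.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / (len(contours) - 1)
    # Keep the fewest modes covering var_keep of the total variance.
    n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    return mean_shape, vt[:n_modes], var[:n_modes]

def synthesize(mean_shape, modes, b):
    """Generate a plausible shape: mean plus weighted deformation modes."""
    return mean_shape + b @ modes
```

Fitting then amounts to searching for mode weights `b` (and a pose) that best explain the image, which is what the hierarchical model-fitting step does for the linked face, face-jaw, and face-jaw-palate shapes.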
Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa
2016-08-01
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-09-01
To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
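The soft tissue localization step above matches image content rather than implanted markers. A toy version of markerless localization, using normalized cross-correlation of a tumor template over a local search window (a generic stand-in for the in-house algorithm, whose details are not given here), could be sketched as:

```python
import numpy as np

def track_template(frame, template, prev_xy, search=20):
    """Locate `template` in `frame` near the previous position using
    normalized cross-correlation over a local search window."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum()) + 1e-12
    best, best_xy = -2.0, prev_xy
    y0, x0 = prev_xy
    for y in range(max(0, y0 - search), min(frame.shape[0] - th, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(frame.shape[1] - tw, x0 + search) + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            score = (p * t).sum() / (np.sqrt((p ** 2).sum()) * tn + 1e-12)
            if score > best:
                best, best_xy = score, (y, x)
    return best_xy, best
```

A real-time implementation would vectorize the search (or compute it in the Fourier domain) to sustain the 12.8 Hz frame rate; the brute-force loops here are for clarity only.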
Awan, Omer Abdulrehman; van Wagenberg, Frans; Daly, Mark; Safdar, Nabile; Nagy, Paul
2011-04-01
Many radiology information systems (RIS) cannot accept a final report from a dictation reporting system before the exam has been completed in the RIS by a technologist. A radiologist can still render a report in a reporting system once images are available, but the RIS and ancillary systems may not get the results because of the study's uncompleted status. This delay in completing the study caused an alarming number of delayed reports and was undetected by conventional RIS reporting techniques. We developed a Web-based reporting tool to monitor uncompleted exams and automatically page section supervisors when a report was being delayed by its incomplete status in the RIS. Institutional Review Board exemption was obtained. At four imaging centers, a Python script was developed to poll the dictation system every 10 min for exams in five different modalities that were signed by the radiologist but could not be sent to the RIS. This script logged the exams into an existing Web-based tracking tool using PHP and a MySQL database. The script also text-paged the modality supervisor. The script logged the time at which the report was finally sent, and statistics were aggregated onto a separate Web-based reporting tool. Over a 1-year period, the average number of uncompleted exams per month and time to problem resolution decreased at every imaging center and in almost every imaging modality. Automated feedback provides a vital link in improving technologist performance and patient care without assigning a human resource to manage report queues.
Stability Measurements for Alignment of the NIF Neutron Imaging System Pinhole Array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fittinghoff, D N; Bower, D E; Drury, O B
2011-03-29
The alignment system for the National Ignition Facility's neutron imaging system has been commissioned, and measurements of the relative stability of the 90-315 DIM, the front and back of the neutron imaging pinhole array, and an exploding pusher target have been made using the 90-135 and 90-258 opposite port alignment systems. Additionally, a laser beam shot from the neutron-imaging Annex and reflected from a mirror at the back of the pinhole array was used to monitor the pointing of the pinhole. Over a twelve-hour period, the relative stability of these parts was found to be within approximately ±18 µm rms even when using manual methods for tracking the position of the objects. For highly visible features, use of basic particle tracking techniques found that the front of the pinhole array was stable relative to the 90-135 opposite port alignment camera to within ±3.4 µm rms. Reregistration of the opposite port alignment systems themselves using the target alignment sensor, however, was found to change the expected position of target chamber center by up to 194 µm.
Feature tracking for automated volume of interest stabilization on 4D-OCT images
NASA Astrophysics Data System (ADS)
Laves, Max-Heinrich; Schoob, Andreas; Kahrs, Lüder A.; Pfeiffer, Tom; Huber, Robert; Ortmaier, Tobias
2017-03-01
A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object while continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes to maximum intensity projections (MIP) of 4D-OCT images. The estimated VOI location is used to conveniently show corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented tracking scheme for soft tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables sub-epithelial tracking of microvessels for image guidance.
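The thin-slab MIP improvement mentioned above restricts the projection to a slab around the tracked VOI instead of the full volume, which suppresses clutter from unrelated depths. A minimal sketch (hypothetical function name; NumPy volume indexed as slices × rows × columns):

```python
import numpy as np

def thin_slab_mip(volume, center, thickness, axis=0):
    """Maximum intensity projection over a thin slab centered on a tracked
    VOI coordinate along `axis`, rather than over the whole volume."""
    lo = max(0, center - thickness // 2)
    hi = min(volume.shape[axis], lo + thickness)
    slab = np.take(volume, np.arange(lo, hi), axis=axis)
    return slab.max(axis=axis)
```

As the tracker updates the VOI location frame to frame, `center` follows it, so the projected slab stays locked onto the structure of interest.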
Star-Mapping Tools Enable Tracking of Endangered Animals
NASA Technical Reports Server (NTRS)
2009-01-01
Software programmer Jason Holmberg of Portland, Oregon, partnered with a Goddard Space Flight Center astrophysicist to develop a method for tracking the elusive whale shark using the unique spot patterns on the fish's skin. Employing a star-mapping algorithm originally designed for the Hubble Space Telescope, Holmberg created the Shepherd Project, a photograph database and pattern-matching system that can identify whale sharks by their spots and match images contributed to the database by photographers from around the world. The system has been adapted for tracking other rare and endangered animals, including polar bears and ocean sunfish.
Interactive target tracking for persistent wide-area surveillance
NASA Astrophysics Data System (ADS)
Ersoy, Ilker; Palaniappan, Kannappan; Seetharaman, Guna S.; Rao, Raghuveer M.
2012-06-01
Persistent aerial surveillance is an emerging technology that can provide continuous, wide-area coverage from an aircraft-based multiple-camera system. Tracking targets in these data sets is challenging for vision algorithms due to large data (several terabytes), very low frame rate, changing viewpoint, strong parallax and other imperfections due to registration and projection. Providing an interactive system for automated target tracking also has additional challenges that require online algorithms that are seamlessly integrated with interactive visualization tools to assist the user. We developed an algorithm that overcomes these challenges and demonstrated it on data obtained from a wide-area imaging platform.
Expanding the use of real-time electromagnetic tracking in radiation oncology.
Shah, Amish P; Kupelian, Patrick A; Willoughby, Twyla R; Meeks, Sanford L
2011-11-15
In the past 10 years, techniques to improve radiotherapy delivery, such as intensity-modulated radiation therapy (IMRT), image-guided radiation therapy (IGRT) for both inter- and intrafraction tumor localization, and hypofractionated delivery techniques such as stereotactic body radiation therapy (SBRT), have evolved tremendously. This review article focuses on only one part of that evolution, electromagnetic tracking in radiation therapy. Electromagnetic tracking is still a growing technology in radiation oncology and, as such, the clinical applications are limited, the expense is high, and the reimbursement is insufficient to cover these costs. At the same time, current experience with electromagnetic tracking applied to various clinical tumor sites indicates that the potential benefits of electromagnetic tracking could be significant for patients receiving radiation therapy. Daily use of these tracking systems is minimally invasive and delivers no additional ionizing radiation to the patient, and these systems can provide explicit tumor motion data. Although there are a number of technical and fiscal issues that need to be addressed, electromagnetic tracking systems are expected to play a continued role in improving the precision of radiation delivery.
Color image processing and object tracking workstation
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Paulick, Michael J.
1992-01-01
A system is described for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tape deck, a frame grabber, and storage and output devices. Both the projector and tape deck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tape deck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
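The grab-process-locate-store loop can be illustrated with a much-simplified stand-in: thresholding each frame to segment a bright object and recording its centroid (the actual system located object edges on digitized film and video frames; the threshold-and-centroid approach here is an illustrative substitute):

```python
import numpy as np

def track_frames(frames, threshold):
    """For each frame, segment the bright object by thresholding and
    record its centroid, mimicking a grab-process-locate-store loop."""
    coords = []
    for frame in frames:
        mask = frame > threshold
        if not mask.any():
            coords.append(None)          # object not found in this frame
            continue
        ys, xs = np.nonzero(mask)
        coords.append((ys.mean(), xs.mean()))
    return coords
```

The list of per-frame coordinates corresponds to the coordinate file the tracking program writes out, from which quantities like flame-front propagation speed can be derived.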
Image-based tracking: a new emerging standard
NASA Astrophysics Data System (ADS)
Antonisse, Jim; Randall, Scott
2012-06-01
Automated moving object detection and tracking are increasingly viewed as solutions to the enormous data volumes resulting from emerging wide-area persistent surveillance systems. In a previous paper we described a Motion Imagery Standards Board (MISB) initiative to help address this problem: the specification of a micro-architecture for the automatic extraction of motion indicators and tracks. This paper reports on the development of an extended specification of the plug-and-play tracking micro-architecture, on its status as an emerging standard across DoD, the Intelligence Community, and NATO.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiinoki, T; Hanazawa, H; Shibuya, K
Purpose: A respiratory gating system combining the TrueBeam and a new real-time tumor-tracking radiotherapy (RTRT) system was installed. The RTRT system consists of two x-ray tubes and color image intensifiers. Using fluoroscopic images, the fiducial marker implanted near the tumor was tracked and used as the internal surrogate for respiratory gating. The purpose of this study was to develop a verification technique for respiratory gating with the new RTRT using cine electronic portal imaging device (EPID) images from the TrueBeam and log files from the RTRT. Methods: A patient who underwent respiratory-gated SBRT of the lung using the RTRT was enrolled in this study. For this patient, log files of the three-dimensional coordinates of the fiducial marker used as an internal surrogate were acquired using the RTRT. Simultaneously, cine EPID images were acquired during respiratory-gated radiotherapy. Data acquisition was performed for one field at five sessions during the course of SBRT. The residual motion errors were calculated using the log files (E_log). The fiducial marker used as an internal surrogate in the cine EPID images was automatically extracted by in-house software based on a template-matching algorithm. The differences between the marker positions in the cine EPID images and the digitally reconstructed radiograph were calculated (E_EPID). Results: Marker detection on the EPID using the in-house software was influenced by low image contrast. For one field during the course of SBRT, respiratory gating using the RTRT showed mean ± S.D. of the 95th percentile E_EPID of 1.3 ± 0.3 mm and 1.1 ± 0.5 mm, and of E_log of 1.5 ± 0.2 mm and 1.1 ± 0.2 mm, in the LR and SI directions, respectively. Conclusion: We have developed a verification method for respiratory gating combining the TrueBeam and the new real-time tumor-tracking radiotherapy system using EPID images and log files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Najafi, M; Han, B; Hancock, S
Purpose: Prostate SABR is emerging as a clinically viable, potentially cost-effective alternative to prostate IMRT, but its adoption is contingent on providing solutions for accurate tracking during beam delivery. Our goal is to evaluate the performance of the Clarity Autoscan ultrasound monitoring system for inter-fractional prostate motion tracking in both phantoms and in vivo. Methods: In-vivo evaluation was performed under an IRB protocol to allow data collection in prostate patients treated with VMAT, whereby the prostate was imaged through the acoustic window of the perineum. The probe was placed before kV imaging, and real-time tracking was started and continued until the end of treatment. Initial absolute 3D positions of fiducials were estimated from kV images. Fiducial positions in MV images subsequently acquired during beam delivery were compared with predicted positions based on Clarity-estimated motion. Results: Phantom studies with motion amplitudes of ±1.5, ±3, and ±6 mm in the lateral direction and ±2 mm in the longitudinal direction resulted in tracking errors of −0.03 ± 0.3, −0.04 ± 0.6, and −0.2 ± 0.9 mm, respectively, in the lateral direction and −0.05 ± 0.30 mm in the longitudinal direction. In phantom, measured and predicted fiducial positions in MV images agreed within 0.1 ± 0.6 mm. Four patients consented to participate in the study, and data were acquired over a total of 140 fractions. MV imaging tracking was possible about 75% of the time (due to occlusion of fiducials) compared to 100% with Clarity. The overall range of motion estimated by Clarity was 0 to 4.0 mm. The in-vivo fiducial localization error was 1.2 ± 1.0 mm, compared to 1.8 ± 1.9 mm when not taking Clarity-estimated motion into account. Conclusion: Real-time transperineal ultrasound tracking reduces uncertainty in prostate position due to intrafractional motion. Research was supported by Elekta.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borovetz, H.S.; Shaffer, F.; Schaub, R.
This paper discusses a series of experiments to visualize and measure flow fields in the Novacor left ventricular assist system (LVAS). The experiments utilize a multiple-exposure optical imaging technique called fluorescent image tracking velocimetry (FITV) to track the motion of small, neutrally buoyant particles in a flowing fluid.
Multiple Drosophila Tracking System with Heading Direction
Sirigrivatanawong, Pudith; Arai, Shogo; Thoma, Vladimiros; Hashimoto, Koichi
2017-01-01
Machine vision systems have been widely used for image analysis, especially analysis beyond human ability. In biology, studies of behavior help scientists to understand the relationship between sensory stimuli and animal responses, which typically requires the analysis and quantification of animal locomotion. In our work, we focus on the analysis of the locomotion of the fruit fly Drosophila melanogaster, a widely used model organism in biological research. Our system consists of two components: fly detection and tracking. It provides the ability to extract a group of flies as the objects of concern and, furthermore, determines the heading direction of each fly. As each fly moves, the system states are refined with a Kalman filter to obtain the optimal estimate. For the tracking step, combining information such as position and heading direction with assignment algorithms gives a successful tracking result. The use of heading direction increases the system's efficiency when dealing with identity loss and fly-swapping situations. The system can also operate on a variety of videos with different light intensities. PMID:28067800
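Combining position and heading in the assignment step is what resolves identities when two flies pass close to each other. A brute-force sketch of such a cost-based assignment (hypothetical field names and weights; a real system would use the Hungarian algorithm for larger groups):

```python
import numpy as np
from itertools import permutations

def assign_detections(tracks, detections, w_pos=1.0, w_heading=0.5):
    """Match predicted track states to new detections by minimizing a cost
    that combines position distance and heading difference. Brute force
    over permutations, feasible only for small numbers of flies."""
    n = len(tracks)

    def cost(tr, det):
        d = np.hypot(tr["x"] - det["x"], tr["y"] - det["y"])
        # Smallest angular difference in degrees, wrapped to [0, 180].
        dh = abs((tr["heading"] - det["heading"] + 180) % 360 - 180)
        return w_pos * d + w_heading * dh

    best_perm, best_cost = None, np.inf
    for perm in permutations(range(n)):
        c = sum(cost(tracks[i], detections[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm)
```

When two detections are nearly equidistant from two predicted positions, the heading term breaks the tie, which is the swap-resistance benefit the abstract describes.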
Decoupled tracking and thermal monitoring of non-stationary targets.
Tan, Kok Kiong; Zhang, Yi; Huang, Sunan; Wong, Yoke San; Lee, Tong Heng
2009-10-01
Fault diagnosis and predictive maintenance address pertinent economic issues relating to production systems as an efficient technique can continuously monitor key health parameters and trigger alerts when critical changes in these variables are detected, before they lead to system failures and production shutdowns. In this paper, we present a decoupled tracking and thermal monitoring system which can be used on non-stationary targets of closed systems such as machine tools. There are three main contributions from the paper. First, a vision component is developed to track moving targets under a monitor. Image processing techniques are used to resolve the target location to be tracked. Thus, the system is decoupled and applicable to closed systems without the need for a physical integration. Second, an infrared temperature sensor with a built-in laser for locating the measurement spot is deployed for non-contact temperature measurement of the moving target. Third, a predictive motion control system holds the thermal sensor and follows the moving target efficiently to enable continuous temperature measurement and monitoring.
GCaMP expression in retinal ganglion cells characterized using a low-cost fundus imaging system
NASA Astrophysics Data System (ADS)
Chang, Yao-Chuan; Walston, Steven T.; Chow, Robert H.; Weiland, James D.
2017-10-01
Objective. Virus-transduced, intracellular calcium indicators are effective reporters of neural activity, offering the advantage of cell-specific labeling. Because there is an optimal time window for the expression of calcium indicators, a suitable tool for tracking GECI expression in vivo following transduction is highly desirable. Approach. We developed a noninvasive imaging approach based on a custom-modified, low-cost fundus viewing system that allowed us to monitor and characterize in vivo bright-field and fluorescence images of the mouse retina. AAV2-CAG-GCaMP6f was injected into a mouse eye. The fundus imaging system was used to measure fluorescence at several time points post injection. At defined time points, we prepared wholemount retina mounted on a transparent multielectrode array and used calcium imaging to evaluate the responsiveness of retinal ganglion cells (RGCs) to external electrical stimulation. Main results. The noninvasive fundus imaging system clearly resolves individual RGCs and axons. RGC fluorescence intensity and the number of observable fluorescent cells show a similar rising trend from week 1 to week 3 after viral injection, indicating a consistent increase of GCaMP6f expression. Analysis of the in vivo fluorescence intensity trend and in vitro neurophysiological responsiveness shows that the slope of intensity versus days post injection can be used to estimate the optimal time for calcium imaging of RGCs in response to external electrical stimulation. Significance. The proposed fundus imaging system enables high-resolution digital fundus imaging in the mouse eye based on off-the-shelf components. The long-term tracking experiment with in vitro calcium imaging validation demonstrates that the system can serve as a powerful tool for monitoring the level of genetically encoded calcium indicator expression and for determining the optimal time window for subsequent experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panfil, J; Patel, R; Surucu, M
Purpose: To compare markerless template-based tracking of lung tumors using dual energy (DE) cone-beam computed tomography (CBCT) projections versus single energy (SE) CBCT projections. Methods: A RANDO chest phantom with a simulated tumor in the upper right lung was used to investigate the effectiveness of tumor tracking using DE and SE CBCT projections. Planar kV projections from CBCT acquisitions were captured at 60 kVp (4 mAs) and 120 kVp (1 mAs) using the Varian TrueBeam and non-commercial iTools Capture software. Projections were taken approximately every 0.53° while the gantry rotated. Due to limitations of the phantom, angles at which the shoulders blocked the tumor were excluded from tracking analysis. DE images were constructed using a weighted logarithmic subtraction that removed bony anatomy while preserving soft tissue structures. The tumors were tracked separately on DE and SE (120 kVp) images using a template-based tracking algorithm. The tracking results were compared to ground truth coordinates designated by a physician. Matches at a distance of greater than 3 mm from ground truth were designated as failing to track. Results: 363 frames were analyzed. The algorithm successfully tracked the tumor on 89.8% (326/363) of DE frames compared to 54.3% (197/363) of SE frames (p<0.0001). The average distance between tracking and ground truth coordinates was 1.27 ± 0.67 mm for DE versus 1.83 ± 0.74 mm for SE (p<0.0001). Conclusion: This study demonstrates the effectiveness of markerless template-based tracking using DE CBCT. DE imaging resulted in better detectability with more accurate localization on average versus SE. Supported by a grant from Varian Medical Systems.
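The weighted logarithmic subtraction used to build the DE images can be sketched as follows. The bone-canceling weight is approximately the ratio of bone attenuation coefficients at the two energies (a standard dual-energy result; the weight and attenuation values below are illustrative, not values from this study):

```python
import numpy as np

def dual_energy_subtract(low_kvp, high_kvp, weight, eps=1e-6):
    """Weighted logarithmic subtraction of low- and high-energy projections.
    With `weight` chosen near the ratio of bone attenuation coefficients
    (high energy / low energy), the bone signal cancels while soft tissue
    contrast is preserved."""
    de = np.log(high_kvp + eps) - weight * np.log(low_kvp + eps)
    de -= de.min()                          # shift to zero baseline
    peak = de.max()
    return de / peak if peak > 0 else de    # rescale to [0, 1] for display
```

Because log-intensity is linear in the attenuation line integrals, choosing the weight as the ratio of bone attenuation coefficients zeroes the bone term while leaving a scaled soft-tissue term, which is why the ribs disappear from the DE frames used for template matching.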
Design of tracking and detecting lens system by diffractive optical method
NASA Astrophysics Data System (ADS)
Yang, Jiang; Qi, Bo; Ren, Ge; Zhou, Jianwei
2016-10-01
Many target-tracking applications require an optical system to acquire the target for tracking and identification. This paper describes a new detecting optical system that can provide automatic flying-object detection, tracking, and measurement in the visible band. The main feature of the detecting lens system is the combination of diffractive optics with traditional lens design, using a technique invented by Schupmann. Diffractive lenses have great potential for large-aperture, lightweight designs. First, the optical system scheme is described. Then the Schupmann achromatic principle, using a diffractive lens with corrective optics, is introduced. According to the technical features and requirements of the optical imaging system for detecting and tracking, we designed a lens system with a flat-surface Fresnel lens whose chromatic aberration is canceled by a second flat-surface Fresnel lens. The system has an effective focal length of 1980 mm, an F-number of F/9.9, a field of view of 2ω = 14.2', a spatial resolution of 46 lp/mm, and a working wavelength range of 0.6-0.85 µm. Finally, the system is compact and easy to fabricate and assemble; the diffuse spot size, MTF, and other analyses indicate good performance.
Oxygen Nanobubble Tracking by Light Scattering in Single Cells and Tissues.
Bhandari, Pushpak; Wang, Xiaolei; Irudayaraj, Joseph
2017-03-28
Oxygen nanobubbles (ONBs) have significant potential for targeted imaging and treatment in cancer diagnosis and therapy. Precise localization and tracking of single ONBs in single cells is demonstrated using hyperspectral dark-field microscopy (HSDFM). ONBs were proposed as promising contrast-generating imaging agents due to the strong light scattering generated by the nonuniformity of the refractive index at the interface. With this powerful platform, we have revealed the trajectories and quantities of ONBs in cells and demonstrated the relation between their size and diffusion coefficient. We have also evaluated the presence of ONBs in the nucleus with respect to increasing incubation time and have quantified the uptake in single cells in ex vivo tumor tissues. Our results demonstrate that HSDFM can be a versatile platform to detect and measure cellulosic nanoparticles at the single-cell level and to assess the dynamics and trajectories of this delivery system.
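The relation between particle size and diffusion coefficient is conventionally modeled by the Stokes-Einstein equation; a sketch with assumed values for water near body temperature (illustrative parameters, not measurements from this study):

```python
import math

def diffusion_coefficient(radius_m, temp_k=310.0, viscosity_pa_s=7e-4):
    """Stokes-Einstein estimate D = kT / (6 * pi * eta * r): the standard
    model relating a diffusing particle's (or nanobubble's) hydrodynamic
    radius to its diffusion coefficient in a viscous medium."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)
```

Inverting this relation is how single-particle tracking studies typically infer size from measured trajectories: a larger nanobubble diffuses proportionally more slowly.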
CellTracker (not only) for dummies.
Piccinini, Filippo; Kiss, Alexa; Horvath, Peter
2016-03-15
Time-lapse experiments play a key role in studying the dynamic behavior of cells. Single-cell tracking is one of the fundamental tools for such analyses. The vast majority of the recently introduced cell tracking methods are limited to fluorescently labeled cells. An equally important limitation is that most software cannot be effectively used by biologists without reasonable expertise in image processing. Here we present CellTracker, a user-friendly open-source software tool for tracking cells imaged with various imaging modalities, including fluorescent, phase contrast and differential interference contrast (DIC) techniques. CellTracker is written in MATLAB (The MathWorks, Inc., USA). It works with Windows, Macintosh and UNIX-based systems. Source code and graphical user interface (GUI) are freely available at: http://celltracker.website/ horvath.peter@brc.mta.hu Supplementary data are available at Bioinformatics online.
GeoTrack: bio-inspired global video tracking by networks of unmanned aircraft systems
NASA Astrophysics Data System (ADS)
Barooah, Prabir; Collins, Gaemus E.; Hespanha, João P.
2009-05-01
Research from the Institute for Collaborative Biotechnologies (ICB) at the University of California at Santa Barbara (UCSB) has identified swarming algorithms used by flocks of birds and schools of fish that enable these animals to move in tight formation and cooperatively track prey with minimal estimation errors, while relying solely on local communication between the animals. This paper describes ongoing work by UCSB, the University of Florida (UF), and the Toyon Research Corporation on the utilization of these algorithms to dramatically improve the capabilities of small unmanned aircraft systems (UAS) to cooperatively locate and track ground targets. Our goal is to construct an electronic system, called GeoTrack, through which a network of hand-launched UAS use dedicated on-board processors to perform multi-sensor data fusion. The nominal sensors employed by the system will be EO/IR video cameras on the UAS. When GMTI or other wide-area sensors are available, as in a layered sensing architecture, data from the standoff sensors will also be fused into the GeoTrack system. The output of the system will be position and orientation information on stationary or mobile targets in a global geo-stationary coordinate system. The design of the GeoTrack system requires significant advances beyond the current state of the art in distributed control for a swarm of UAS to accomplish autonomous coordinated tracking; target geo-location using distributed sensor fusion by a network of UAS communicating over an unreliable channel; and unsupervised real-time image-plane video tracking on low-powered computing platforms.
NASA Astrophysics Data System (ADS)
Liu, Wen P.; Armand, Mehran; Otake, Yoshito; Taylor, Russell H.
2011-03-01
Percutaneous femoroplasty [1], or femoral bone augmentation, is a prospective alternative treatment for reducing the risk of fracture in patients with severe osteoporosis. We are developing a surgical robotics system that will assist orthopaedic surgeons in planning and performing a patient-specific augmentation of the femur with bone cement. This collaborative project, sponsored by the National Institutes of Health (NIH), has been the topic of previous publications [2],[3] from our group. This paper presents modifications to the pose recovery of a fluoroscope tracking (FTRAC) fiducial during our process of 2D/3D registration of X-ray intraoperative images to preoperative CT data. We show improved automation of the initial pose estimation as well as lower projection errors with the advent of a multi-image pose optimization step.
NASA Astrophysics Data System (ADS)
Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won
2005-12-01
Embedded systems have been applied to many fields, including households and industrial sites, and user interfaces with simple on-screen displays have become increasingly common. User demands are growing and the range of applicable fields is widening thanks to the high penetration rate of the Internet, so demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on an embedded Linux system, real-time broadcasting of video images over the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and consecutive frames from the web camera are compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board uses the S3C2410 MPU, built around the ARM920T core from Samsung. The operating system is a ported embedded Linux kernel with a mounted root file system. The stored images are sent to the client PC through a web browser, using the network functions of Linux and a program based on the TCP/IP protocol.
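The displacement-vector step described above, comparing consecutive frames by block matching, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the frame size, block size and search range are assumptions.

```python
import numpy as np

def block_match(prev_frame, cur_frame, block_xy, block=16, search=8):
    """Find the displacement of a block between two grayscale frames
    by minimising the sum of absolute differences (SAD)."""
    y, x = block_xy
    ref = prev_frame[y:y + block, x:x + block].astype(np.int32)
    best, best_dxy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > cur_frame.shape[0] \
                    or xx + block > cur_frame.shape[1]:
                continue  # candidate block would leave the frame
            cand = cur_frame[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_dxy = sad, (dy, dx)
    return best_dxy  # (dy, dx) displacement vector

# Synthetic example: shift a random frame by (2, 3) and recover the motion.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64)).astype(np.uint8)
cur = np.roll(np.roll(prev, 2, axis=0), 3, axis=1)
print(block_match(prev, cur, (24, 24)))  # → (2, 3)
```

In the described system, the resulting (dy, dx) vector would then drive the pan/tilt motors over the RS232 link.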
NASA Astrophysics Data System (ADS)
Ryu, Inkeon; Kim, Daekeun
2018-04-01
A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, with a view to quantifying the constellations of, and measuring the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
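The bead-based registration idea can be illustrated by a least-squares rigid alignment of matched microsphere coordinates between two overlapping tiles. This is a generic sketch (the Kabsch algorithm), not the method of the paper; the bead coordinates are synthetic.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch algorithm) from matched 3D bead coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate beads 10 degrees about z and translate them.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
beads = np.random.default_rng(1).uniform(0, 100, (5, 3))
moved = beads @ R_true.T + np.array([4.0, -2.0, 1.0])
R, t = rigid_transform(beads, moved)
print(np.allclose(beads @ R.T + t, moved))  # → True
```

With the transform recovered from the beads, the step-scanned tiles could then be resampled into a common frame before stitching.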
SKYWARD: the next generation airborne infrared search and track
NASA Astrophysics Data System (ADS)
Fortunato, L.; Colombi, G.; Ondini, A.; Quaranta, C.; Giunti, C.; Sozzi, B.; Balzarotti, G.
2016-05-01
Infrared Search and Track systems are an essential element of modern and future combat aircraft. Passive automatic search, detection and tracking functions are key points for silent operations or jammed tactical scenarios. SKYWARD represents the latest evolution of IRST technology, in which high quality electro-optical components, advanced algorithms, and efficient hardware and software solutions are harmonically integrated to provide high-end affordable performance. Additionally, the reduction of critical opto-mechanical elements optimises weight and volume and increases the overall reliability. Multiple operative modes dedicated to different situations are available; many options can be selected among multiple or single target tracking, for surveillance or engagement, and imaging, for landing or navigation aid, assuring the maximum system flexibility. The high quality 2D-IR sensor is exploited by multiple parallel processing chains, based on linear and non-linear techniques, to extract possible targets from the background in different conditions, with false alarm rate control. A widely tested track processor manages a large number of candidate targets simultaneously and allows discriminating real targets from noise whilst operating with low target-to-background contrasts. The capability of providing reliable passive range estimation is an additional qualifying element of the system. Particular care has been dedicated to the detector non-uniformities, a possible limiting factor for distant target detection, as well as to the design of the electro-optics for a harsh airborne environment. The system can be configured for the LWIR or MWIR waveband according to the customer's operational requirements. An embedded data recorder saves all the necessary images and data for mission debriefing, particularly useful during in-flight system integration and tuning.
Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug
2011-05-01
Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. We obtain the mean in-plane error to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, the target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error.
We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers us good insights to understand the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.
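The 3D TRE described above, the Euclidean distance between the video-reconstructed targets and the same targets identified in CT, reduces to a short computation once both point sets are in a common frame. The fiducial coordinates below are hypothetical, for illustration only.

```python
import numpy as np

def target_registration_error(reconstructed, ct_targets):
    """Per-target 3D Euclidean distance (TRE) between video-reconstructed
    fiducial positions and the same targets identified in CT (both in mm,
    already expressed in a common coordinate frame)."""
    diff = np.asarray(reconstructed, float) - np.asarray(ct_targets, float)
    return np.linalg.norm(diff, axis=1)

# Hypothetical coordinates for three of the internal fiducials (mm).
recon = [[10.0, 20.0, 5.0], [32.0, 11.5, 8.0], [7.2, 40.0, 3.3]]
ct    = [[10.3, 20.0, 5.4], [32.0, 11.5, 8.0], [7.2, 39.0, 3.3]]
tre = target_registration_error(recon, ct)
print(tre)  # per-target TRE in mm; the mean is the usual summary statistic
```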
A full field, 3-D velocimeter for microgravity crystallization experiments
NASA Technical Reports Server (NTRS)
Brodkey, Robert S.; Russ, Keith M.
1991-01-01
The programming and algorithms needed for implementing a full-field, 3-D velocimeter for laminar flow systems and the appropriate hardware to fully implement this ultimate system are discussed. It appears that imaging using a synched pair of video cameras and digitizer boards with synched rails for camera motion will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100 the particle identification can take 15 to 30 seconds, with the tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.
Muon trackers for imaging a nuclear reactor
NASA Astrophysics Data System (ADS)
Kume, N.; Miyadera, H.; Morris, C. L.; Bacon, J.; Borozdin, K. N.; Durham, J. M.; Fuzita, K.; Guardincerri, E.; Izumi, M.; Nakayama, K.; Saltus, M.; Sugita, T.; Takakura, K.; Yoshioka, K.
2016-09-01
A detector system for assessing damage to the cores of the Fukushima Daiichi nuclear reactors using cosmic-ray muon tomography was developed. The system consists of a pair of drift-tube tracking detectors of 7.2 × 7.2 m² area. Each muon tracker consists of 6 x-layer and 6 y-layer drift-tube detectors. Each tracker is capable of measuring muon tracks with 12 mrad angular resolution, and can operate in a 50 μSv/h radiation environment by removing gamma-induced background with a novel time-coincidence logic. The estimated resolution for observing nuclear fuel debris at Fukushima Daiichi is 0.3 m when the core is imaged from outside the reactor building.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borot de Battisti, Maxence, E-mail: M.E.P.Borot@um
Purpose: The development of MR-guided high dose rate (HDR) brachytherapy is under investigation due to the excellent tumor and organs at risk visualization of MRI. However, MR-based localization of needles (including catheters or tubes) has inherently a low update rate and the required image interpretation can be hampered by signal voids arising from blood vessels or calcifications, limiting the precision of the needle guidance and reconstruction. In this paper, a new needle tracking prototype is investigated using fiber Bragg gratings (FBG)-based sensing: this prototype involves a MR-compatible stylet composed of three optic fibers with nine sets of embedded FBG sensors each. This stylet can be inserted into brachytherapy needles and allows a fast measurement of the needle deflection. This study aims to assess the potential of FBG-based sensing for real-time needle (including catheter or tube) tracking during MR-guided intervention. Methods: First, the MR compatibility of FBG-based sensing and its accuracy was evaluated. Different known needle deflections were measured using FBG-based sensing during simultaneous MR imaging. Then, a needle tracking procedure using FBG-based sensing was proposed. This procedure involved a MR-based calibration of the FBG-based system performed prior to the interventional procedure. The needle tracking system was assessed in an experiment with a moving phantom during MR imaging. The FBG-based system was quantified by comparing the gold-standard shapes, the shape manually segmented on MRI and the FBG-based measurements. Results: The evaluation of the MR compatibility of FBG-based sensing and its accuracy shows that the needle deflection could be measured with an accuracy of 0.27 mm on average. Besides, the FBG-based measurements were comparable to the uncertainty of MR-based measurements, estimated at half the voxel size in the MR image.
Finally, the mean (standard deviation) Euclidean distance between MR- and FBG-based needle position measurements was equal to 0.79 mm (0.37 mm). The update rate and latency of the FBG-based needle position measurement were 100 ms and 300 ms, respectively. Conclusions: The FBG-based needle tracking procedure proposed in this paper is able to determine the position of the complete needle, under MR imaging, with better accuracy and precision, higher update rate, and lower latency compared to current MR-based needle localization methods. This system would be eligible for MR-guided brachytherapy, in particular for improved needle guidance and reconstruction.
Partially-overlapped viewing zone based integral imaging system with super wide viewing angle.
Xiong, Zhao-Long; Wang, Qiong-Hua; Li, Shu-Li; Deng, Huan; Ji, Chao-Chao
2014-09-22
In this paper, we analyze the relationship between the viewer and the viewing zones of an integral imaging (II) system and present a partially-overlapped viewing zone (POVZ) based integral imaging system with a super wide viewing angle. In the proposed system, the viewing angle can be wider than that of the conventional tracking-based II system. In addition, the POVZ eliminates the flipping and time delay of the 3D scene as well. The proposed II system has a super wide viewing angle of 120°, about twice as wide as that of the conventional one, without the flipping effect.
Magnetic navigation for thoracic aortic stent-graft deployment using ultrasound image guidance.
Luo, Zhe; Cai, Junfeng; Wang, Su; Zhao, Qiang; Peters, Terry M; Gu, Lixu
2013-03-01
We propose a system for thoracic aortic stent-graft deployment that employs a magnetic tracking system (MTS) and intraoperative ultrasound (US). A preoperative plan is first performed using a graphics processing unit (GPU)-accelerated cardiac modeling method to determine the target position of the stent-graft. During the surgery, an MTS is employed to track sensors embedded in the catheter, cannula, and the US probe, while a fiducial landmark based registration is used to map the patient's coordinate system to the image coordinate system. The surgical target is tracked in real time via a calibrated intraoperative US image. Under the guidance of the MTS integrated with the real-time US images, the stent-graft can be deployed to the target position without the use of ionizing radiation. This navigation approach was validated using both phantom and animal studies. In the phantom study, we demonstrate a US calibration accuracy of 1.5 ± 0.47 mm, and a deployment error of 1.4 ± 0.16 mm. In the animal study, we performed experiments on five porcine subjects and recorded fiducial, target, and deployment errors of 2.5 ± 0.32, 4.2 ± 0.78, and 2.43 ± 0.69 mm, respectively. These results demonstrate that delivery and deployment of a thoracic stent-graft under MTS-guided navigation using US imaging is feasible and appropriate for clinical application.
Wei, Kuo-Chen; Lin, Feng-Wei; Huang, Chiung-Yin; Ma, Chen-Chi M; Chen, Ju-Yu; Feng, Li-Ying; Yang, Hung-Wei
To date, knowing how to identify the location of chemotherapeutic agents in the human body after injection is still a challenge. Therefore, it is urgent to develop a drug delivery system with molecular imaging tracking ability to accurately understand the distribution, location, and concentration of a drug in living organisms. In this study, we developed bovine serum albumin (BSA)-based nanoparticles (NPs) with dual magnetic resonance (MR) and fluorescence imaging modalities (fluorescein isothiocyanate [FITC]-BSA-Gd/1,3-bis(2-chloroethyl)-1-nitrosourea [BCNU] NPs) to deliver BCNU for inhibition of brain tumor cells (MBR 261-2). These BSA-based NPs are water dispersible, stable, and biocompatible, as confirmed by XTT cell viability assay. In vitro phantom and in vivo MR and fluorescence imaging experiments show that the developed FITC-BSA-Gd/BCNU NPs enable dual MR and fluorescence imaging for monitoring cellular uptake and distribution in tumors. The T1 relaxivity (R1) of FITC-BSA-Gd/BCNU NPs was 3.25 mM⁻¹ s⁻¹, which was similar to that of the commercial T1 contrast agent (R1 = 3.36 mM⁻¹ s⁻¹). The results indicate that this multifunctional drug delivery system has potential for bioimaging-based tracking of chemotherapeutic agents in vitro and in vivo for cancer therapy.
Tracking and Quantifying Developmental Processes in C. elegans Using Open-source Tools.
Dutta, Priyanka; Lehmann, Christina; Odedra, Devang; Singh, Deepika; Pohl, Christian
2015-12-16
Quantitatively capturing developmental processes is crucial to derive mechanistic models and key to identify and describe mutant phenotypes. Here protocols are presented for preparing embryos and adult C. elegans animals for short- and long-term time-lapse microscopy and methods for tracking and quantification of developmental processes. The methods presented are all based on C. elegans strains available from the Caenorhabditis Genetics Center and on open-source software that can be easily implemented in any laboratory independently of the microscopy system used. A reconstruction of a 3D cell-shape model using the modelling software IMOD, manual tracking of fluorescently-labeled subcellular structures using the multi-purpose image analysis program Endrov, and an analysis of cortical contractile flow using PIVlab (Time-Resolved Digital Particle Image Velocimetry Tool for MATLAB) are shown. It is discussed how these methods can also be deployed to quantitatively capture other developmental processes in different models, e.g., cell tracking and lineage tracing, tracking of vesicle flow.
Reconstructing the flight kinematics of swarming and mating in wild mosquitoes
Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.
2012-01-01
We describe a novel tracking system for reconstructing three-dimensional tracks of individual mosquitoes in wild swarms and present the results of validating the system by filming swarms and mating events of the malaria mosquito Anopheles gambiae in Mali. The tracking system is designed to address noisy, low frame-rate (25 frames per second) video streams from a stereo camera system. Because flying A. gambiae move at 1–4 m s⁻¹, they appear as faded streaks in the images or sometimes do not appear at all. We provide an adaptive algorithm to search for missing streaks and a likelihood function that uses streak endpoints to extract velocity information. A modified multi-hypothesis tracker probabilistically addresses occlusions and a particle filter estimates the trajectories. The output of the tracking algorithm is a set of track segments with an average length of 0.6–1 s. The segments are verified and combined under human supervision to create individual tracks up to the duration of the video (90 s). We evaluate tracking performance using an established metric for multi-target tracking and validate the accuracy using independent stereo measurements of a single swarm. Three-dimensional reconstructions of A. gambiae swarming and mating events are presented. PMID:22628212
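As a rough illustration of the trajectory-estimation step, a minimal bootstrap particle filter for a constant-velocity target with noisy position observations is sketched below. This is a generic 1D sketch, not the paper's modified multi-hypothesis tracker; all parameters (particle count, noise levels, 25 fps timing) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_filter(obs, n=2000, dt=0.04, q=0.5, r=0.3):
    """Minimal bootstrap particle filter: constant-velocity motion model
    with Gaussian position observations (dt = 1/25 s, i.e. 25 fps video)."""
    parts = np.zeros((n, 2))                    # columns: position, velocity
    parts[:, 0] = obs[0] + rng.normal(0, r, n)  # initialise near the first fix
    parts[:, 1] = rng.normal(0, 2.0, n)
    estimates = []
    for z in obs:
        parts[:, 0] += parts[:, 1] * dt                   # predict positions
        parts[:, 1] += rng.normal(0, q, n)                # diffuse velocities
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2)   # observation weights
        parts = parts[rng.choice(n, n, p=w / w.sum())]    # resample
        estimates.append(parts[:, 0].mean())
    return np.array(estimates)

# Synthetic 1D track: 2 m/s motion sampled at 25 fps with 0.3 m position noise.
truth = 2.0 * np.arange(25) / 25.0
obs = truth + rng.normal(0, 0.3, truth.size)
est = bootstrap_filter(obs)
print(round(float(np.abs(est - truth).mean()), 3))  # below the raw noise level
```

A full reimplementation of the paper's method would extend this to 3D, use the streak-endpoint likelihood, and handle occlusions through multiple hypotheses.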
A difference tracking algorithm based on discrete sine transform
NASA Astrophysics Data System (ADS)
Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun
2018-04-01
Target tracking is an important field of computer vision. Template matching tracking algorithms based on the sum of squared differences (SSD) and the normalized correlation coefficient (NCC) are very sensitive to gray-level changes in the image: when the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the tracked target can be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of changes in image gray level or brightness. The algorithm, which combines the discrete sine transform with a difference operation, maps the target image into a digital sequence. A Kalman filter predicts the target position, the Hamming distance measures the similarity between candidate windows and the template, and the window closest to the template is taken as the target, which in turn updates the template. Target tracking is achieved on this basis. The algorithm is tested in this paper: compared with the SSD and NCC template matching algorithms, it tracks the target stably when the image gray level or brightness changes, and its tracking speed meets the real-time requirement.
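The NCC baseline that the paper compares against can be sketched as exhaustive template matching over a search image. This is a generic illustration, not the authors' code; the image and template are synthetic.

```python
import numpy as np

def ncc(patch, template):
    """Normalized correlation coefficient between a patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    return float((p * t).sum() /
                 (np.sqrt((p * p).sum() * (t * t).sum()) + 1e-12))

def match_template(image, template):
    """Exhaustive NCC matching; returns the top-left corner of the best match."""
    h, w = template.shape
    H, W = image.shape
    scores = np.full((H - h + 1, W - w + 1), -np.inf)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            scores[y, x] = ncc(image[y:y + h, x:x + w].astype(float),
                               template.astype(float))
    return tuple(int(v) for v in
                 np.unravel_index(np.argmax(scores), scores.shape))

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (40, 40)).astype(float)
tmpl = img[12:20, 17:25].copy()
print(match_template(img, tmpl))  # → (12, 17)
```

Because NCC subtracts the mean and normalizes by the standard deviation, it tolerates linear brightness changes, but as the abstract notes it still degrades under more general gray-level distortions.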
NASA Astrophysics Data System (ADS)
Villano, Michelangelo; Papathanassiou, Konstantinos P.
2011-03-01
The estimation of the local differential shift between synthetic aperture radar (SAR) images has proven to be an effective technique for monitoring glacier surface motion. As images acquired over glaciers by short wavelength SAR systems, such as TerraSAR-X, often suffer from a lack of coherence, image features have to be exploited for the shift estimation (feature-tracking). The present paper addresses feature-tracking with special attention to the feasibility requirements and the achievable accuracy of the shift estimation. In particular, the dependence of the performance on image characteristics, such as texture parameters, signal-to-noise ratio (SNR) and resolution, as well as on processing techniques (despeckling, normalised cross-correlation versus maximum likelihood estimation) is analysed by means of Monte-Carlo simulations. TerraSAR-X data acquired over the Helheim glacier, Greenland, and the Aletsch glacier, Switzerland, have been processed to validate the simulation results. Feature-tracking can benefit from the availability of fully-polarimetric data. As some image characteristics, in fact, are polarisation-dependent, the selection of an optimum polarisation leads to improved performance. Furthermore, fully-polarimetric SAR images can be despeckled without degrading the resolution, so that additional (smaller-scale) features can be exploited.
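The core shift-estimation step, locating the offset between two co-registered image patches, can be sketched with FFT-based phase correlation. This is a generic illustration under the assumption of a pure integer-pixel circular shift, not the paper's normalised cross-correlation or maximum likelihood estimators.

```python
import numpy as np

def estimate_shift(a, b):
    """Integer-pixel offset of b relative to a via phase correlation:
    b is (approximately) a circularly shifted by the returned (dy, dx)."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)       # cross-power spectrum
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # whitened correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map offsets in the upper half of the range to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(4)
ref = rng.standard_normal((64, 64))
shifted = np.roll(np.roll(ref, 5, axis=0), -3, axis=1)
print(estimate_shift(ref, shifted))  # → (5, -3)
```

In a feature-tracking pipeline, an estimate like this would be computed per patch to build the glacier displacement field, with sub-pixel refinement around the correlation peak.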
Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging
NASA Astrophysics Data System (ADS)
Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.
2010-04-01
The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the Hyperspec™ particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.
NASA Astrophysics Data System (ADS)
Ma, Kevin; Wang, Ximing; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent
2015-03-01
In the past, we have developed and displayed a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification results stored in DICOM-SR format. The web-based system aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient treatments and disease tracking. This year, we have further developed the eFolder system to handle big data analysis and data mining in today's medical imaging field. The database has been updated to allow data mining and data look-up from DICOM-SR lesion analysis contents. Longitudinal studies are tracked, and any changes in lesion volumes and brain parenchyma volumes are calculated and shown on the web-based user interface as graphical representations. Longitudinal lesion characteristic changes are compared with patients' disease history, including treatments, symptom progressions, and any other changes in the disease profile. The image viewer is updated such that imaging studies can be viewed side-by-side to allow visual comparisons. We aim to use the web-based medical imaging informatics eFolder system to demonstrate big data analysis in medical imaging, and use the analysis results to predict MS disease trends and patterns in Hispanic and Caucasian populations in our pilot study. The discovery of disease patterns among the two ethnicities is a big data analysis result that will help lead to personalized patient care and treatment planning.
Markerless motion estimation for motion-compensated clinical brain imaging
NASA Astrophysics Data System (ADS)
Kyme, Andre Z.; Se, Stephen; Meikle, Steven R.; Fulton, Roger R.
2018-05-01
Motion-compensated brain imaging can dramatically reduce the artifacts and quantitative degradation associated with voluntary and involuntary subject head motion during positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT). However, motion-compensated imaging protocols are not in widespread clinical use for these modalities. A key reason for this seems to be the lack of a practical motion tracking technology that allows for smooth and reliable integration of motion-compensated imaging protocols in the clinical setting. We seek to address this problem by investigating the feasibility of a highly versatile optical motion tracking method for PET, SPECT and CT geometries. The method requires no attached markers, relying exclusively on the detection and matching of distinctive facial features. We studied the accuracy of this method in 16 volunteers in a mock imaging scenario by comparing the estimated motion with an accurate marker-based method used in applications such as image guided surgery. A range of techniques to optimize performance of the method were also studied. Our results show that the markerless motion tracking method is highly accurate (<2 mm discrepancy against a benchmarking system) on an ethnically diverse range of subjects and, moreover, exhibits lower jitter and estimation of motion over a greater range than some marker-based methods. Our optimization tests indicate that the basic pose estimation algorithm is very robust but generally benefits from rudimentary background masking. Further marginal gains in accuracy can be achieved by accounting for non-rigid motion of features. Efficiency gains can be achieved by capping the number of features used for pose estimation provided that these features adequately sample the range of head motion encountered in the study. 
These proof-of-principle data suggest that markerless motion tracking is amenable to motion-compensated brain imaging and holds good promise for a practical implementation in clinical PET, SPECT and CT systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Kuangcai
The goal of this study is to help with future data analysis and experiment designs in rotational dynamics research using DIC-based SPORT technique. Most of the current studies using DIC-based SPORT techniques are technical demonstrations. Understanding the mechanisms behind the observed rotational behaviors of the imaging probes should be the focus of the future SPORT studies. More efforts are still needed in the development of new imaging probes, particle tracking methods, instrumentations, and advanced data analysis methods to further extend the potential of DIC-based SPORT technique.
Electromagnetic tracking in the clinical environment
Yaniv, Ziv; Wilson, Emmanuel; Lindisch, David; Cleary, Kevin
2009-01-01
When choosing an electromagnetic tracking system (EMTS) for image-guided procedures several factors must be taken into consideration. Among others these include the system’s refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system interaction with the environment, whether the sensors can be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. To date, the only factors that have been studied extensively are the accuracy and the susceptibility of EMTSs to distortions caused by ferromagnetic materials. In this paper the authors shift the focus from analysis of system accuracy and stability to the broader set of factors influencing the utility of EMTS in the clinical environment. The authors provide an analysis based on all of the factors specified above, as assessed in three clinical environments. They evaluate two commercial tracking systems, the Aurora system from Northern Digital Inc., and the 3D Guidance system with three different field generators from Ascension Technology Corp. The authors show that these systems are applicable to specific procedures and specific environments, but that currently, no single system configuration provides a comprehensive solution across procedures and environments. PMID:19378748
Gómez-Villafuertes, Rosa; Paniagua-Herranz, Lucía; Gascon, Sergio; de Agustín-Durán, David; Ferreras, María de la O; Gil-Redondo, Juan Carlos; Queipo, María José; Menendez-Mendez, Aida; Pérez-Sen, Ráquel; Delicado, Esmerilda G; Gualix, Javier; Costa, Marcos R; Schroeder, Timm; Miras-Portugal, María Teresa; Ortega, Felipe
2017-12-16
Understanding the mechanisms that control critical biological events of neural cell populations, such as proliferation, differentiation, or cell fate decisions, will be crucial to design therapeutic strategies for many diseases affecting the nervous system. Current methods to track cell populations rely on their final outcomes in still images and they generally fail to provide sufficient temporal resolution to identify behavioral features in single cells. Moreover, variations in cell death, behavioral heterogeneity within a cell population, dilution, spreading, or the low efficiency of the markers used to analyze cells are all important handicaps that will lead to incomplete or incorrect read-outs of the results. Conversely, performing live imaging and single cell tracking under appropriate conditions represents a powerful tool to monitor each of these events. Here, a time-lapse video-microscopy protocol, followed by post-processing, is described to track neural populations with single cell resolution, employing specific software. The methods described enable researchers to address essential questions regarding the cell biology and lineage progression of distinct neural populations.
Ong, Lee-Ling S; Xinghua Zhang; Kundukad, Binu; Dauwels, Justin; Doyle, Patrick; Asada, H Harry
2016-08-01
An approach to automatically detect bacteria division with temporal models is presented. To understand how bacteria migrate and proliferate to form complex multicellular behaviours such as biofilms, it is desirable to track individual bacteria and detect cell division events. Unlike eukaryotic cells, prokaryotic cells such as bacteria lack distinctive features, making bacterial division difficult to detect in a single image frame. Furthermore, bacteria may detach, migrate close to other bacteria and orientate themselves at an angle to the horizontal plane. Our system trains a hidden conditional random field (HCRF) model from tracked and aligned bacteria division sequences. The HCRF model classifies a set of image frames as division or otherwise. The performance of our HCRF model is compared with a hidden Markov model (HMM). The results show that an HCRF classifier outperforms an HMM classifier. In 2D bright-field microscopy data, it is a challenge to separate individual bacteria and associate observations with tracks; automatic detection of sequences with bacteria division will improve tracking accuracy.
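The HMM baseline used for comparison can be illustrated with the forward algorithm scoring a frame sequence under two competing models and picking the more likely one. The two-state models and the binary per-frame feature below are toy assumptions, not the trained models of the paper.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step scaling for numerical stability)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two toy 2-state HMMs over a binary per-frame feature
# (0 = "single cell appearance", 1 = "constriction visible").
pi = np.array([1.0, 0.0])
A_div   = np.array([[0.7, 0.3], [0.0, 1.0]])  # may progress to "dividing"
A_nodiv = np.array([[1.0, 0.0], [0.0, 1.0]])  # stays in the first state
B = np.array([[0.9, 0.1], [0.1, 0.9]])        # emission probabilities
seq = [0, 0, 1, 1, 1]
# Classify: is the sequence better explained by the division model?
print(forward_loglik(seq, pi, A_div, B) >
      forward_loglik(seq, pi, A_nodiv, B))  # → True
```

An HCRF replaces these generative models with a discriminatively trained conditional model over the same aligned sequences, which is what gives it the edge reported in the abstract.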
Automated identification and tracking of polar-cap plasma patches at solar minimum
NASA Astrophysics Data System (ADS)
Burston, R.; Hodges, K.; Astin, I.; Jayachandran, P. T.
2014-03-01
A method of automatically identifying and tracking polar-cap plasma patches, utilising data inversion and feature-tracking methods, is presented. A well-established and widely used 4-D ionospheric imaging algorithm, the Multi-Instrument Data Assimilation System (MIDAS), inverts slant total electron content (TEC) data from ground-based Global Navigation Satellite System (GNSS) receivers to produce images of the free electron distribution in the polar-cap ionosphere. These are integrated to form vertical TEC maps. A flexible feature-tracking algorithm, TRACK, previously used extensively in meteorological storm-tracking studies is used to identify and track maxima in the resulting 2-D data fields. Various criteria are used to discriminate between genuine patches and "false-positive" maxima such as the continuously moving day-side maximum, which results from the Earth's rotation rather than plasma motion. Results for a 12-month period at solar minimum, when extensive validation data are available, are presented. The method identifies 71 separate structures consistent with patch motion during this time. The limitations of solar minimum and the consequent small number of patches make climatological inferences difficult, but the feasibility of the method for patches larger than approximately 500 km in scale is demonstrated and a larger study incorporating other parts of the solar cycle is warranted. Possible further optimisation of discrimination criteria, particularly regarding the definition of a patch in terms of its plasma concentration enhancement over the surrounding background, may improve results.
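The core patch-identification step (find maxima in a 2-D TEC field, then discriminate against the background) can be sketched as follows. This is not the TRACK algorithm itself; the 3x3 neighbourhood test, the median-as-background estimate, and the 1.5x enhancement threshold are simplifying assumptions:

```python
def find_patch_maxima(tec, min_enhancement=1.5):
    """Locate local maxima in a 2-D TEC map and keep only those whose
    value exceeds `min_enhancement` times the median background."""
    rows, cols = len(tec), len(tec[0])
    flat = sorted(v for row in tec for v in row)
    background = flat[len(flat) // 2]  # median as a crude background level
    maxima = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            v = tec[i][j]
            neighbours = [tec[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if v > max(neighbours) and v > min_enhancement * background:
                maxima.append((i, j, v))
    return maxima

tec_map = [
    [4, 4, 4, 4, 4],
    [4, 9, 4, 4, 4],   # a patch-like enhancement at (1, 1)
    [4, 4, 4, 5, 4],   # a weak bump at (2, 3), below the threshold
    [4, 4, 4, 4, 4],
]
print(find_patch_maxima(tec_map))  # → [(1, 1, 9)]
```

Tracking then links such maxima across successive maps, which is where criteria like "moves with the plasma rather than with the Earth's rotation" discriminate genuine patches from the day-side maximum.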
Sensors management in robotic neurosurgery: the ROBOCAST project.
Vaccarella, Alberto; Comparetti, Mirko Daniele; Enquobahrie, Andinet; Ferrigno, Giancarlo; De Momi, Elena
2011-01-01
Robot and computer-aided surgery platforms bring a variety of sensors into the operating room. These sensors generate information to be synchronized and merged for improving the accuracy and the safety of the surgical procedure for both patients and operators. In this paper, we present our work on the development of a sensor management architecture that is used to gather and fuse data from localization systems, such as optical and electromagnetic trackers, and from ultrasound imaging devices. The architecture follows a modular client-server approach and was implemented within the EU-funded project ROBOCAST (FP7 ICT 215190). Furthermore, it is based on well-maintained open-source libraries such as OpenCV and the Image-Guided Surgery Toolkit (IGSTK), which are supported by a worldwide community of developers and allow a significant reduction of software costs. We conducted experiments to evaluate the performance of the sensor manager module. We computed the response time needed for a client to receive tracking data or video images, and the time lag between synchronous acquisition with an optical tracker and ultrasound machine. Results showed a median delay of 1.9 ms for a client request of tracking data and about 40 ms for US images; these values are compatible with the data generation rate (20-30 Hz for the tracking system and 25 fps for PAL video). Simultaneous acquisitions were performed with an optical tracking system and US imaging device: data were aligned according to the timestamp associated with each sample and the delay was estimated with a cross-correlation study. A median delay of 230 ms was calculated, showing that real-time 3D reconstruction is not feasible (an offline temporal calibration is needed), although a slow exploration is possible. In conclusion, as far as asleep-patient neurosurgery is concerned, the proposed setup is indeed useful for registration error correction, because brain shift occurs with a time constant of a few tens of minutes.
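The cross-correlation delay estimate between the two timestamped streams can be sketched in a few lines. The signals, sampling rate, and 5-sample delay below are invented; only the method (maximise correlation over candidate lags) reflects the abstract:

```python
import math

def estimate_lag(ref, delayed, max_lag):
    """Estimate the delay (in samples) of `delayed` relative to `ref`
    by maximising the cross-correlation over non-negative candidate lags."""
    def corr_at(lag):
        pairs = [(ref[i], delayed[i + lag])
                 for i in range(len(ref)) if 0 <= i + lag < len(delayed)]
        return sum(a * b for a, b in pairs)
    return max(range(0, max_lag + 1), key=corr_at)

# Simulated tracker stream and an ultrasound stream delayed by 5 samples.
fs = 25.0                      # 25 fps, as for PAL video
ref = [math.sin(2 * math.pi * 1.0 * n / fs) for n in range(200)]
delay_samples = 5              # 5 / 25 fps = 200 ms, near the reported 230 ms
delayed = [0.0] * delay_samples + ref[:-delay_samples]
lag = estimate_lag(ref, delayed, max_lag=20)
print(lag * 1000.0 / fs, "ms")  # → 200.0 ms
```

In practice one would correlate the tracked motion component against the corresponding image-derived motion, but the lag-search itself is identical.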
NASA Astrophysics Data System (ADS)
Pattke, Marco; Martin, Manuel; Voit, Michael
2017-05-01
Tracking people with cameras in public areas is common today. However, with an increasing number of cameras it becomes harder and harder to view the data manually. Automatic image exploitation could help to solve this problem, especially in safety-critical areas. Setting up such a system can, however, be difficult because of its increased complexity. Sensor placement is critical to ensure that people are detected and tracked reliably. We try to solve this problem using a simulation framework that is able to simulate different camera setups in the desired environment, including animated characters. We combine this framework with our self-developed distributed and scalable system for people tracking to test its effectiveness, and we show the results of the tracking system in real time in the simulated environment.
Getting the Bigger Picture With Digital Surveillance
NASA Technical Reports Server (NTRS)
2002-01-01
Through a Space Act Agreement, Diebold, Inc., acquired the exclusive rights to Glenn Research Center's patented video observation technology, originally designed to accelerate video image analysis for various ongoing and future space applications. Diebold implemented the technology into its AccuTrack digital, color video recorder, a state-of-the-art surveillance product that uses motion detection for around-the-clock monitoring. AccuTrack captures digitally signed images and transaction data in real-time. This process replaces the onerous tasks involved in operating a VCR-based surveillance system, and subsequently eliminates the need for central viewing and tape archiving locations altogether. AccuTrack can monitor an entire bank facility, including four automated teller machines, multiple teller lines, and new account areas, all from one central location.
Using LabView for real-time monitoring and tracking of multiple biological objects
NASA Astrophysics Data System (ADS)
Nikolskyy, Aleksandr I.; Krasilenko, Vladimir G.; Bilynsky, Yosyp Y.; Starovier, Anzhelika
2017-04-01
Today real-time studying and tracking of movement dynamics of various biological objects is important and widely researched. Features of objects, conditions of their visualization and model parameters strongly influence the choice of optimal methods and algorithms for a specific task. Therefore, to automate the processes of adaptation of recognition tracking algorithms, several Labview project trackers are considered in the article. Projects allow changing templates for training and retraining the system quickly. They adapt to the speed of objects and statistical characteristics of noise in images. New functions of comparison of images or their features, descriptors and pre-processing methods will be discussed. The experiments carried out to test the trackers on real video files will be presented and analyzed.
Kinect based real-time position calibration for nasal endoscopic surgical navigation system
NASA Astrophysics Data System (ADS)
Fan, Jingfan; Yang, Jian; Chu, Yakui; Ma, Shaodong; Wang, Yongtian
2016-03-01
Unanticipated, reactive motion of the patient during skull-base tumor resection forces recalibration of the nasal endoscopic tracking system. To accommodate the calibration process to patient movement, this paper develops a Kinect-based real-time positional calibration method for a nasal endoscopic surgical navigation system. In this method, a Kinect scanner was employed to acquire a point-cloud volumetric reconstruction of the patient's head during surgery. Then, a convex-hull-based registration algorithm aligned the real-time image of the patient's head with a model built from CT scans acquired during preoperative preparation, dynamically recalibrating the tracking system whenever movement was detected. Experimental results confirmed the robustness of the proposed method, showing a total tracking error within 1 mm even under relatively violent motion. These results indicate that tracking accuracy can be retained stably and that the method can expedite recalibration of the tracking system under strongly interfering conditions, demonstrating suitability for a wide range of surgical applications.
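The motion-detection-and-recalibration loop can be illustrated with a deliberately simplified, translation-only sketch: compare the live head scan's centroid against the registered reference and, if it has moved beyond a tolerance, emit the offset that re-aligns the tracking frame. The real method uses convex-hull-based registration against the CT-derived model (which also recovers rotation); the point clouds and the 1 mm threshold here are illustrative:

```python
def centroid(points):
    """Mean of a 3-D point cloud."""
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def detect_and_correct_motion(reference_cloud, live_cloud, threshold_mm=1.0):
    """If the live scan's centroid has moved more than `threshold_mm`
    from the reference, report it and return the correcting offset."""
    ref_c = centroid(reference_cloud)
    live_c = centroid(live_cloud)
    offset = tuple(l - r for l, r in zip(live_c, ref_c))
    moved = sum(o * o for o in offset) ** 0.5 > threshold_mm
    return moved, offset

# Head cloud shifted by 3 mm along x: movement is detected, offset recovered.
ref = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 0.0, 10.0)]
live = [(x + 3.0, y, z) for x, y, z in ref]
print(detect_and_correct_motion(ref, live))  # → (True, (3.0, 0.0, 0.0))
```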
Jaafar, Haryati; Ibrahim, Salwani; Ramli, Dzati Athiar
2015-01-01
Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smartphones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region-of-interest (ROI) extraction method are discussed. A sliding neighborhood operation with local histogram equalization, followed by local adaptive thresholding (the LHEAT approach), was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, the improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%. PMID:26113861
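The classifier's two ideas (trim outliers to shrink the training set, then decide by centroid proximity) can be sketched in a strongly simplified form. This is not the paper's IFkNCN: the fuzzy weighting and neighbor-centroid selection are omitted, and the 2-D "feature vectors", user labels, and 0.8 keep-ratio are invented:

```python
def centroid(samples):
    """Component-wise mean of a list of feature vectors."""
    dim = len(samples[0])
    return [sum(s[k] for s in samples) / len(samples) for k in range(dim)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def trim_outliers(samples, keep_ratio=0.8):
    """Drop the samples farthest from their class centroid, shrinking
    the training data as the IFkNCN paper does to speed up matching."""
    c = centroid(samples)
    ranked = sorted(samples, key=lambda s: dist(s, c))
    return ranked[:max(1, int(len(ranked) * keep_ratio))]

def classify(query, training):
    """Nearest-centroid rule over the trimmed classes."""
    trimmed = {label: trim_outliers(samples) for label, samples in training.items()}
    return min(trimmed, key=lambda label: dist(query, centroid(trimmed[label])))

# Toy 2-D palm-print feature vectors for two enrolled users.
training = {
    "user_A": [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (5.0, 5.0)],  # last is an outlier
    "user_B": [(4.0, 4.2), (4.1, 3.9), (3.9, 4.0)],
}
print(classify((1.1, 1.0), training))  # → user_A
```

Trimming removes user_A's (5.0, 5.0) outlier before its centroid is computed, which is what makes the subsequent centroid comparison both faster and more robust.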
Dzyubachyk, Oleh; Essers, Jeroen; van Cappellen, Wiggert A; Baldeyron, Céline; Inagaki, Akiko; Niessen, Wiro J; Meijering, Erik
2010-10-01
Complete, accurate and reproducible analysis of intracellular foci from fluorescence microscopy image sequences of live cells requires full automation of all processing steps involved: cell segmentation and tracking followed by foci segmentation and pattern analysis. Integrated systems for this purpose are lacking. Extending our previous work in cell segmentation and tracking, we developed a new system for performing fully automated analysis of fluorescent foci in single cells. The system was validated by applying it to two common tasks: intracellular foci counting (in DNA damage repair experiments) and cell-phase identification based on foci pattern analysis (in DNA replication experiments). Experimental results show that the system performs comparably to expert human observers. Thus, it may replace tedious manual analyses for the considered tasks, and enables high-content screening. The described system was implemented in MATLAB (The MathWorks, Inc., USA) and compiled to run within the MATLAB environment. The routines together with four sample datasets are available at http://celmia.bigr.nl/. The software is planned for public release, free of charge for non-commercial use, after publication of this article.
Algorithms for detection of objects in image sequences captured from an airborne imaging system
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak
1995-01-01
This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding the pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
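The position/velocity estimation step can be sketched with a one-dimensional constant-velocity Kalman filter. The paper uses an extended Kalman filter on full world coordinates; this linear toy version, with invented process noise `q` and measurement noise `r`, only illustrates the predict/update cycle:

```python
def kalman_cv(measurements, dt=1.0, q=0.01, r=1.0):
    """1-D constant-velocity Kalman filter, state = [position, velocity].
    Returns the filtered (position, velocity) after each measurement."""
    x = [measurements[0], 0.0]              # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]            # initial covariance
    out = []
    for z in measurements:
        # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with a position measurement z (H = [1, 0]).
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(tuple(x))
    return out

# An object moving at 2 units/frame with clean position readings:
track = kalman_cv([2.0 * t for t in range(20)])
print(track[-1])  # velocity estimate converges toward 2
```

In the airborne setting the measurement comes from the segmented image region projected into world coordinates, and the state carries 3-D position and velocity, but the predict/update structure is the same.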
Poludniowski, Gavin; Webb, Steve; Evans, Philip M
2012-03-01
Artifacts in treatment-room cone-beam reconstructions have been observed at the authors' center when cone-beam acquisition is simultaneous with radio frequency (RF) transponder tracking using the Calypso 4D system (Calypso Medical, Seattle, WA). These artifacts manifest as CT-number modulations and increased CT-noise. The authors present a method for the suppression of the artifacts. The authors propose a three-stage postprocessing technique that can be applied to image volumes previously reconstructed by a cone-beam system. The stages are (1) segmentation of voxels into air, soft-tissue, and bone; (2) application of a 2D spatial-filter in the axial plane to the soft-tissue voxels; and (3) normalization to remove streaking along the axial-direction. The algorithm was tested on patient data acquired with Synergy XVI cone-beam CT systems (Elekta, Crawley, United Kingdom). The computational demands of the suggested correction are small, taking less than 15 s per cone-beam reconstruction on a desktop PC. For a moderate loss of spatial-resolution, the artifacts are strongly suppressed and low-contrast visibility is improved. The correction technique proposed is fast and effective in removing the artifacts caused by simultaneous cone-beam imaging and RF-transponder tracking.
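The three-stage correction can be sketched on a single axial slice. The HU thresholds, 3x3 median window, and toy speckle pattern below are illustrative assumptions, and the third stage (axial-direction normalisation to remove streaking) is omitted for brevity:

```python
def suppress_artifacts(slice_hu, soft_lo=-300, soft_hi=300):
    """Sketch of the correction on one axial slice of HU values:
    (1) label voxels as air / soft tissue / bone by thresholding,
    (2) spatially filter (3x3 median) only the soft-tissue voxels,
    leaving air and bone untouched."""
    rows, cols = len(slice_hu), len(slice_hu[0])
    out = [row[:] for row in slice_hu]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            v = slice_hu[i][j]
            if not (soft_lo <= v <= soft_hi):
                continue  # stage 1: only soft tissue is filtered
            window = sorted(slice_hu[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]  # stage 2: 3x3 median
    return out

# Soft tissue (0 HU) with RF-interference-like speckle (+200 HU):
noisy = [
    [0, 0, 0, 0, 0],
    [0, 200, 0, 200, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(suppress_artifacts(noisy)[1])  # → [0, 0, 0, 0, 0]
```

Restricting the filter to the soft-tissue class is what buys the noise suppression at only a moderate cost in spatial resolution, since bone edges are never smoothed.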
Design, manufacturing and testing of a four-mirror telescope with a wide field of view
NASA Astrophysics Data System (ADS)
Gloesener, P.; Wolfs, F.; Lemagne, F.; Cola, M.; Flebus, C.; Blanchard, G.; Kirschner, V.
2017-11-01
For Earth observation missions, the importance of wide-field-of-view optical instruments for spectral imaging no longer needs to be pointed out. Taking advantage of the pushbroom instrument concept, with its linear field across the on-ground track, it is particularly relevant to consider front-end optical configurations based on an all-reflective system, which offers inherent and dedicated advantages such as achromaticity, absence of obscuration, and compactness, while ensuring the required image quality over the whole field. The attractiveness of the concept must be balanced against state-of-the-art mirror manufacturing technologies, as the need for fast, broadband, wide-field systems increases the constraints on the feasibility of each individual component. As part of an ESTEC contract, AMOS designed, manufactured and tested a breadboard of a four-mirror wide-field telescope for typical Earth observation superspectral missions. The initial purpose of the development was to assess the feasibility of a telecentric spaceborne three-mirror system covering an unobscured rectangular field of view of 26 degrees across track (ACT) by 6 degrees along track (ALT), with an f-number of 3.5 and a focal length of 500 mm, and presenting an overall image quality better than 100 nm RMS wavefront error over the whole field.
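The quoted first-order parameters fix the entrance pupil and the focal-plane extent directly. Assuming a distortion-free focal plane (so that image height is simply f·tan of the half-field angle), the numbers work out as:

```python
import math

f = 500.0        # focal length, mm
N = 3.5          # f-number
fov_act = 26.0   # across-track full field, degrees
fov_alt = 6.0    # along-track full field, degrees

pupil = f / N                                        # entrance pupil diameter
w_act = 2 * f * math.tan(math.radians(fov_act / 2))  # focal-plane extent, ACT
w_alt = 2 * f * math.tan(math.radians(fov_alt / 2))  # focal-plane extent, ALT

print(f"pupil ≈ {pupil:.1f} mm")                      # ≈ 142.9 mm
print(f"focal plane ≈ {w_act:.0f} x {w_alt:.0f} mm")  # ≈ 231 x 52 mm
```

A roughly 143 mm pupil feeding a 231 mm-wide telecentric focal plane is what makes the individual mirror figures so demanding to manufacture.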
Accurate and ergonomic method of registration for image-guided neurosurgery
NASA Astrophysics Data System (ADS)
Henderson, Jaimie M.; Bucholz, Richard D.
1994-05-01
There has been considerable interest in the development of frameless stereotaxy based upon scalp-mounted fiducials. In practice we have experienced difficulty in relating markers to the image data sets in our series of 25 frameless cases, as well as inaccuracy due to scalp movement and the size of the markers. We have developed an alternative system for accurately and conveniently achieving surgical registration for image-guided neurosurgery, based on alignment and matching of patient forehead contours. The system consists of a laser contour digitizer which is used in the operating room to acquire forehead contours, editing software for extracting contours from patient image data sets, and a contour-match algorithm for aligning the two contours and performing data set registration. The contour digitizer is tracked by a camera array which relates its position to light emitting diodes placed on the head clamp. Once registered, surgical instruments can be tracked throughout the procedure. Contours can be extracted from either CT or MRI image datasets. The system has proven to be robust in the laboratory setting. Overall error of registration is 1 - 2 millimeters in routine use. Image-to-patient registration can therefore be achieved quite easily and accurately, without the need for fixation of external markers to the skull, or manually finding markers on the scalp and image datasets. The system is unobtrusive and imposes little additional effort on the neurosurgeon, broadening the appeal of image-guided surgery.
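The contour-matching step can be illustrated with the closed-form least-squares rigid alignment of two 2-D point sets. The paper's algorithm must also establish correspondences between the digitized and image-derived contours; the sketch below assumes correspondences are already known, and the parabolic "forehead arc" and 10-degree pose are invented:

```python
import math

def align_contours(source, target):
    """Least-squares rigid alignment (rotation + translation) between two
    2-D contours with known point correspondences."""
    n = len(source)
    sc = (sum(p[0] for p in source) / n, sum(p[1] for p in source) / n)
    tc = (sum(p[0] for p in target) / n, sum(p[1] for p in target) / n)
    s = [(x - sc[0], y - sc[1]) for x, y in source]
    t = [(x - tc[0], y - tc[1]) for x, y in target]
    num = sum(sx * ty - sy * tx for (sx, sy), (tx, ty) in zip(s, t))
    den = sum(sx * tx + sy * ty for (sx, sy), (tx, ty) in zip(s, t))
    theta = math.atan2(num, den)
    return theta, sc, tc  # rotate about sc by theta, then translate sc -> tc

# A forehead-like arc rotated by 10 degrees and shifted: recover the pose.
src = [(x, 0.05 * x * x) for x in range(-5, 6)]
ang = math.radians(10.0)
dst = [(math.cos(ang) * x - math.sin(ang) * y + 2.0,
        math.sin(ang) * x + math.cos(ang) * y - 1.0) for x, y in src]
theta, _, _ = align_contours(src, dst)
print(round(math.degrees(theta), 1))  # → 10.0
```

The recovered pose maps the operating-room contour onto the image-space contour, which is exactly the transform needed to register instrument positions to the CT or MRI dataset.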
NASA Astrophysics Data System (ADS)
Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.
2017-08-01
This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. The proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
Aguzzi, Jacopo; Sbragaglia, Valerio; Sarriá, David; García, José Antonio; Costa, Corrado; del Río, Joaquín; Mànuel, Antoni; Menesatti, Paolo; Sardà, Francesc
2011-01-01
Radio frequency identification (RFID) devices are currently used to quantify several traits of animal behaviour with potential applications for the study of marine organisms. To date, behavioural studies with marine organisms are rare because of the technical difficulty of propagating radio waves within the saltwater medium. We present a novel RFID tracking system to study the burrowing behaviour of a valuable fishery resource, the Norway lobster (Nephrops norvegicus L.). The system consists of a network of six controllers, each handling a group of seven antennas. That network was placed below a microcosm tank that recreated important features typical of Nephrops' grounds, such as the presence of multiple burrows. The animals carried a passive transponder attached to their telson, operating at 13.56 MHz. The tracking system was implemented to concurrently report the behaviour of up to three individuals, in terms of their travelled distances in a specified unit of time and their preferential positioning within the antenna network. To do so, the controllers worked in parallel to send the antenna data to a computer via a USB connection. The tracking accuracy of the system was evaluated by concurrently recording the animals' behaviour with automated video imaging. During the two experiments, each lasting approximately one week, two different groups of three animals each showed a variable burrow occupancy and a nocturnal displacement under a standard photoperiod regime (12 h light:12 h dark), measured using the RFID method. Similar results were obtained with the video imaging. Our implemented RFID system was therefore capable of efficiently tracking the tested organisms and has a good potential for use on a wide variety of other marine organisms of commercial, aquaculture, and ecological interest. PMID:22163710
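The travelled-distance measure reported by the system can be sketched from the raw antenna reads. The grid layout, positions in cm, antenna IDs, and read sequence below are all invented; the abstract only states that detections come from a network of antennas under the tank:

```python
def travelled_distance(detections, antenna_xy):
    """Total path length of one tagged animal, reconstructed from the
    time-ordered sequence of antennas that read its transponder."""
    path = [antenna_xy[a] for _, a in sorted(detections)]
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

# A hypothetical grid of antennas under the tank (positions in cm).
antenna_xy = {f"A{i}": (40.0 * (i % 4), 40.0 * (i // 4)) for i in range(8)}
# (timestamp_s, antenna_id) reads for one animal during a dark phase:
reads = [(0, "A0"), (60, "A1"), (120, "A2"), (180, "A6")]
print(travelled_distance(reads, antenna_xy))  # → 120.0 (cm)
```

Binning such distances by hour of day is what reveals the nocturnal displacement pattern the study reports, and counting dwell time per antenna gives the preferential-positioning (burrow occupancy) measure.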
Detection technique of targets for missile defense system
NASA Astrophysics Data System (ADS)
Guo, Hua-ling; Deng, Jia-hao; Cai, Ke-rong
2009-11-01
Ballistic missile defense system (BMDS) is a weapon system for intercepting enemy ballistic missiles. It includes a ballistic-missile warning system, a target discrimination system, anti-ballistic-missile guidance systems, and a command-control communication system. Infrared imaging detection and laser imaging detection are widely used in BMDS for surveillance, target detection, target tracking, and target discrimination. Based on a comprehensive review of the application of target-detection techniques in the missile defense system, including infrared focal plane arrays (IRFPA), ground-based radar detection technology, and 3-dimensional imaging laser radar with photon-counting avalanche photodiode (APD) arrays and microchip lasers, this paper focuses on the infrared and laser imaging detection techniques in the missile defense system, as well as the trends for their future development.
Solar Storms, Devils, Dunes, and Gullies
NASA Technical Reports Server (NTRS)
2003-01-01
[figure removed for brevity, see original site] Released 12 December 2003. Man, there sure is a lot going on here! This image was acquired during the peak of the late October record-breaking solar storm outbursts. The white dots in this image were in fact caused when the charged particles from the sun hit our camera. One can also see the enigmatic gullies, dark barchan sand dunes and numerous dust devil tracks. This image is in the Noachis region of the heavily cratered southern hemisphere. Image information: VIS instrument. Latitude -42.1, Longitude 328.2 East (31.8 West). 19 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.