Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system can also focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of this work reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a person's face clip, for recognition purposes.
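The step of translating a detection from a calibrated camera's image plane into the shared map reference frame can be illustrated with a ground-plane homography. The sketch below is an assumption-laden illustration, not the authors' implementation; the calibration point pairs and function names are placeholders.

```python
# Minimal sketch (not the authors' implementation): project a detection from a calibrated
# client camera's image plane onto the shared ground-plane map, so both sensors can report
# positions in the same reference system for data fusion. Point pairs are illustrative.
import numpy as np
import cv2

# Calibration: known correspondences between image pixels and map coordinates (metres).
image_pts = np.float32([[120, 460], [510, 455], [600, 250], [90, 240]])
map_pts = np.float32([[0.0, 0.0], [8.0, 0.0], [10.0, 12.0], [-1.0, 12.0]])
H, _ = cv2.findHomography(image_pts, map_pts)

def to_map(foot_point_px):
    """Map the lowest point of a detected blob (assumed on the ground plane) to map coordinates."""
    p = np.float32([[foot_point_px]])          # shape (1, 1, 2) as required by perspectiveTransform
    return cv2.perspectiveTransform(p, H)[0, 0]

print(to_map((300, 430)))  # map coordinates sent to the server camera for pointing/zooming
```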
Face pose tracking using the four-point algorithm
NASA Astrophysics Data System (ADS)
Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen
2017-06-01
In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
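A hedged sketch of the pipeline described above follows: facial landmarks are detected with dlib and head pose is recovered from four 2D-3D point correspondences using OpenCV's PnP solver as a stand-in for the paper's four-point algorithm. The 3D model coordinates, landmark indices, and model file name are assumptions for illustration.

```python
# Sketch only: dlib landmark detection followed by a four-point pose estimate via cv2.solvePnP.
# The 3D model points, landmark indices, and predictor file are assumed values.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

# Approximate 3D positions (mm) of nose tip, chin, left eye corner, right eye corner.
model_pts = np.float32([[0, 0, 0], [0, -63, -12], [-43, 32, -26], [43, 32, -26]])
landmark_ids = [30, 8, 36, 45]  # corresponding indices in the 68-point landmark scheme

def estimate_pose(gray, camera_matrix):
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    image_pts = np.float32([[shape.part(i).x, shape.part(i).y] for i in landmark_ids])
    ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, camera_matrix, None,
                                  flags=cv2.SOLVEPNP_EPNP)  # EPnP accepts 4+ correspondences
    return (rvec, tvec) if ok else None
```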
Technology survey on video face tracking
NASA Astrophysics Data System (ADS)
Zhang, Tong; Gomes, Herman Martins
2014-03-01
With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey on literature and software that are published or developed during recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2016-05-01
We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain high probability of detection and low probability of false alarm for full-body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting an RGB to YIQ transformation and then applying a subtractive I/Q image fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-parts (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back to individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-parts positions and angles related to the full-body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced with our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance the face resolution for improved human face recognition performance.
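The RGB-to-YIQ step and the subtractive I/Q fusion can be illustrated with a short sketch. The fusion rule (I minus Q) and the threshold below are assumptions, not the paper's tuned values.

```python
# Illustrative sketch of the skin-segmentation idea: convert RGB to YIQ, fuse the I and Q
# channels subtractively, threshold, and clean up with morphological operations.
import numpy as np
import cv2

def skin_mask(bgr):
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = 0.596 * r - 0.274 * g - 0.322 * b   # I channel (orange-cyan axis, strong for skin)
    q = 0.211 * r - 0.523 * g + 0.312 * b   # Q channel (purple-green axis)
    fused = i - q                            # assumed "subtractive I/Q fusion" rule
    mask = (fused > 0.05).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask
```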
Appearance-based multimodal human tracking and identification for healthcare in the digital home.
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-08-05
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
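The track-based majority voting used for identification can be sketched in a few lines. This is an assumed, simplified illustration of the idea (per-frame, per-modality votes pooled over a track), not the authors' code.

```python
# Simple sketch of track-based majority voting: each frame of a track yields per-modality
# identity votes, and the track-level identity is the most frequent label.
from collections import Counter

def track_identity(frame_votes):
    """frame_votes: list of (face_id, appearance_id, silhouette_id) tuples; None = no decision."""
    counts = Counter(v for frame in frame_votes for v in frame if v is not None)
    if not counts:
        return None
    label, _ = counts.most_common(1)[0]
    return label

# Example: the track is assigned "alice" even though individual frames disagree.
votes = [("alice", "alice", None), ("bob", "alice", "alice"), (None, "alice", "bob")]
print(track_identity(votes))  # -> "alice"
```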
NASA Astrophysics Data System (ADS)
Lemoff, Brian E.; Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; McCormick, William; Ice, Robert
2013-06-01
The capability to positively and covertly identify people at a safe distance, 24-hours per day, could provide a valuable advantage in protecting installations, both domestically and in an asymmetric warfare environment. This capability would enable installation security officers to identify known bad actors from a safe distance, even if they are approaching under cover of darkness. We will describe an active-SWIR imaging system being developed to automatically detect, track, and identify people at long range using computer face recognition. The system illuminates the target with an eye-safe and invisible SWIR laser beam, to provide consistent high-resolution imagery night and day. SWIR facial imagery produced by the system is matched against a watch-list of mug shots using computer face recognition algorithms. The current system relies on an operator to point the camera and to review and interpret the face recognition results. Automation software is being developed that will allow the system to be cued to a location by an external system, automatically detect a person, track the person as they move, zoom in on the face, select good facial images, and process the face recognition results, producing alarms and sharing data with other systems when people are detected and identified. Progress on the automation of this system will be presented along with experimental night-time face recognition results at distance.
Real-time camera-based face detection using a modified LAMSTAR neural network system
NASA Astrophysics Data System (ADS)
Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.
2003-03-01
This paper describes a cost-effective, real-time (640x480 at 30Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted for auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after achieving image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation-links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.
Joint Transform Correlation for face tracking: elderly fall detection application
NASA Astrophysics Data System (ADS)
Katz, Philippe; Aron, Michael; Alfalou, Ayman
2013-03-01
In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane where the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). In an effort to validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out in order to quantify their effects on the tracking performances (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...). (ii) the tracking algorithm is integrated into an application of elderly fall detection. The first reference image is a face detected by means of Haar descriptors, and then localized into the new video image thanks to our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm. This step ensures a robust tracking of the reference frame. This article focuses on face tracking step optimisation and evaluation. A supplementary step of fall detection, based on vertical acceleration and position, will be added and studied in further work.
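The core correlation-plane computation of a non-linear JTC can be sketched digitally with FFTs. This is a minimal sketch of the classical JTC formulation (reference and scene placed side by side in the input plane); the non-linearity exponent k is an assumed parameter, not the paper's tuned value.

```python
# Minimal numpy sketch of a non-linear joint transform correlation step.
import numpy as np

def jtc_correlation(reference, scene, k=0.5):
    """Return the correlation plane of two equally sized grayscale patches."""
    h, w = reference.shape
    joint = np.zeros((h, 2 * w), dtype=np.float32)
    joint[:, :w] = reference          # reference image in the left half of the input plane
    joint[:, w:] = scene              # target/scene image in the right half
    jps = np.abs(np.fft.fft2(joint)) ** 2   # joint power spectrum
    jps_nl = jps ** k                        # non-linear JTC: compress the spectrum
    corr = np.abs(np.fft.ifft2(jps_nl))      # correlation plane
    return np.fft.fftshift(corr)             # cross-correlation peaks flank the centre

# The offset of the cross-correlation peak from the centre gives the target displacement,
# which can then be used to update the reference patch for the next frame.
```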
Efficient human face detection in infancy.
Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A
2016-01-01
Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.
Webcam mouse using face and eye tracking in various illumination environments.
Lin, Yuan-Pin; Chao, Yi-Ping; Lin, Chung-Chih; Chen, Jyh-Horng
2005-01-01
Nowadays, due to enhancements in computer performance and the popular usage of webcam devices, it has become possible to acquire users' gestures for human-computer interfaces on a PC via webcam. However, the effects of illumination variation can dramatically decrease the stability and accuracy of a skin-based face tracking system, especially for a notebook or portable platform. In this study we present an effective illumination recognition technique, combining a K-Nearest Neighbor classifier and an adaptive skin model, to realize a real-time tracking system. We have demonstrated that the accuracy of face detection based on the KNN classifier is higher than 92% in various illumination environments. In the real-time implementation, the system successfully tracks the user's face and eye features at 15 fps on standard notebook platforms. Although the KNN classifier is initialized with only five environments at the preliminary stage, the system permits users to define and add their favorite environments to the KNN for computer access. Eventually, based on this efficient tracking algorithm, we have developed a "Webcam Mouse" system to control the PC cursor using face and eye tracking. Preliminary studies on "point and click" style PC web games also show promising applications in consumer electronics markets in the future.
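The illumination-recognition step can be illustrated with a small K-Nearest-Neighbor classifier over simple global image statistics; the predicted environment would then select an adapted skin model. The feature choice, training values, and environment labels below are assumptions for illustration only.

```python
# Hedged sketch of KNN-based illumination environment recognition.
import numpy as np
import cv2
from sklearn.neighbors import KNeighborsClassifier

def illumination_features(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return [hsv[..., 2].mean(), hsv[..., 2].std(), hsv[..., 1].mean()]  # brightness/saturation stats

# Five example environments registered at start-up; users may append their own later.
train_X = np.array([[40, 15, 60], [90, 25, 70], [150, 30, 80], [200, 20, 50], [120, 50, 90]])
train_y = ["dim", "indoor", "office", "daylight", "mixed"]

knn = KNeighborsClassifier(n_neighbors=1).fit(train_X, train_y)

def classify_environment(frame_bgr):
    return knn.predict([illumination_features(frame_bgr)])[0]
```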
Driver face tracking using semantics-based feature of eyes on single FPGA
NASA Astrophysics Data System (ADS)
Yu, Ying-Hao; Chen, Ji-An; Ting, Yi-Siang; Kwok, Ngaiming
2017-06-01
Tracking the driver's face is essential for driving safety control. Systems of this kind are usually designed with complicated algorithms that recognize the driver's face by means of powerful computers. The design challenge concerns not only the detection rate but also component damage in harsh environments caused by vibration, heat, and humidity. A feasible strategy to counteract these damages is to integrate the entire system into a single chip in order to achieve minimum installation dimension, weight, power consumption, and exposure to air. Meanwhile, an extraordinary methodology is also indispensable to overcome the dilemma of low computing capability versus real-time performance on a low-end chip. In this paper, a novel driver face tracking system is proposed that employs semantics-based vague image representation (SVIR) for minimum hardware resource usage on an FPGA, while real-time performance is guaranteed at the same time. Our experimental results indicate that the proposed face tracking system is viable and promising for smart car design in the future.
Facial recognition in education system
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish
2017-11-01
Human beings exploit emotions extensively for conveying messages and resolving them. Emotion detection and face recognition can provide an interface between individuals and technologies. Face recognition is among the most successful applications of recognition analysis. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking face points and distances. The method can automatically identify a subject's face movements and facial expression in an image, capturing different aspects of emotion and facial expressions.
Energy conservation using face detection
NASA Astrophysics Data System (ADS)
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based, and color-based methods. Basically, the video of the subject is converted into images that are then selected manually for processing. However, several factors, such as poor illumination, face movement, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts, make face detection difficult. This paper reports an algorithm for energy conservation using face detection for various devices. It suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular image region where the face is located using histogram equalization.
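A minimal OpenCV sketch of this idea follows, assuming a standard Haar cascade detector and an illustrative dimming factor; it is not the paper's implementation.

```python
# Illustrative sketch: detect the face with a Haar cascade, dim the whole frame,
# then restore/equalize brightness only inside the face region.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def conserve_energy(frame_bgr, dim_factor=0.4):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = (frame_bgr.astype(np.float32) * dim_factor).astype(np.uint8)  # dim entire display
    for (x, y, w, h) in faces:
        roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2YCrCb)
        roi[..., 0] = cv2.equalizeHist(roi[..., 0])        # equalize luminance in the face area
        out[y:y + h, x:x + w] = cv2.cvtColor(roi, cv2.COLOR_YCrCb2BGR)
    return out
```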
A Fuzzy Approach for Facial Emotion Recognition
NASA Astrophysics Data System (ADS)
Gîlcă, Gheorghe; Bîzdoacă, Nicu-George
2015-09-01
This article deals with an emotion recognition system based on fuzzy sets. Human faces are detected in images with the Viola-Jones algorithm, and the Camshift algorithm is used to track them in video sequences. The detected human faces are passed to the decisional fuzzy system, which is based on the fuzzification of variable measurements of the face: the eyebrows, eyelids and mouth. The system can easily determine the emotional state of a person.
An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera
NASA Astrophysics Data System (ADS)
Kumar, K. S. Chidanand; Bhowmick, Brojeshwar
A driver drowsiness identification system is proposed that generates alarms when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is first determined using the Euler number and template matching. The pupils are then located in the face region. In subsequent video frames, the pupils are tracked in order to determine whether the eyes are open or closed. If the eyes are closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
Three-dimensional face pose detection and tracking using monocular videos: tool and application.
Dornaika, Fadi; Raducanu, Bogdan
2009-08-01
Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods, initialization and tracking, to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation, such as telepresence, virtual reality, and video games, can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Directional templates for real-time detection of coronal axis rotated faces
NASA Astrophysics Data System (ADS)
Perez, Claudio A.; Estevez, Pablo A.; Garate, Patricio
2004-10-01
Real-time face and iris detection in video images has gained renewed attention because of multiple possible applications in studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing and multimedia retrieval. In this paper, a study is presented on using directional templates for the detection of faces rotated about the coronal axis. The templates are built by extracting the directional image information from the regions of the eyes, nose and mouth. The face position is determined by computing a line integral using the templates over the face directional image. The line integral reaches a maximum when it coincides with the face position. An improvement in localization selectivity is shown through the increased value of the line integral computed with the directional template. In addition, improvements in the line integral value with respect to face size and face rotation angle were also found. Based on these results, the new templates should improve selectivity and hence provide the means to restrict computation to a smaller number of templates and to restrict the region of search during the face and eye tracking procedure. The proposed method runs in real time, is completely non-invasive, and was applied with no background limitations under normal indoor illumination conditions.
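One plausible reading of scoring a directional template against a face "directional image" (a gradient-orientation map) is sketched below: the line integral is approximated as the summed cosine agreement between template orientations and image orientations over the template support. This is an interpretation of the description above, not the authors' exact formulation.

```python
# Rough sketch: directional image from Sobel gradients, scored against a directional template.
import numpy as np
import cv2

def directional_image(gray):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return np.arctan2(gy, gx)  # per-pixel edge orientation

def line_integral_score(orient_img, template_orient, template_mask, top_left):
    y, x = top_left
    h, w = template_orient.shape
    patch = orient_img[y:y + h, x:x + w]
    # Cosine of orientation difference, summed only where the template is defined.
    return float(np.sum(np.cos(patch - template_orient) * template_mask))
```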
24/7 security system: 60-FPS color EMCCD camera with integral human recognition
NASA Astrophysics Data System (ADS)
Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.
2007-04-01
An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
Motion correction for improved estimation of heart rate using a visual spectrum camera
NASA Astrophysics Data System (ADS)
Tarbox, Elizabeth A.; Rios, Christian; Kaur, Balvinder; Meyer, Shaun; Hirt, Lauren; Tran, Vy; Scott, Kaitlyn; Ikonomidou, Vasiliki
2017-05-01
Heart rate measurement using a visual spectrum recording of the face has drawn interest over the last few years as a technology that can have various health and security applications. In our previous work, we have shown that it is possible to estimate the heart beat timing accurately enough to perform heart rate variability analysis for contactless stress detection. However, a major confounding factor in this approach is the presence of movement, which can interfere with the measurements. To mitigate the effects of movement, in this work we propose the use of face detection and tracking based on the Karhunen-Loève algorithm in order to counteract measurement errors introduced by normal subject motion, as expected during a common seated conversation setting. We analyze the requirements on image acquisition for the algorithm to work, and its performance under different ranges of motion and changes of distance to the camera, as well as the effect on the acquired signal of illumination changes due to different positioning with respect to light sources. Our results suggest that the effect of face tracking on visual-spectrum-based cardiac signal estimation depends on the amplitude of the motion. While for larger-scale, conversation-induced motion it can significantly improve estimation accuracy, for smaller-scale movements, such as those caused by breathing or talking without major movement, errors in facial tracking may interfere with signal estimation. Overall, employing facial tracking is a crucial step in adapting this technology to real-life situations with satisfactory results.
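A generic sketch of visual-spectrum heart-rate estimation (not the authors' pipeline) is given below: the green channel is averaged over the tracked face region in each frame, the resulting signal is band-passed to the plausible cardiac band, and the dominant FFT peak is read out. Filter order and band limits are assumed values.

```python
# Generic rPPG-style sketch: per-frame green means -> band-pass -> FFT peak -> bpm.
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_bpm(green_means, fps):
    """green_means: 1-D array of per-frame mean green values from the tracked face ROI."""
    x = np.asarray(green_means, dtype=np.float64)
    x = x - x.mean()
    b, a = butter(3, [0.7 / (fps / 2), 3.0 / (fps / 2)], btype="band")  # ~42-180 bpm band
    x = filtfilt(b, a, x)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return peak * 60.0
```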
Tracking the truth: the effect of face familiarity on eye fixations during deception.
Millen, Ailsa E; Hope, Lorraine; Hillstrom, Anne P; Vrij, Aldert
2017-05-01
In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.
Enhancing the performance of cooperative face detector by NFGS
NASA Astrophysics Data System (ADS)
Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba
2015-07-01
Computerized human face detection is an important task of deformable pattern recognition in today's world. Especially in cooperative authentication scenarios such as ATM fraud detection, attendance recording, video tracking and video surveillance, the performance of the face detection engine in terms of accuracy, memory utilization and speed has been an active area of research for the last decade. Haar-based face detection and SIFT- and EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted from gray-level textures. When the input is a high-resolution online video with a fairly large viewing area, the Haar detector needs to search for faces everywhere (say, 352×250 pixels) in every frame (e.g., 30 FPS capture all the time). In the current paper we propose to address both of the aforementioned scenarios with a neuro-visually inspired method of figure-ground segregation (NFGS) [5], which produces a two-dimensional binary array from the gray-level face image. The NFGS identifies the reference video frame at a low sampling rate and updates it upon significant changes in the environment, such as illumination. The proposed algorithm triggers the face detector only when a new entity appears in the viewing area. To address detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) provided by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (False Rejection) and FA (False Acceptance) rates of the face detection system.
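The triggering idea can be sketched with a standard background subtractor standing in for the NFGS step: the Haar face detector runs only inside the bounding box of newly changed pixels rather than over the full frame every time. Detector files, subtractor parameters, and the area threshold are assumptions.

```python
# Hedged sketch of change-triggered, RoI-restricted face detection.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def detect_in_changed_roi(frame_bgr, min_area=500):
    mask = bg.apply(frame_bgr)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    faces = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                          # ignore small changes; detector stays idle
        x, y, w, h = cv2.boundingRect(c)
        roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        for (fx, fy, fw, fh) in cascade.detectMultiScale(roi, 1.1, 5):
            faces.append((x + fx, y + fy, fw, fh))
    return faces
```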
Assessing the performance of a motion tracking system based on optical joint transform correlation
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.
2015-08-01
We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with an adapted pre-processing treatment of the input plane and a post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC)-based system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.
Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences
Liu, Yun; Wang, Chuanxu; Zhang, Shujun; Cui, Xuehong
2016-01-01
Pedestrian tracking is a critical problem in the field of computer vision. Particle filters have been proven to be very useful in pedestrian tracking for nonlinear and non-Gaussian estimation problems. However, pedestrian tracking in complex environments still faces many problems due to changes in pedestrian posture and scale, moving backgrounds, mutual occlusion, and the appearance and disappearance of pedestrians. To surmount these difficulties, this paper presents a tracking algorithm for multiple pedestrians based on particle filters in video sequences. The algorithm acquires confidence values for the object and the background by extracting a priori knowledge in order to achieve multi-pedestrian detection; it incorporates color and texture features into the particle filter to obtain better observation results and then automatically adjusts the weight of each feature according to the current tracking environment. During tracking, the algorithm handles severe occlusion to prevent the drift and loss caused by object occlusion, and associates detection results with the particle state in a discriminative method for object disappearance and emergence, thus achieving robust tracking of multiple pedestrians. Experimental verification and analysis on video sequences demonstrate that the proposed algorithm improves tracking performance and produces better tracking results. PMID:27847514
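A condensed, single-target sketch of a color-histogram particle filter in the spirit of the algorithm above follows; the motion noise, particle count, window size, and likelihood scaling are assumed values, and texture features are omitted for brevity.

```python
# Condensed particle-filter step: predict, weight by color-histogram similarity, resample.
import numpy as np
import cv2

def color_hist(patch):
    h = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(h, h).flatten()

def particle_filter_step(frame, particles, weights, ref_hist, win=(40, 80), sigma=8.0):
    n = len(particles)
    particles = particles + np.random.normal(0, sigma, particles.shape)   # predict (random walk)
    for i, (x, y) in enumerate(particles.astype(int)):
        x = np.clip(x, 0, frame.shape[1] - win[0]); y = np.clip(y, 0, frame.shape[0] - win[1])
        patch = frame[y:y + win[1], x:x + win[0]]
        dist = cv2.compareHist(ref_hist, color_hist(patch), cv2.HISTCMP_BHATTACHARYYA)
        weights[i] = np.exp(-20.0 * dist ** 2)                            # color likelihood
    weights /= weights.sum()
    idx = np.random.choice(n, n, p=weights)                               # resample
    return particles[idx], np.full(n, 1.0 / n), particles[idx].mean(axis=0)  # state estimate
```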
Multisensor-based human detection and tracking for mobile service robots.
Bellotto, Nicola; Hu, Huosheng
2009-02-01
One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to also be very discriminative in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the legs' position using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
NASA Astrophysics Data System (ADS)
Morishima, Shigeo; Nakamura, Satoshi
2004-12-01
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking of face motion in the video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian's face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimensions of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
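The feature pipeline described above can be illustrated with a small Gabor filter bank sampled on a regular grid, followed by a nearest-neighbour decision under cosine similarity. Filter parameters, grid size, and function names are assumptions, not the paper's settings.

```python
# Hedged sketch: grid-sampled Gabor responses as features, cosine nearest-neighbour matching.
import numpy as np
import cv2

def gabor_features(face_gray, grid=8):
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):               # 4 orientations
        kern = cv2.getGaborKernel((15, 15), 4.0, theta, 10.0, 0.5)
        resp = cv2.filter2D(face_gray.astype(np.float32), cv2.CV_32F, kern)
        h, w = resp.shape
        for gy in range(0, h, h // grid):
            for gx in range(0, w, w // grid):
                feats.append(resp[gy, gx])                      # sample on a regular grid
    return np.array(feats)

def cosine_nearest(query, gallery_feats, gallery_labels):
    sims = [np.dot(query, g) / (np.linalg.norm(query) * np.linalg.norm(g) + 1e-9)
            for g in gallery_feats]
    return gallery_labels[int(np.argmax(sims))]
```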
Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing
2009-02-01
Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
Tracking and recognition face in videos with incremental local sparse representation model
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs local sparse appearance and a covariance pooling method. In the subsequent face recognition stage, with the employment of a novel template update strategy that combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
Single-Molecule Tracking and Its Application in Biomolecular Binding Detection.
Liu, Cong; Liu, Yen-Liang; Perillo, Evan P; Dunn, Andrew K; Yeh, Hsin-Chih
2016-01-01
In the past two decades significant advances have been made in single-molecule detection, which enables the direct observation of single biomolecules at work in real time and under physiological conditions. In particular, the development of single-molecule tracking (SMT) microscopy allows us to monitor the motion paths of individual biomolecules in living systems, unveiling the localization dynamics and transport modalities of the biomolecules that support the development of life. Beyond the capabilities of traditional camera-based tracking techniques, state-of-the-art SMT microscopies developed in recent years can record fluorescence lifetime while tracking a single molecule in the 3D space. This multiparameter detection capability can open the door to a wide range of investigations at the cellular or tissue level, including identification of molecular interaction hotspots and characterization of association/dissociation kinetics between molecules. In this review, we discuss various SMT techniques developed to date, with an emphasis on our recent development of the next generation 3D tracking system that not only achieves ultrahigh spatiotemporal resolution but also provides sufficient working depth suitable for live animal imaging. We also discuss the challenges that current SMT techniques are facing and the potential strategies to tackle those challenges.
De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey
2016-01-01
Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987
Optical Tracker For Longwall Coal Shearer
NASA Technical Reports Server (NTRS)
Poulsen, Peter D.; Stein, Richard J.; Pease, Robert E.
1989-01-01
Photographic record yields information for correction of vehicle path. Tracking system records lateral movements of longwall coal-shearing vehicle. System detects lateral and vertical deviations of path of vehicle moving along coal face, shearing coal as it goes. Rides on rails in mine tunnel, advancing on toothed track in one of rails. As vehicle moves, retroreflective mirror rides up and down on teeth, providing series of pulsed reflections to film recorder. Recorded positions of pulses, having horizontal and vertical orientations, indicate vertical and horizontal deviations, respectively, of vehicle.
Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho
2017-01-01
Individuals with Williams syndrome (WS) exhibit an atypical social phenotype termed hypersociability. One theory accounting for hypersociability presumes an atypical function of the amygdala, which processes fear-related information. However, evidence is lacking regarding the detection mechanisms of fearful faces for individuals with WS. Here, we introduce a visual search paradigm to elucidate the mechanisms for detecting fearful faces by evaluating search asymmetry, i.e., whether reaction times differ when the target and distractor face types are swapped. Eye movements reflect subtle atypical attentional properties, whereas manual responses are unable to capture atypical attentional profiles toward faces in individuals with WS. Therefore, we measured both eye movements and manual responses of individuals with WS and typically developed children and adults in visual searching for a fearful face among neutral faces or a neutral face among fearful faces. Two task measures, namely reaction time and performance accuracy, were analyzed for each stimulus, as well as gaze behavior and the initial fixation onset latency. Overall, reaction times in the WS group and the mentally age-matched control group were significantly longer than those in the chronologically age-matched group. We observed a search asymmetry effect in all groups: when a neutral target facial expression was presented among fearful faces, the reaction times were significantly prolonged in comparison with when a fearful target facial expression was displayed among neutral distractor faces. Furthermore, the first fixation onset latency of eye movement toward a target facial expression showed a similar tendency to the manual responses. Although overall responses in detecting fearful faces for individuals with WS are slower than those for control groups, search asymmetry was observed. Therefore, the cognitive mechanisms underlying the detection of fearful faces seem to be typical in individuals with WS. This finding is discussed with reference to the amygdala account explaining hypersociability in individuals with WS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Barhen, Jacob; Glover, Charles Wayne
2012-01-01
Multi-sensor networks may face resource limitations in a dynamically evolving multiple target tracking scenario. It is necessary to task the sensors efficiently so that the overall system performance is maximized within the system constraints. The central sensor resource manager may control the sensors to meet objective functions that are formulated to meet system goals such as minimization of track loss, maximization of probability of target detection, and minimization of track error. This paper discusses the variety of techniques that may be utilized to optimize sensor performance for either near term gain or future reward over a longer time horizon.
NORTH SIDE FACING TRACK, SHOWING ELECTRICAL BOX AND CONCRETE VAULT ...
NORTH SIDE FACING TRACK, SHOWING ELECTRICAL BOX AND CONCRETE VAULT - Edwards Air Force Base, South Base Sled Track, Electrical Distribution Station, South side of Sled Track, Lancaster, Los Angeles County, CA
Koene, S; Timmermans, J; Weijers, G; de Laat, P; de Korte, C L; Smeitink, J A M; Janssen, M C H; Kapusta, L
2017-03-01
Cardiomyopathy is a common complication of mitochondrial disorders, associated with increased mortality. Two dimensional speckle tracking echocardiography (2DSTE) can be used to quantify myocardial deformation. Here, we aimed to determine the usefulness of 2DSTE in detecting and monitoring subtle changes in myocardial dysfunction in carriers of the 3243A>G mutation in mitochondrial DNA. In this retrospective pilot study, 30 symptomatic and asymptomatic carriers of the mitochondrial 3243A>G mutation of whom two subsequent echocardiograms were available were included. We measured longitudinal, circumferential and radial strain using 2DSTE. Results were compared to published reference values. Speckle tracking was feasible in 90 % of the patients for longitudinal strain. Circumferential and radial strain showed low face validity (low number of images with sufficient quality; suboptimal tracking) and were therefore excluded from further analysis. Global longitudinal strain showed good face validity, and was abnormal in 56-70 % (depending on reference values used) of the carriers (n = 27). Reproducibility was good (mean difference of 0.83 for inter- and 0.40 for intra-rater reproducibility; ICC 0.78 and 0.89, respectively). The difference between the first and the second measurement exceeded the measurement variance in 39 % of the cases (n = 23; feasibility of follow-up 77 %). Even in data collected as part of clinical care, two-dimensional strain echocardiography seems a feasible method to detect and monitor subtle changes in longitudinal myocardial deformation in adult carriers of the mitochondrial 3243A>G mutation. Based on our data and the reported accuracy of global longitudinal strain in other studies, we suggest the use of global longitudinal strain in a prospective follow-up or intervention study.
How facial attractiveness affects sustained attention.
Li, Jie; Oksama, Lauri; Hyönä, Jukka
2016-10-01
The present study investigated whether and how facial attractiveness affects sustained attention. We adopted a multiple-identity tracking paradigm, using attractive and unattractive faces as stimuli. Participants were required to track moving target faces amid distractor faces and report the final location of each target. In Experiment 1, the attractive and unattractive faces differed in both the low-level properties (i.e., luminance, contrast, and color saturation) and high-level properties (i.e., physical beauty and age). The results showed that the attractiveness of both the target and distractor faces affected the tracking performance: The attractive target faces were tracked better than the unattractive target faces; when the targets and distractors were both unattractive male faces, the tracking performance was poorer than when they were of different attractiveness. In Experiment 2, the low-level properties of the facial images were equalized. The results showed that the attractive target faces were still tracked better than unattractive targets while the effects related to distractor attractiveness ceased to exist. Taken together, the results indicate that during attentional tracking the high-level properties related to the attractiveness of the target faces can be automatically processed, and then they can facilitate the sustained attention on the attractive targets, either with or without the supplement of low-level properties. On the other hand, only low-level properties of the distractor faces can be processed. When the distractors share similar low-level properties with the targets, they can be grouped together, so that it would be more difficult to sustain attention on the individual targets. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
3. NORTH FRONT, BULLET GLASS OBSERVATION WINDOWS FACE SLED TRACK. ...
3. NORTH FRONT, BULLET GLASS OBSERVATION WINDOWS FACE SLED TRACK. - Edwards Air Force Base, South Base Sled Track, Instrumentation & Control Building, South of Sled Track, Station "50" area, Lancaster, Los Angeles County, CA
NASA Astrophysics Data System (ADS)
Malone, Joseph D.; El-Haddad, Mohamed T.; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Tao, Yuankai K.
2016-03-01
Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) benefit clinical diagnostic imaging in ophthalmology by enabling in vivo noninvasive en face and volumetric visualization of retinal structures, respectively. Spectral encoding methods enable confocal imaging through fiber optics and reduce system complexity. Previous applications in ophthalmic imaging include spectrally encoded confocal scanning laser ophthalmoscopy (SECSLO) and a combined SECSLO-OCT system for image guidance, tracking, and registration. However, spectrally encoded imaging suffers from speckle noise because each spectrally encoded channel is effectively monochromatic. Here, we demonstrate in vivo human retinal imaging using a swept-source spectrally encoded scanning laser ophthalmoscope and OCT (SS-SESLO-OCT) at 1060 nm. The SS-SESLO-OCT uses a shared 100 kHz Axsun swept source and shared scanner and imaging optics, and both channels are detected simultaneously on a shared, dual-channel high-speed digitizer. SESLO illumination and detection were performed using the single-mode core and multimode inner cladding of a double-clad fiber coupler, respectively, to preserve lateral resolution while improving collection efficiency and reducing speckle contrast at the expense of confocality. Concurrent en face SESLO and cross-sectional OCT images were acquired with 1376 x 500 pixels at 200 frames per second. Our system design is compact and uses a shared light source, imaging optics, and digitizer, which reduces overall system complexity and ensures inherent co-registration between the SESLO and OCT FOVs. En face SESLO images acquired concurrently with OCT cross-sections enable lateral motion tracking and three-dimensional volume registration, with broad applications in multivolume OCT averaging, image mosaicking, and intraoperative instrument tracking.
Deng, Z. Daniel; Weiland, Mark A.; Fu, Tao; Seim, Tom A.; LaMarche, Brian L.; Choi, Eric Y.; Carlson, Thomas J.; Eppard, M. Brad
2011-01-01
In Part 1 of this paper, we presented the engineering design and instrumentation of the Juvenile Salmon Acoustic Telemetry System (JSATS) cabled system, a nonproprietary sensing technology developed by the U.S. Army Corps of Engineers, Portland District (Oregon, USA) to meet the needs for monitoring the survival of juvenile salmonids through the hydroelectric facilities within the Federal Columbia River Power System. Here in Part 2, we describe how the JSATS cabled system was employed as a reference sensor network for detecting and tracking juvenile salmon. Time-of-arrival data for valid detections on four hydrophones were used to solve for the three-dimensional (3D) position of fish surgically implanted with JSATS acoustic transmitters. Validation tests demonstrated high accuracy of 3D tracking up to 100 m upstream from the John Day Dam spillway. The along-dam component, used for assigning the route of fish passage, had the highest accuracy; the median errors ranged from 0.02 to 0.22 m, and root mean square errors ranged from 0.07 to 0.56 m at distances up to 100 m. For the 2008 case study at John Day Dam, the range for 3D tracking was more than 100 m upstream of the dam face where hydrophones were deployed, and detection and tracking probabilities of fish tagged with JSATS acoustic transmitters were higher than 98%. JSATS cabled systems have been successfully deployed on several major dams to acquire information for salmon protection and for development of more “fish-friendly” hydroelectric facilities. PMID:22163919
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Zhiqun; Weiland, Mark A.; Fu, Tao
2011-05-26
In Part 1 of this paper [1], we presented the engineering design and instrumentation of the Juvenile Salmon Acoustic Telemetry System (JSATS) cabled system, a nonproprietary technology developed by the U.S. Army Corps of Engineers, Portland District, to meet the needs for monitoring the survival of juvenile salmonids through the 31 dams in the Federal Columbia River Power System. Here in Part 2, we describe how the JSATS cabled system was employed as a reference sensor network for detecting and tracking juvenile salmon. Time-of-arrival data for valid detections on four hydrophones were used to solve for the three-dimensional (3D) position of fish surgically implanted with JSATS acoustic transmitters. Validation tests demonstrated high accuracy of 3D tracking up to 100 m from the John Day Dam spillway. The along-dam component, used for assigning the route of fish passage, had the highest accuracy; the median errors ranged from 0.06 to 0.22 m, and root mean square errors ranged from 0.05 to 0.56 m at distances up to 100 m. For the case study at John Day Dam during 2008, the range for 3D tracking was more than 100 m upstream of the dam face where hydrophones were deployed, and detection and tracking probabilities of fish tagged with JSATS acoustic transmitters were higher than 98%. JSATS cabled systems have been successfully deployed on several major dams to acquire information for salmon protection and for development of more “fish-friendly” hydroelectric facilities.
Non-intrusive head movement analysis of videotaped seizures of epileptic origin.
Mandal, Bappaditya; Eng, How-Lung; Lu, Haiping; Chan, Derrick W S; Ng, Yen-Ling
2012-01-01
In this work we propose a non-intrusive video analytic system for analyzing patients' body-part movements in an Epilepsy Monitoring Unit. The system uses skin color modeling, head/face pose template matching, and face detection to analyze and quantify head movements. Epileptic patients' heads are analyzed holistically to distinguish seizure movements from normal random movements. The patient is not required to wear any special clothing, markers, or sensors, so the system is completely non-intrusive. The user initializes the person-specific skin color and selects a few face/head poses in the initial frames. The system then tracks the head/face and extracts spatio-temporal features. Support vector machines are then applied to these features to classify seizure-like movements versus normal random movements. Experiments are performed on numerous long-hour video sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system for pediatric epilepsy monitoring and seizure detection.
NASA Astrophysics Data System (ADS)
Myint, L. M. M.; Warisarn, C.
2017-05-01
Two-dimensional (2-D) interference is one of the prominent challenges in ultra-high-density recording systems such as bit-patterned media recording (BPMR). Multi-track joint 2-D detection with array-head reading can tackle this problem effectively by jointly processing the multiple readback signals from adjacent tracks. Moreover, it can robustly alleviate the impairments due to track mis-registration (TMR) and media noise. However, the computational complexity of such detectors is normally too high to implement in practice, even for a few tracks. Therefore, in this paper we focus on reducing the complexity of the multi-track joint 2-D Viterbi detector without paying a large penalty in performance. We propose a simplified multi-track joint 2-D Viterbi detector with a manageable complexity level for the BPMR multi-track multi-head (MTMH) system. In the proposed method, the complexity of the detector's trellis is reduced with the help of a joint-track equalization method that employs 1-D equalizers and a 2-D generalized partial response (GPR) target. Moreover, we also examine the performance of a full-fledged multi-track joint 2-D detector and conventional 2-D detection. The results show that the simplified detector can perform close to the full-fledged detector, especially when the system faces high media noise, at significantly lower complexity.
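For readers unfamiliar with the detection step, the sketch below shows a plain 1-D Viterbi detector over a partial-response trellis, the building block that the joint 2-D detector above extends across tracks. The target coefficients, symbol alphabet, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import product

def viterbi_detect(readback, target):
    """Maximum-likelihood sequence detection on a 1-D partial-response trellis.
    `target` is the equalization target (e.g. [1, 2, 1]); inputs are +/-1 symbols."""
    target = np.asarray(target, dtype=float)
    memory = len(target) - 1
    states = list(product((-1.0, 1.0), repeat=memory))   # last `memory` input bits
    cost = {s: 0.0 for s in states}
    survivors = {s: [] for s in states}
    for sample in readback:
        new_cost, new_survivors = {}, {}
        for state in states:
            for bit in (-1.0, 1.0):
                expected = float(np.dot(target, (bit,) + state))  # noiseless output
                metric = cost[state] + (sample - expected) ** 2
                next_state = ((bit,) + state)[:memory]
                if next_state not in new_cost or metric < new_cost[next_state]:
                    new_cost[next_state] = metric
                    new_survivors[next_state] = survivors[state] + [int(bit)]
        cost, survivors = new_cost, new_survivors
    return survivors[min(cost, key=cost.get)]  # bit sequence with the smallest metric
```

In the paper's 2-D setting, the state would additionally span bits from several adjacent tracks, which is exactly what inflates the trellis and motivates the proposed simplification.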
NASA Astrophysics Data System (ADS)
Kassim, Muhammad Fuad bin; Norzali Haji Mohd, Mohd
2017-08-01
Technology is ultimately about helping people, and it has created new opportunities for managing personal health care. Obesity continues to be a serious and growing public health concern in Malaysia: nearly half of Malaysians are overweight. Most dietary approaches do not automatically track and detect the calorie intake needed for weight loss, and currently used tools such as food diaries require users to record and track food calories manually, making them difficult to use daily. We are developing a new tool that counts food intake bite by bite by monitoring hand gestures and jaw motion during eating. Bite counting has been shown to support successful weight loss simply by monitoring the bites taken during a meal. The device used is a Kinect for Xbox One, whose depth camera detects the motion of a person's hand and face during food intake. Previous studies show that most bite-counting devices are wearable; the recent trend is toward non-wearable devices, because wearable ones are inconvenient and have high false-alarm rates. The proposed system obtains data from the Kinect, which monitors the user's hand and face gestures while eating. The hand and face gesture data are then sent to a microcontroller board, which recognizes and counts the bites taken by the user. The system recognizes bite patterns by following an algorithm for basic eating styles, whether eating by hand or with chopsticks. This system can help people trying to address overweight or eating disorders by monitoring their meal intake and controlling their eating rate.
An efficient method for facial component detection in thermal images
NASA Astrophysics Data System (ADS)
Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen
2015-04-01
A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
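As a rough illustration of the projection step described above, the following sketch binarizes a thermal face crop with a per-subject temperature threshold and uses a horizontal integral projection to pick out the warm periorbital band. The array layout, threshold handling, and function name are assumptions for illustration, not the authors' code.

```python
import numpy as np

def locate_periorbital_row(thermal_crop, temp_threshold):
    """Binarize a thermal face crop and use an integral projection
    (row sums of the binary image) to find the warm periorbital band."""
    binary = (thermal_crop >= temp_threshold).astype(np.uint8)
    row_projection = binary.sum(axis=1)        # integral projection along rows
    eye_row = int(np.argmax(row_projection))   # row index of the warmest band
    return eye_row, row_projection
```

The nose position could then be approximated below the detected band, as in the paper, but that step is not sketched here.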
A novel thermal face recognition approach using face pattern words
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
2010-04-01
A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring, and tracking, especially at nighttime. The system can be applied at airports, customs, or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long-wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern word (FPW) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all FPWs being compared (no further transforms). A high identification rate (97.44% with Top-1 match) has been achieved on our preliminary face dataset (39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.
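A minimal sketch of the final matching stage, assuming the face pattern words are binary vectors and that the eyeglasses mask simply marks bits to ignore; the names and bit layout are illustrative, not taken from the paper.

```python
import numpy as np

def masked_hamming_distance(fpw_query, fpw_gallery, mask=None):
    """Normalized Hamming distance between two binary face pattern words,
    ignoring bits flagged by an optional (e.g., eyeglasses) mask."""
    query = np.asarray(fpw_query, dtype=bool)
    gallery = np.asarray(fpw_gallery, dtype=bool)
    valid = np.ones_like(query) if mask is None else ~np.asarray(mask, dtype=bool)
    disagreements = np.logical_xor(query, gallery) & valid
    return disagreements.sum() / max(int(valid.sum()), 1)
```

Identification would then return the gallery FPW with the smallest distance to the query.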
An Orientation Sensor-Based Head Tracking System for Driver Behaviour Monitoring.
Zhao, Yifan; Görne, Lorenz; Yuen, Iek-Man; Cao, Dongpu; Sullman, Mark; Auger, Daniel; Lv, Chen; Wang, Huaji; Matthias, Rebecca; Skrypchuk, Lee; Mouzakitis, Alexandros
2017-11-22
Although at present legislation does not allow drivers in a Level 3 autonomous vehicle to engage in a secondary task, there may come a time when it does. Monitoring the behaviour of drivers engaging in various non-driving activities (NDAs) is crucial to decide how well the driver will be able to take over control of the vehicle. One limitation of the commonly used camera-based head tracking systems, which rely on the face, is that sufficient features of the face must be visible, which limits the detectable angle of head movement and thereby the measurable NDAs, unless multiple cameras are used. This paper proposes a novel orientation-sensor-based head tracking system that comprises twin devices, one of which measures the movement of the vehicle while the other measures the absolute movement of the head. Measurement error in the shaking and nodding axes was less than 0.4°, while error in the rolling axis was less than 2°. Comparison with a camera-based system, through in-house tests and on-road tests, showed that the main advantage of the proposed system is the ability to detect angles larger than 20° in the shaking and nodding axes. Finally, a case study demonstrated that the measurement of the shaking and nodding angles produced by the proposed system can effectively characterise the drivers' behaviour while engaged in the NDAs of chatting to a passenger and playing on a smartphone.
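Since the head-mounted sensor reports absolute orientation, the head movement relative to the cabin can be recovered by subtracting the vehicle sensor's reading axis by axis. The sketch below assumes both devices report (shake, nod, roll) angles in degrees, which is an illustrative simplification of the twin-device idea rather than the authors' processing chain.

```python
def relative_head_angles(head_abs, vehicle_abs):
    """Head orientation relative to the vehicle from two absolute-orientation
    sensors; each argument is a (shake, nod, roll) tuple in degrees."""
    def wrap(angle):
        # Keep angle differences continuous in [-180, 180).
        return (angle + 180.0) % 360.0 - 180.0
    return tuple(wrap(h - v) for h, v in zip(head_abs, vehicle_abs))
```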
Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.
Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo
2011-01-01
In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, and thus the motion of a targeted area may cause side effects to normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of change in illuminance in a tracking area we used an infrared light and USB cameras that were sensitive to the infrared light. The motion detection of a patient was performed by tracking his/her ears and nose with three USB cameras, where pattern matching between a predefined template image for each view and acquired images was done by an exhaustive search method with a general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was less than 0.7 mm, amounting to less than half of that of our previous system.
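The pattern-matching step described above amounts to an exhaustive correlation search of a predefined template over each camera view; the OpenCV-based sketch below illustrates that search on grayscale infrared frames (the paper ran it on a GPGPU), with the function name and correlation method chosen for illustration.

```python
import cv2

def track_landmark(frame_gray, template_gray):
    """Exhaustive normalized cross-correlation search for a template
    (e.g., an ear or nose patch) in the current infrared frame."""
    response = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    return max_loc, max_val  # top-left corner of the best match and its score
```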
Reduced specificity in emotion judgment in people with autism spectrum disorder
Wang, Shuo; Adolphs, Ralph
2017-01-01
There is a conflicting literature on facial emotion processing in autism spectrum disorder (ASD): both typical and atypical performance have been reported, and inconsistencies in the literature may stem from different processes examined (emotion judgment, face perception, fixations) as well as differences in participant populations. Here we conducted a detailed investigation of the ability to discriminate graded emotions shown in morphs of fear-happy faces, in a well-characterized high-functioning sample of participants with ASD and matched controls. Signal detection approaches were used in the analyses, and concurrent high-resolution eye-tracking was collected. Although people with ASD had typical thresholds for categorical fear and confidence judgments, their psychometric specificity to detect emotions across the entire range of intensities was reduced. However, fixation patterns onto the stimuli were typical and could not account for the reduced specificity of emotion judgment. Together, our results argue for a subtle and specific deficit in emotion perception in ASD that, from a signal detection perspective, is best understood as a reduced specificity due to increased noise in central processing of the face stimuli. PMID:28343960
Driver fatigue detection based on eye state.
Lin, Lizong; Huang, Chao; Ni, Xiaopeng; Wang, Jiawen; Zhang, Hao; Li, Xiao; Qian, Zhiqin
2015-01-01
Nowadays, more and more traffic accidents occur because of driver fatigue. In order to reduce and prevent them, in this study a detection method using the PERCLOS (percentage of eye closure time) parameter, based on machine vision, was developed. It determines whether a driver's eyes are in a fatigued state according to the PERCLOS value. The overall workflow includes face detection and tracking, detection and location of the human eye, human eye tracking, eye state recognition, and driver fatigue testing. The key aspects of the detection system are the detection and location of the human eyes and the driver fatigue test. The simplified method of measuring the driver's PERCLOS value is to calculate the ratio of frames in which the eyes are closed to the total number of frames in a given period. If the eyes are closed in more frames than the set threshold, the system alerts the driver. Many experiments showed that, in addition to its simple detection algorithm, rapid computing speed, and high detection and recognition accuracy, the system meets the real-time requirements of a driver fatigue detection system.
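A minimal sketch of the PERCLOS computation described above, assuming the per-frame eye state has already been classified; the 0.4 alert threshold and function name are illustrative assumptions, not the paper's values.

```python
def perclos(eye_closed_flags, alert_threshold=0.4):
    """PERCLOS over a window: fraction of frames in which the eyes are closed.
    `eye_closed_flags` is a sequence of booleans, one per frame."""
    closed_frames = sum(1 for closed in eye_closed_flags if closed)
    ratio = closed_frames / len(eye_closed_flags)
    return ratio, ratio > alert_threshold  # (PERCLOS value, raise-alert flag)
```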
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the 3D cartoon model feature points are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are performed under the constraint of Bézier curves. In this way, the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method can accurately simulate facial expressions. Finally, our method is compared with a previous method; the data show that our method greatly improves implementation efficiency.
The Role of Face Familiarity in Eye Tracking of Faces by Individuals with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Sterling, Lindsey; Dawson, Geraldine; Webb, Sara; Murias, Michael; Munson, Jeffrey; Panagiotides, Heracles; Aylward, Elizabeth
2008-01-01
It has been shown that individuals with autism spectrum disorders (ASD) demonstrate normal activation in the fusiform gyrus when viewing familiar, but not unfamiliar faces. The current study utilized eye tracking to investigate patterns of attention underlying familiar versus unfamiliar face processing in ASD. Eye movements of 18 typically…
ERIC Educational Resources Information Center
Riby, Deborah M.; Hancock, Peter J. B.
2009-01-01
The neuro-developmental disorders of Williams syndrome (WS) and autism can reveal key components of social cognition. Eye-tracking techniques were applied in two tasks exploring attention to pictures containing faces. Images were (i) scrambled pictures containing faces or (ii) pictures of scenes with embedded faces. Compared to individuals who…
An Orientation Sensor-Based Head Tracking System for Driver Behaviour Monitoring
Görne, Lorenz; Yuen, Iek-Man; Cao, Dongpu; Sullman, Mark; Auger, Daniel; Lv, Chen; Wang, Huaji; Matthias, Rebecca; Skrypchuk, Lee; Mouzakitis, Alexandros
2017-01-01
Although at present legislation does not allow drivers in a Level 3 autonomous vehicle to engage in a secondary task, there may come a time when it does. Monitoring the behaviour of drivers engaging in various non-driving activities (NDAs) is crucial to decide how well the driver will be able to take over control of the vehicle. One limitation of the commonly used camera-based head tracking systems, which rely on the face, is that sufficient features of the face must be visible, which limits the detectable angle of head movement and thereby the measurable NDAs, unless multiple cameras are used. This paper proposes a novel orientation-sensor-based head tracking system that comprises twin devices, one of which measures the movement of the vehicle while the other measures the absolute movement of the head. Measurement error in the shaking and nodding axes was less than 0.4°, while error in the rolling axis was less than 2°. Comparison with a camera-based system, through in-house tests and on-road tests, showed that the main advantage of the proposed system is the ability to detect angles larger than 20° in the shaking and nodding axes. Finally, a case study demonstrated that the measurement of the shaking and nodding angles produced by the proposed system can effectively characterise the drivers’ behaviour while engaged in the NDAs of chatting to a passenger and playing on a smartphone. PMID:29165331
Unconstrained face detection and recognition based on RGB-D camera for the visually impaired
NASA Astrophysics Data System (ADS)
Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian
2017-02-01
It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction are usually frontal, and the procedure for acquiring face images requires subjects to get close to the camera so that a frontal face and adequate illumination are guaranteed. Meanwhile, face labels are defined manually rather than automatically; most of the time, labels belonging to different classes need to be input one by one. These constraints hinder practical assistive applications for VIP. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it requires neither the frontal pose nor the uniform illumination required by previous algorithms. The contributions of this work lie in three aspects. First, real-time frontal-face synthesis enhancement is implemented, and the synthesized frontal faces help to increase the recognition rate, as demonstrated by the experimental results. Second, an RGB-D camera plays a significant role in our system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, we propose using neural networks to train the face recognition system, with Principal Component Analysis (PCA) applied to pre-refine the input data. This system is expected to help VIP become familiar with others and enable them to recognize people once the system is sufficiently trained.
Face landmark point tracking using LK pyramid optical flow
NASA Astrophysics Data System (ADS)
Zhang, Gang; Tang, Sikan; Li, Jiaquan
2018-04-01
LK pyramid optical flow is an effective method for object tracking in video; in this paper it is used for face landmark point tracking. The landmark points considered are the outer corner of the left eye, the inner corner of the left eye, the inner corner of the right eye, the outer corner of the right eye, the tip of the nose, the left corner of the mouth, and the right corner of the mouth. The landmark points are marked by hand in the first frame, and tracking performance is analyzed for the subsequent frames. Two kinds of conditions are considered: single factors, such as the normalized case, pose variation with slow movement, expression variation, illumination variation, occlusion, frontal face with rapid movement, and posed face with rapid movement; and combinations of factors, such as pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, and expression variation and occlusion. Global and local measures are introduced to evaluate tracking performance under the different factors and their combinations. The global measures are the number of images aligned successfully, the average alignment error, and the number of images aligned before failure; the local measures are the number of images aligned successfully for each facial component and the average alignment error for each component. To verify the tracking performance for face landmark points under the different cases, tests are carried out on image sequences we gathered. The results show that the LK pyramid optical flow method can track face landmark points under the normalized case, expression variation, illumination variation that does not affect facial details, and pose variation, and that different factors and combinations of factors affect the alignment performance of different landmark points differently.
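For concreteness, the sketch below propagates the seven hand-marked landmarks from one frame to the next with OpenCV's pyramidal Lucas-Kanade implementation; the window size and pyramid depth are illustrative defaults, not the settings used in the paper.

```python
import cv2
import numpy as np

def track_landmarks_lk(prev_gray, next_gray, prev_points):
    """Track face landmark points between consecutive grayscale frames
    with pyramidal Lucas-Kanade optical flow."""
    pts = np.asarray(prev_points, dtype=np.float32).reshape(-1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)
    # `status` flags which landmarks were tracked successfully.
    return next_pts.reshape(-1, 2), status.ravel().astype(bool)
```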
SU-D-BRA-02: Motion Assessment During Open Face Mask SRS Using CBCT and Surface Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, BB; Fox, CJ; Hartford, AC
Purpose: To assess the robustness of immobilization using open-face mask technology for linac-based stereotactic radiosurgery (SRS) with multiple non-coplanar arcs via repeated CBCT acquisition, with comparison to contemporaneous optical surface tracking data. Methods: 25 patients were treated in open faced masks with cranial SRS using 3–4 non-coplanar arcs. Repeated CBCT imaging was performed to verify the maintenance of proper patient positioning during treatment. Initial patient positioning was performed based on prescribed shifts and optical surface tracking. Positioning refinements employed rigid 3D-matching of the planning CT and CBCT images and were implemented via automated 6DOF couch control. CBCT imaging was repeated following the treatment of all non-transverse beams with associated couch kicks. Detected patient translations and rotations were recorded and automatically corrected. Optical surface tracking was applied throughout the treatments to monitor motion, and this contemporaneous patient positioning data was recorded to compare against CBCT data and 6DOF couch adjustments. Results: Initial patient positions were refined on average by translations of 3±1mm and rotations of ±0.9-degrees. Optical surface tracking corroborated couch corrections to within 1±1mm and ±0.4-degrees. Following treatment of the transverse and subsequent superior-oblique beam, average translations of 0.6±0.4mm and rotations of ±0.4-degrees were reported via CBCT, with optical surface tracking in agreement to within 1.1±0.6mm and ±0.6-degrees. Following treatment of the third beam, CBCT indicated additional translations of 0.4±0.2mm and rotations of ±0.3-degrees. Cumulative couch corrections resulted in 0.7 ± 0.4mm average magnitude translations and rotations of ±0.4-degrees. Conclusion: Based on CBCT measurements of patients during SRS, the open face mask maintained patient positioning to within 1.5mm and 1-degree with >95% confidence. Patient positioning determined by optical surface tracking agreed with CBCT assessment to within 1±1mm and ±0.6-degree rotations. These data support the use of 1–2mm PTV margins and repeated CBCT to maintain stereotactic positioning tolerances.
Wagner, Jennifer B.; Hirsch, Suzanna B.; Vogel-Farley, Vanessa K.; Redcay, Elizabeth; Nelson, Charles A.
2014-01-01
Individuals with autism spectrum disorder (ASD) often have difficulty with social-emotional cues. This study examined the neural, behavioral, and autonomic correlates of emotional face processing in adolescents with ASD and typical development (TD) using eye-tracking and event-related potentials (ERPs) across two different paradigms. Scanning of faces was similar across groups in the first task, but the second task found that face-sensitive ERPs varied with emotional expressions only in TD. Further, ASD showed enhanced neural responding to non-social stimuli. In TD only, attention to eyes during eye-tracking related to faster face-sensitive ERPs in a separate task; in ASD, a significant positive association was found between autonomic activity and attention to mouths. Overall, ASD showed an atypical pattern of emotional face processing, with reduced neural differentiation between emotions and a reduced relationship between gaze behavior and neural processing of faces. PMID:22684525
Tracking of multiple targets using online learning for reference model adaptation.
Pernkopf, Franz
2008-12-01
Recently, much work has been done on multiple-object tracking on the one hand and on reference model adaptation for single-object trackers on the other. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario and online learning that incrementally updates the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking; for this we report the average standard deviation of the trajectories over numerous tracking runs as a function of the learning rate.
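A minimal bootstrap particle filter step is sketched below to make the sample-propagation idea concrete; the random-walk motion model, the `likelihood` callback standing in for the (adaptive) appearance model, and the resampling rule are all illustrative assumptions, not the author's design.

```python
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_std=5.0, rng=None):
    """One predict-update-resample cycle of a bootstrap particle filter.
    `particles` is an (N, 2) array of candidate face positions and
    `likelihood(p)` scores a position against the current reference model."""
    rng = np.random.default_rng() if rng is None else rng
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight each particle by the appearance likelihood.
    weights = weights * np.array([likelihood(p) for p in particles]) + 1e-12
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```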
Laser heterodyne surface profiler
Sommargren, G.E.
1980-06-16
A method and apparatus are disclosed for testing the deviation of the face of an object from a flat smooth surface using a beam of coherent light of two plane-polarized components, one of a frequency constantly greater than the other by a fixed amount to produce a difference frequency with a constant phase to be used as a reference, and splitting the beam into its two components. The separate components are directed onto spaced apart points on the face of the object to be tested for smoothness while the face of the object is rotated on an axis normal to one point, thereby passing the other component over a circular track on the face of the object. The two components are recombined after reflection to produce a reflected frequency difference of a phase proportional to the difference in path length of one component reflected from one point to the other component reflected from the other point. The phase of the reflected frequency difference is compared with the reference phase to produce a signal proportional to the deviation of the height of the surface along the circular track with respect to the fixed point at the center, thereby to produce a signal that is plotted as a profile of the surface along the circular track. The phase detector includes a quarter-wave plate to convert the components of the reference beam into circularly polarized components, a half-wave plate to shift the phase of the circularly polarized components, and a polarizer to produce a signal of a shifted phase for comparison with the phase of the frequency difference of the reflected components detected through a second polarizer. Rotation of the half-wave plate can be used for phase adjustment over a full 360° range.
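As a hedged illustration of the phase-to-height conversion implied above, for a double-pass (reflection) heterodyne measurement at normal incidence the height deviation along the circular track is commonly related to the measured phase difference by the standard relation below, where λ is the laser wavelength and θ the angular position on the track; the patent text does not state this formula explicitly.

```latex
\Delta h(\theta) \;=\; \frac{\lambda}{4\pi}\,\Delta\varphi(\theta)
```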
Interior detail of main entry with railroad tracks; camera facing ...
Interior detail of main entry with railroad tracks; camera facing east. - Mare Island Naval Shipyard, Mechanics Shop, Waterfront Avenue, west side between A Street & Third Street, Vallejo, Solano County, CA
Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf
2016-01-01
This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.
Consistent detection and identification of individuals in a large camera network
NASA Astrophysics Data System (ADS)
Colombo, Alberto; Leung, Valerie; Orwell, James; Velastin, Sergio A.
2007-10-01
In the wake of an increasing number of terrorist attacks, counter-terrorism measures are now a main focus of many research programmes. An important issue for the police is the ability to track individuals and groups reliably through underground stations, and in the case of post-event analysis, to be able to ascertain whether specific individuals have been at the station previously. While there exist many motion detection and tracking algorithms, their reliable deployment in a large network is still ongoing research. Specifically, to track individuals through multiple views, on multiple levels and between levels, consistent detection and labelling of individuals is crucial. In view of these issues, we have developed a change detection algorithm to work reliably in the presence of periodic movements, e.g. escalators and scrolling advertisements, as well as a content-based retrieval technique for identification. The change detection technique automatically extracts periodically varying elements in the scene using Fourier analysis, and constructs a Markov model for the process. Training is performed online, and no manual intervention is required, making this system suitable for deployment in large networks. Experiments on real data show significant improvement over existing techniques. The content-based retrieval technique uses MPEG-7 descriptors to identify individuals. Given the environment under which the system operates, i.e. at relatively low resolution, this approach is suitable for short timescales. For longer timescales, other forms of identification such as gait, or if the resolution allows, face recognition, will be required.
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/ body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
Explaining Sad People's Memory Advantage for Faces.
Hills, Peter J; Marquardt, Zoe; Young, Isabel; Goodenough, Imogen
2017-01-01
Sad people recognize faces more accurately than happy people (Hills et al., 2011). We devised four hypotheses for this finding, which are tested against one another in the current study. The four hypotheses are: (1) sad people engage in more expert processing associated with face processing; (2) sad people are motivated to be more accurate than happy people in an attempt to repair their mood; (3) sad people have a defocused attentional strategy that allows more information about a face to be encoded; and (4) sad people scan more of the face than happy people, leading to more facial features being encoded. In Experiment 1, we found that dysphoria (sad mood often associated with depression) was correlated neither with the face-inversion effect (a measure of expert processing) nor with response times, but was correlated with defocused attention and recognition accuracy. Experiment 2 established that dysphoric participants detected changes made to more facial features than happy participants. In Experiment 3, using eye-tracking, we found that sad-induced participants sampled more of the face whilst avoiding the eyes. Experiment 4 showed that sad-induced people demonstrated a smaller own-ethnicity bias. These results indicate that sad people show different attentional allocation to faces than happy and neutral people.
Unsupervised real-time speaker identification for daily movies
NASA Astrophysics Data System (ADS)
Li, Ying; Kuo, C.-C. Jay
2002-07-01
The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real-time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues will be employed and subsequently combined in a probabilistic framework to identify speakers. Particularly, audio information is used to identify speakers with a maximum likelihood (ML)-based approach while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate for speakers' acoustic variations along time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which shows a promising future of the proposed audiovisual-based unsupervised speaker identification system.
The Role of Face Familiarity in Eye Tracking of Faces by Individuals with Autism Spectrum Disorders
Dawson, Geraldine; Webb, Sara; Murias, Michael; Munson, Jeffrey; Panagiotides, Heracles; Aylward, Elizabeth
2010-01-01
It has been shown that individuals with autism spectrum disorders (ASD) demonstrate normal activation in the fusiform gyrus when viewing familiar, but not unfamiliar faces. The current study utilized eye tracking to investigate patterns of attention underlying familiar versus unfamiliar face processing in ASD. Eye movements of 18 typically developing participants and 17 individuals with ASD were recorded while passively viewing three face categories: unfamiliar non-repeating faces, a repeating highly familiar face, and a repeating previously unfamiliar face. Results suggest that individuals with ASD do not exhibit more normative gaze patterns when viewing familiar faces. A second task assessed facial recognition accuracy and response time for familiar and novel faces. The groups did not differ on accuracy or reaction times. PMID:18306030
Real Time 3D Facial Movement Tracking Using a Monocular Camera
Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng
2016-01-01
The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
Real Time 3D Facial Movement Tracking Using a Monocular Camera.
Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng
2016-07-25
The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
Ground radar detection of meteoroids in space
NASA Technical Reports Server (NTRS)
Kessler, D. J.; Landry, P. M.; Gabbard, J. R.; Moran, J. L. T.
1980-01-01
A special test to lower the detection threshold for satellite fragments potentially dangerous to spacecraft was carried out by NORAD for NASA, using modified radar software. The Perimeter Acquisition Radar Attack Characterization System, a large planar-faced phased-array radar, operates at a nominal 430 MHz and produces 120 pulses per second, 45 of which were dedicated to search. In a time period of 8.4 hours of observations over three days, over 6000 objects were detected and tracked, of which 37 were determined to have velocities greater than escape velocity. Six of these were larger objects with radar cross sections greater than 0.1 sq m and were probably orbiting satellites. A table gives the flux of both observed groups.
Eye tracking reveals a crucial role for facial motion in recognition of faces by infants
Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang
2015-01-01
Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387
Through the eyes of the own-race bias: eye-tracking and pupillometry during face recognition.
Wu, Esther Xiu Wen; Laeng, Bruno; Magnussen, Svein
2012-01-01
People are generally better at remembering faces of their own race than faces of a different race, and this effect is known as the own-race bias (ORB) effect. We used eye-tracking and pupillometry to investigate whether Caucasian and Asian face stimuli elicited different-looking patterns in Caucasian participants in a face-memory task. Consistent with the ORB effect, we found better recognition performance for own-race faces than other-race faces, and shorter response times. In addition, at encoding, eye movements and pupillary responses to Asian faces (i.e., the other race) were different from those to Caucasian faces (i.e., the own race). Processing of own-race faces was characterized by more active scanning, with a larger number of shorter fixations, and more frequent saccades. Moreover, pupillary diameters were larger when viewing other-race than own-race faces, suggesting a greater cognitive effort when encoding other-race faces.
ERIC Educational Resources Information Center
Liu, Shaoying; Quinn, Paul C.; Wheeler, Andrea; Xiao, Naiqi; Ge, Liezhong; Lee, Kang
2011-01-01
Fixation duration for same-race (i.e., Asian) and other-race (i.e., Caucasian) female faces by Asian infant participants between 4 and 9 months of age was investigated with an eye-tracking procedure. The age range tested corresponded with prior reports of processing differences between same- and other-race faces observed in behavioral looking time…
NASA Astrophysics Data System (ADS)
Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina
2018-03-01
Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows minimal use of these techniques in pedestrian behavior and safety analysis, which might be due to the technical complexities of processing pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked throughout the sequence, which is a challenging task, especially in a congested open-outdoor urban space. A multipedestrian tracker based on an interframe detection-association process was proposed and evaluated. The tracker results are used to implement an automatic, video-processing-based tool for collecting data on pedestrians crossing the street. Variations in the instantaneous speed allow detection of the street-crossing phases (approach, waiting, and crossing); these are addressed for the first time in pedestrian road safety analysis to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the proposed procedures have significant potential to automate the data collection process.
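To make the speed-based phase detection concrete, the sketch below derives instantaneous speeds from a ground-plane trajectory and labels samples as waiting or moving; the 0.3 m/s stop threshold and the two-way labelling are illustrative simplifications of the three-phase analysis described above.

```python
import numpy as np

def label_crossing_samples(positions_m, fps, stop_speed=0.3):
    """Label trajectory samples as 'waiting' or 'moving' from instantaneous speed.
    `positions_m` is an (N, 2) array of ground-plane coordinates in metres."""
    velocities = np.diff(positions_m, axis=0) * fps      # m/s between frames
    speeds = np.linalg.norm(velocities, axis=1)
    labels = np.where(speeds < stop_speed, "waiting", "moving")
    return labels, speeds
```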
Hand motion segmentation against skin colour background in breast awareness applications.
Hu, Yuqin; Naguib, Raouf N G; Todman, Alison G; Amin, Saad A; Al-Omishy, Hassanein; Oikonomou, Andreas; Tucker, Nick
2004-01-01
Skin colour modelling and classification play significant roles in face and hand detection, recognition and tracking. A hand is an essential tool used in breast self-examination, which needs to be detected and analysed during the process of breast palpation. However, the background of a woman's moving hand is her breast that has the same or similar colour as the hand. Additionally, colour images recorded by a web camera are strongly affected by the lighting or brightness conditions. Hence, it is a challenging task to segment and track the hand against the breast without utilising any artificial markers, such as coloured nail polish. In this paper, a two-dimensional Gaussian skin colour model is employed in a particular way to identify a breast but not a hand. First, an input image is transformed to YCbCr colour space, which is less sensitive to the lighting conditions and more tolerant of skin tone. The breast, thus detected by the Gaussian skin model, is used as the baseline or framework for the hand motion. Secondly, motion cues are used to segment the hand motion against the detected baseline. Desired segmentation results have been achieved and the robustness of this algorithm is demonstrated in this paper.
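The 2-D Gaussian skin-colour model mentioned above can be evaluated per pixel in the CbCr plane as in the sketch below; the mean and covariance would be fitted to the subject's skin pixels at initialization, and the array layout and function name are assumptions made for illustration.

```python
import numpy as np

def skin_likelihood(cb, cr, mean, cov):
    """Per-pixel likelihood under a 2-D Gaussian skin-colour model in CbCr.
    `cb` and `cr` are arrays of chrominance values; `mean` is (2,), `cov` is (2, 2)."""
    x = np.stack([cb, cr], axis=-1).astype(float) - mean
    inv_cov = np.linalg.inv(cov)
    mahalanobis = np.einsum("...i,ij,...j->...", x, inv_cov, x)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * mahalanobis)
```

Thresholding this likelihood yields the breast/skin baseline against which the moving hand is then segmented using motion cues.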
Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.
Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang
2015-06-01
Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.
14. VIEW SHOWING UPSTREAM FACE OF HORSE MESA. TRACK FROM ...
14. VIEW SHOWING UPSTREAM FACE OF HORSE MESA. TRACK FROM AGGREGATE BARGES TO MIXING PLANT IS AT LOWER LEFT, RIGHT SPILLWAY CHUTE IS TAKING FORM AT UPPER RIGHT April 29, 1927 - Horse Mesa Dam, Salt River, 65 miles East of Phoenix, Phoenix, Maricopa County, AZ
2011-09-01
and bond integrity. Lastly, the PZT transducers are also utilized to track the lower frequency mechanical strains created during fatigue loading...face of the coupon and on either side of the gage section. Each coupon undergoes cyclic tensile loading to initiate and grow fatigue cracks. At...various intervals, the fatigue cycling is paused and the coupon is visually inspected for crack initiation and growth. While the cycling is paused
Kotani, Manato; Shimono, Kohei; Yoneyama, Toshihiro; Nakako, Tomokazu; Matsumoto, Kenji; Ogi, Yuji; Konoike, Naho; Nakamura, Katsuki; Ikeda, Kazuhito
2017-09-01
Eye tracking systems are used to investigate eye position and gaze patterns presumed to reflect eye contact in humans. Eye contact is a useful biomarker of social communication and is known to be deficient in patients with autism spectrum disorders (ASDs). Interestingly, the same eye tracking systems have been used to directly compare face scanning patterns in some non-human primates to those in humans. Thus, eye tracking is expected to be a useful translational technique for investigating not only social attention and visual interest, but also the effects of psychiatric drugs, such as oxytocin, a neuropeptide that regulates social behavior. In this study, we report a newly established method for eye tracking in common marmosets, unique New World primates that, like humans, use eye contact as a means of communication. Our investigation aimed to characterize these primates' face-scanning patterns and evaluate the effects of oxytocin on their eye contact behavior. We found that normal common marmosets spend more time viewing the eye region in a picture of a common marmoset than the mouth region or a scrambled picture. In the oxytocin experiment, the change in the eyes/face ratio was significantly greater in the oxytocin group than in the vehicle group. Moreover, the oxytocin-induced increase in the change in the eyes/face ratio was completely blocked by the oxytocin receptor antagonist L-368,899. These results indicate that eye tracking in common marmosets may be useful for evaluating drug candidates targeting psychiatric conditions, especially ASDs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Preferential attention to animals and people is independent of the amygdala
Tsuchiya, Naotsugu; New, Joshua; Hurlemann, Rene; Adolphs, Ralph
2015-01-01
The amygdala is thought to play a critical role in detecting salient stimuli. Several studies have taken ecological approaches to investigating such saliency, and argue for domain-specific effects for processing certain natural stimulus categories, in particular faces and animals. Linking this to the amygdala, neurons in the human amygdala have been found to respond strongly to faces and also to animals. However, the amygdala’s necessary role for such category-specific effects at the behavioral level remains untested. Here we tested four rare patients with bilateral amygdala lesions on an established change-detection protocol. Consistent with prior published studies, healthy controls showed reliably faster and more accurate detection of people and animals, as compared with artifacts and plants. So did all four amygdala patients: there were no differences in phenomenal change blindness, in behavioral reaction time to detect changes or in eye-tracking measures. The findings provide decisive evidence against a critical participation of the amygdala in rapid initial processing of attention to animate stimuli, suggesting that the necessary neural substrates for this phenomenon arise either in other subcortical structures (such as the pulvinar) or within the cortex itself. PMID:24795434
Explaining Sad People’s Memory Advantage for Faces
Hills, Peter J.; Marquardt, Zoe; Young, Isabel; Goodenough, Imogen
2017-01-01
Sad people recognize faces more accurately than happy people (Hills et al., 2011). We devised four hypotheses for this finding, which are tested against one another in the current study. The four hypotheses are: (1) sad people engage in more expert processing associated with face processing; (2) sad people are motivated to be more accurate than happy people in an attempt to repair their mood; (3) sad people have a defocused attentional strategy that allows more information about a face to be encoded; and (4) sad people scan more of the face than happy people, leading to more facial features being encoded. In Experiment 1, we found that dysphoria (sad mood often associated with depression) was correlated neither with the face-inversion effect (a measure of expert processing) nor with response times, but was correlated with defocused attention and recognition accuracy. Experiment 2 established that dysphoric participants detected changes made to more facial features than happy participants. In Experiment 3, using eye-tracking, we found that sad-induced participants sampled more of the face whilst avoiding the eyes. Experiment 4 showed that sad-induced people demonstrated a smaller own-ethnicity bias. These results indicate that sad people show different attentional allocation to faces than happy and neutral people. PMID:28261138
Shechner, Tomer; Jarcho, Johanna M.; Britton, Jennifer C.; Leibenluft, Ellen; Pine, Daniel S.; Nelson, Eric E.
2012-01-01
Background Previous studies demonstrate that anxiety is characterized by biased attention toward threats, typically measured by differences in motor reaction time to threat and neutral cues. Using eye-tracking methodology, the current study measured attention biases in anxious and nonanxious youth, using unrestricted free viewing of angry, happy, and neutral faces. Methods Eighteen anxious and 15 nonanxious youth (8–17 years old) passively viewed angry-neutral and happy-neutral face pairs for 10 s while their eye movements were recorded. Results Anxious youth displayed a greater attention bias toward angry faces than nonanxious youth, and this bias occurred in the earliest phases of stimulus presentation. Specifically, anxious youth were more likely to direct their first fixation to angry faces, and they made faster fixations to angry than neutral faces. Conclusions Consistent with findings from earlier, reaction-time studies, the current study shows that anxious youth, like anxious adults, exhibit biased orienting to threat-related stimuli. This study adds to the existing literature by documenting that threat biases in eye-tracking patterns are manifest at initial attention orienting. PMID:22815254
Anti Theft Mechanism Through Face recognition Using FPGA
NASA Astrophysics Data System (ADS)
Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya
2012-11-01
The use of a vehicle is a must for almost everyone, and at the same time, protection from theft is very important. Vehicle theft can be prevented remotely by an authorized person. The location of the car can be found using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, and a comparison with preloaded faces is performed for authorization. The vehicle will start only when an authorized person's face is identified. In the event of a theft attempt or an unauthorized person's attempt to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. The authorized person can then alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed in VHDL on a Spartan-3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in systems wherever face recognition or detection is needed, such as airports, international borders, and banking applications.
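A compact sketch of the PCA (eigenface) matching step described above, assuming the preloaded faces are stored as flattened grayscale vectors; the component count, function names, and nearest-neighbour rule are illustrative assumptions, not the MATLAB implementation used in the paper.

```python
import numpy as np

def fit_eigenfaces(train_faces, n_components=20):
    """Learn a mean face and the top principal components (eigenfaces)
    from an (N, H*W) array of flattened training face images."""
    mean_face = train_faces.mean(axis=0)
    centered = train_faces - mean_face
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]

def match_face(probe, mean_face, eigenfaces, gallery_coeffs):
    """Project a probe face into eigenface space and return the index of
    the nearest gallery face (smallest Euclidean distance)."""
    coeffs = eigenfaces @ (probe - mean_face)
    return int(np.argmin(np.linalg.norm(gallery_coeffs - coeffs, axis=1)))
```

Here `gallery_coeffs` would hold the projections of the preloaded authorized faces, computed once with the same eigenfaces.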
NASA Astrophysics Data System (ADS)
Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen
2017-06-01
Vehicle tracking technology is currently one of the most active research topics in machine vision and an important part of intelligent transportation systems. However, in both theory and practice, it still faces many challenges, including real-time performance and robustness. In video surveillance, targets need to be detected in real time and their positions computed accurately in order to judge their motives. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. However, in more complex environments, it is easy to lose the target because of mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional tracking algorithms typically represent the tracking result by a simple geometric shape such as a rectangle or circle, so they cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object-tracking technique, the Mean-Shift algorithm, with an image segmentation algorithm, the Active-Contour model, to obtain object outlines during the tracking process and automatically handle topology changes. Meanwhile, the outline information is used to aid and improve the tracking algorithm.
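As a rough illustration of the Mean-Shift half of this combination, the sketch below tracks a color-histogram target with OpenCV's meanShift; the Active-Contour refinement that extracts the object outline is omitted, and the input file name and initial window are placeholders rather than anything from the paper.

```python
# Mean-Shift tracking of a color-histogram target (OpenCV); the Active-Contour
# outline extraction described in the paper is not reproduced here.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 60                  # assumed initial target window
roi = frame[y:y+h, x:x+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Mean-Shift moves the window toward the mode of the back-projection.
    _, (x, y, w, h) = cv2.meanShift(back_proj, (x, y, w, h), term_crit)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```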
Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography
Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.
2016-01-01
Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
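The pupil-tracking step described above reduces to locating the centroid of the dark pupil in each oculography frame. A minimal OpenCV sketch of that idea is shown below, assuming a plain intensity threshold; the actual system's optics, thresholds, and camera interface are not specified at this level of detail in the abstract.

```python
# Hedged sketch of pupil-centroid detection: threshold the dark pupil, keep the
# largest blob, and return its centroid from image moments.
import cv2
import numpy as np

def pupil_centroid(gray_frame, thresh=40):
    """Return (cx, cy) of the largest dark blob, assumed to be the pupil."""
    _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```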
Electronic properties of prismatic modifications of single-wall carbon nanotubes
NASA Astrophysics Data System (ADS)
Tomilin, O. B.; Muryumin, E. E.; Rodionova, E. V.; Ryskina, N. P.
2018-01-01
The article shows the possibility of targeted modification of prismatic single-walled carbon nanotubes (SWCNTs) by regular chemisorption of fluorine atoms on the graphene surface. It is shown that the electronic properties of prismatic SWCNT modifications are determined by the interaction of π- and ρ(in-plane)-electron conjugation in the carbon-conjugated subsystems (tracks) formed in the faces. The contributions of π- and ρ(in-plane)-electron conjugation depend on the structural characteristics of the tracks. It was found that a minimum deviation of the track from the plane of the prism face and a maximum track width ensure the maximum contribution of the π-electron conjugation, and the band gap of the prismatic modifications of the SWCNT tends to the band gap of the hydrocarbon analog of the carbon track. It is established that a maximum deviation of the track from the plane of the prism face and a maximum track width ensure the maximum contribution of the ρ(in-plane)-electron conjugation, and the band gap of the prismatic modifications of the SWCNT tends to the band gap of the unmodified carbon nanotube. The calculation of the model systems has been carried out using an ab initio Hartree-Fock method in the 3-21G basis.
Arizpe, Joseph; Kravitz, Dwight J.; Walsh, Vincent; Yovel, Galit; Baker, Chris I.
2016-01-01
The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using different analysis methods. While we detect statistically significant, though subtle, differences in fixation pattern using an Area of Interest (AOI) approach, we fail to detect significant differences when applying a spatial density map approach. Though there were no significant differences in the spatial density maps, the qualitative patterns matched the results from the AOI analyses reflecting how, in certain contexts, Area of Interest (AOI) analyses can be more sensitive in detecting the differential fixation patterns than spatial density analyses, due to spatial pooling of data with AOIs. AOI analyses, however, also come with the limitation of requiring a priori specification. These findings provide evidence that the conflicting reports in the prior literature may be at least partially accounted for by the differences in the statistical sensitivity associated with the different analysis methods employed across studies. Overall, our results suggest that detection of differences in eye-movement patterns can be analysis-dependent and rests on the assumptions inherent in the given analysis. PMID:26849447
ERIC Educational Resources Information Center
Wagner, Jennifer B.; Hirsch, Suzanna B.; Vogel-Farley, Vanessa K.; Redcay, Elizabeth; Nelson, Charles A.
2013-01-01
Individuals with autism spectrum disorder (ASD) often have difficulty with social-emotional cues. This study examined the neural, behavioral, and autonomic correlates of emotional face processing in adolescents with ASD and typical development (TD) using eye-tracking and event-related potentials (ERPs) across two different paradigms. Scanning of…
Visual Processing of Faces in Individuals with Fragile X Syndrome: An Eye Tracking Study
ERIC Educational Resources Information Center
Farzin, Faraz; Rivera, Susan M.; Hessl, David
2009-01-01
Gaze avoidance is a hallmark behavioral feature of fragile X syndrome (FXS), but little is known about whether abnormalities in the visual processing of faces, including disrupted autonomic reactivity, may underlie this behavior. Eye tracking was used to record fixations and pupil diameter while adolescents and young adults with FXS and sex- and…
ERIC Educational Resources Information Center
Nix, J. Vincent; Michalak, Megan B.
2012-01-01
Students entering college face many obstacles to success. Students who earned a General Educational Development (GED) credential face additional barriers that must be addressed in order for them to succeed in higher education. The Successful Transitions and Retention Track Program employs a holistic approach to addressing the needs of GED holders entering college.
Tracking and Student Achievement: The Role of Instruction as a Mediator
ERIC Educational Resources Information Center
Schmidt, Rebecca Anne
2013-01-01
Most public schools and districts must face the problem of how to help low-achieving students and efficiently target resources, particularly in the face of accountability under No Child Left Behind. One policy that has been employed is grouping students into classrooms by their measured or perceived ability--a process known as tracking. Research…
Is the Thatcher Illusion Modulated by Face Familiarity? Evidence from an Eye Tracking Study
2016-01-01
Thompson (1980) first discovered and described the Thatcher Illusion, where participants instantly perceive an upright face with inverted eyes and mouth as grotesque, but fail to do so when the same face is inverted. One prominent but controversial explanation is that the processing of configural information is disrupted in inverted faces. Studies investigating the Thatcher Illusion have used either famous or non-famous faces. Highly familiar faces are often thought to be processed in a pronounced configural mode, so they seem ideal candidates to be tested against unfamiliar faces within a single Thatcher study, but this has not been addressed so far. In our study, participants evaluated 16 famous and 16 non-famous faces for their grotesqueness. We tested whether familiarity (famous/non-famous faces) modulates reaction times, correctness of grotesqueness assessments (accuracy), and eye movement patterns for the factors orientation (upright/inverted) and Thatcherisation (Thatcherised/non-Thatcherised). On a behavioural level, familiarity effects were only observable via face inversion (higher accuracy and sensitivity for famous compared to non-famous faces) but not via Thatcherisation. Regarding eye movements, however, Thatcherisation influenced the scanning of famous and non-famous faces, for instance, in scanning the mouth region of the presented faces (higher number, duration and dwell time of fixations for famous compared to non-famous faces if Thatcherised). Altogether, famous faces seem to be processed in a more elaborate, more expertise-based way than non-famous faces, whereas non-famous, inverted faces seem to cause difficulties in accurate and sensitive processing. Results are further discussed in light of existing studies of familiar vs. unfamiliar face processing. PMID:27776145
Eye/Head Tracking Technology to Improve HCI with iPad Applications
Lopez-Basterretxea, Asier; Mendez-Zorrilla, Amaia; Garcia-Zapirain, Begoña
2015-01-01
In order to improve human computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad's front camera and eye/head tracking technology. With this capability operating in the background, the user can control already developed or new applications for the iPad by moving their eyes and/or head. There are many techniques currently used to detect facial features, such as the eyes or even the face itself. Open-source libraries exist for this purpose, such as OpenCV, which enable very reliable and accurate detection algorithms, such as Haar cascades, to be applied using very high-level programming. All processing is undertaken in real time, and it is therefore important to pay close attention to the use of the limited resources (processing capacity) of devices such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes and with/without glasses). These tests were performed to assess user/device interaction and to ascertain whether it works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises taken into consideration. The results showed that the Haar cascade was highly effective, detecting faces in 100% of cases, unlike the eyes and pupil, where interference (light and shade) led to lower effectiveness. In addition to ascertaining the effectiveness of the system via these exercises, the demo application has also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging and these systems may continue to be developed if extended and updated in the future. PMID:25621603
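For orientation, the following OpenCV sketch shows the kind of Haar-cascade face and eye detection loop the abstract refers to. It reads from a generic OpenCV-accessible camera rather than the iPad front camera, and the detection parameters are common defaults, not the paper's tuned values.

```python
# Haar-cascade face detection, with eye detection restricted to each face region.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # Search for eyes only inside the detected face region to limit cost.
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
```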
Keeping on Track: Performance Profiles of Low Performers in Academic Educational Tracks
ERIC Educational Resources Information Center
Reed, Helen C.; van Wesel, Floryt; Ouwehand, Carolijn; Jolles, Jelle
2015-01-01
In countries with high differentiation between academic and vocational education, an individual's future prospects are strongly determined by the educational track to which he or she is assigned. This large-scale, cross-sectional study focuses on low-performing students in academic tracks who face being moved to a vocational track. If more is…
Prades, J; Espinàs, J A; Font, R; Argimon, J M; Borràs, J M
2011-01-01
Background: The Cancer Fast-track Programme's aim was to reduce the time that elapsed between well-founded suspicion of breast, colorectal and lung cancer and the start of initial treatment in Catalonia (Spain). We sought to analyse its implementation and overall effectiveness. Methods: A quantitative analysis of the programme was performed using data generated by the hospitals on the basis of seven fast-track monitoring indicators for the period 2006–2009. In addition, we conducted a qualitative study, based on 83 semistructured interviews with primary and specialised health professionals and health administrators, to obtain their perception of the programme's implementation. Results: About half of all new patients with breast, lung or colorectal cancer were diagnosed via the fast track, though the cancer detection rate declined across the period. Mean time from detection of suspected cancer in primary care to start of initial treatment was 32 days for breast, 30 for colorectal and 37 for lung cancer (2009). Professionals associated with the implementation of the programme showed that general practitioners faced with suspicion of cancer had changed their conduct with the aim of preventing lags. Furthermore, hospitals were found to have pursued three specific implementation strategies (top-down, consensus-based and participatory), which made for the cohesion and sustainability of the circuits. Conclusion: The programme has contributed to speeding up diagnostic assessment and treatment of patients with suspicion of cancer, and to clarifying the patient pathway between primary and specialised care. PMID:21829194
Richmond, Jenny L; Power, Jessica
2014-09-01
Relational memory, or the ability to bind components of an event into a network of linked representations, is a primary function of the hippocampus. Here we extend eye-tracking research showing that infants are capable of forming memories for the relation between arbitrarily paired scenes and faces, by looking at age-related changes in relational memory over the first year of life. Six- and 12-month-old infants were familiarized with pairs of faces and scenes before being tested with arrays of three familiar faces that were presented on a familiar scene. Preferential looking at the face that matches the scene is typically taken as evidence of relational memory. The results showed that while 6-month-olds showed very early preferential looking when face/scene pairs were tested immediately, 12-month-olds did not exhibit evidence of relational memory either immediately or after a short delay. Theoretical implications for the functional development of the hippocampus and practical implications for the use of eye tracking to measure memory during early life are discussed. © 2014 Wiley Periodicals, Inc.
An Eye Tracking Investigation of Attentional Biases towards Affect in Young Children
ERIC Educational Resources Information Center
Burris, Jessica L.; Barry-Anwar, Ryan A.; Rivera, Susan M.
2017-01-01
This study examines attentional biases in the presence of angry, happy and neutral faces using a modified eye tracking version of the dot probe task (DPT). Participants were 111 young children between 9 and 48 months. Children passively viewed an affective attention bias task that consisted of a face pairing (neutral paired with either neutral,…
Lift Every Voice and Sing: Faculty of Color Face the Challenges of the Tenure Track
ERIC Educational Resources Information Center
Garrison-Wade, Dorothy F.; Diggs, Gregory A.; Estrada, Diane; Galindo, Rene
2012-01-01
This article highlights some of the obstacles facing tenure-track faculty of color in academia. Through the perspective of Critical Race Theory (CRT) and by using a counterstories method, four faculty of color share their experiences as they explore diversity issues through engaging in a 1-year self-study. Findings of this qualitative study…
Issues in national missile defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
1998-12-01
Strategic missiles and weapons are proliferating rapidly; thus, the US and its Allies are likely to face both capable bilateral threats and multilateral configurations with complex coalitions for which defenses could be essential for stability. Current hit-to-kill interceptor and radar and infrared detection, track, and discrimination technology should suffice for limited threats, but it is necessary to meet those threats in time while maintaining growth potential for the more sophisticated threats likely to follow. National Missile Defense faces a confusing array of threats, programs, and alternatives, but the technologies in development are clearly an appropriate first step towards any of them. They are likely to succeed in the near term; the challenge is to retain flexibility to provide needed options in the mid and long terms.
Nozadi, Sara S; Spinrad, Tracy L; Johnson, Scott P; Eisenberg, Nancy
2018-06-01
The current study examined whether an important temperamental characteristic, effortful control (EC), moderates the associations between dispositional anger and sadness, attention biases, and social functioning in a group of preschool-aged children (N = 77). Preschoolers' attentional biases toward angry and sad facial expressions were assessed using eye-tracking, and we obtained teachers' reports of children's temperament and social functioning. Associations of dispositional anger and sadness with time looking at relevant negative emotional stimuli were moderated by children's EC, but relations between time looking at emotional faces and indicators of social functioning, for the most part, were direct and not moderated by EC. In particular, time looking at angry faces (and low EC) predicted high levels of aggressive behaviors, whereas longer time looking at sad faces (and high EC) predicted higher social competence. Finally, latency to detect angry faces predicted aggressive behavior under conditions of average and low levels of EC. Findings are discussed in terms of the importance of differentiating between components of attention biases toward distinct negative emotions, and implications for attention training. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Large scale track analysis for wide area motion imagery surveillance
NASA Astrophysics Data System (ADS)
van Leeuwen, C. J.; van Huis, J. R.; Baan, J.
2016-10-01
Wide Area Motion Imagery (WAMI) enables image-based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high-quality track information for more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows us to quickly obtain only a part, or a sub-sampling, of the original high-resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale, by skipping to the correct frames and reconstructing the image. Location-based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
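The tile-addressing idea behind such a multi-scale format can be illustrated in a few lines of Python: assuming a pyramid in which level L is the full frame downsampled by 2^L and cut into fixed-size tiles, the sketch below computes which tiles must be fetched to reconstruct a region of interest. The tile size, layout, and example coordinates are assumptions, not the paper's file-format specification.

```python
# Which tiles cover a region of interest at a given pyramid level?
def tiles_for_roi(x, y, w, h, level, tile_size=512):
    """Return (col, row) indices of the tiles covering the ROI at a pyramid level."""
    scale = 2 ** level
    x0, y0 = x // scale, y // scale              # ROI in level-L pixel coordinates
    x1, y1 = (x + w) // scale, (y + h) // scale
    cols = range(x0 // tile_size, x1 // tile_size + 1)
    rows = range(y0 // tile_size, y1 // tile_size + 1)
    return [(c, r) for r in rows for c in cols]

# Example: a hypothetical WAMI frame viewed at 1/8 resolution around one city block.
print(tiles_for_roi(x=12_000, y=9_500, w=2_000, h=1_500, level=3))
```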
6. INTERIOR, ORIGINAL BLOCKHOUSE SECTION OF BUILDING 0512, NORTH WALL ...
6. INTERIOR, ORIGINAL BLOCKHOUSE SECTION OF BUILDING 0512, NORTH WALL FACING TEST TRACK. - Edwards Air Force Base, South Base Sled Track, Instrumentation & Control Building, South of Sled Track, Station "50" area, Lancaster, Los Angeles County, CA
Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa
2014-01-01
Objective To develop new standardized eye-tracking-based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. Method Eye-tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. Conclusion The results suggest that eye-tracking-based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102
Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder
ERIC Educational Resources Information Center
McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine
2011-01-01
This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…
Current development of UAV sense and avoid system
NASA Astrophysics Data System (ADS)
Zhahir, A.; Razali, A.; Mohd Ajir, M. R.
2016-10-01
As unmanned aerial vehicles (UAVs) are now gaining high interest from the civil and commercial market, the automatic sense and avoid (SAA) system is currently one of the essential features in the UAV research spotlight. Several sensor types employed in current SAA research, and sensor fusion technology that offers a great opportunity for improving detection and tracking systems, are presented here. The purpose of this paper is to provide an overview of SAA system development in general, as well as the current challenges facing UAV researchers and designers.
Chemical Tracking Systems: Not Your Usual Global Positioning System!
ERIC Educational Resources Information Center
Roy, Ken
2007-01-01
The haphazard storing and tracking of chemicals in the laboratory is a serious safety issue facing science teachers. To get control of your chemicals, try implementing a "chemical tracking system". A chemical tracking system (CTS) is a database of chemicals used in the laboratory. If implemented correctly, a CTS will reduce purchasing costs,…
New generation of naval IRST: example of EOMS NG
NASA Astrophysics Data System (ADS)
Maltese, Dominique; Deyla, Olivier; Vernet, Guillaume; Preux, Carole; Hilt, Gisèle; Nougues, Pierre-Olivier, II
2010-04-01
Modern warships ranging from Air Warfare Destroyers to Offshore Patrol Vessels (OPV) and Fast Patrol Boats have to deal with an ever increasing variety of threats, both symmetric and asymmetric, for self-protection. This last category has introduced new requirements for combat system sensors and effectors: situation awareness in proximity to the own ship has become a priority, as well as the need for new, lethal or non-lethal effectors for timely and proportional response. Naval Combat Systems (CS) architects are then faced with an alternative: they can either use existing CS sensors, C2 and weapons, or else rely on new, specialized equipment. Both approaches have their pros and cons, with the cost issue not necessarily trivial to assess. In this paper, we present a multifunction system that is both a passive IRST (InfraRed Search and Track) sensor, designed to automatically detect and track air and surface threats, and an Electro Optical Director (EOD), capable of providing identification of objects as well as accurate 3D tracks. Following an introduction reviewing the design goals for the equipment, the EOMS NG processing architecture is described (Image & Tracking Processes). Then, system performance is presented for different scenarios drawn from field tests.
Children with Autism Spectrum Disorder scan own-race faces differently from other-race faces.
Yi, Li; Quinn, Paul C; Fan, Yuebo; Huang, Dan; Feng, Cong; Joseph, Lisa; Li, Jiao; Lee, Kang
2016-01-01
It has been well documented that people recognize and scan other-race faces differently from faces of their own race. The current study examined whether this cross-racial difference in face processing found in the typical population also exists in individuals with Autism Spectrum Disorder (ASD). Participants included 5- to 10-year-old children with ASD (n=29), typically developing (TD) children matched on chronological age (n=29), and TD children matched on nonverbal IQ (n=29). Children completed a face recognition task in which they were asked to memorize and recognize both own- and other-race faces while their eye movements were tracked. We found no recognition advantage for own-race faces relative to other-race faces in any of the three groups. However, eye-tracking results indicated that, similar to TD children, children with ASD exhibited a cross-racial face-scanning pattern: they looked at the eyes of other-race faces longer than at those of own-race faces, whereas they looked at the mouth of own-race faces longer than at that of other-race faces. The findings suggest that although children with ASD have difficulty with processing some aspects of faces, their ability to process face race information is relatively spared. Copyright © 2015 Elsevier Inc. All rights reserved.
Dense-HOG-based drift-reduced 3D face tracking for infant pain monitoring
NASA Astrophysics Data System (ADS)
Saeijs, Ronald W. J. J.; Tjon A Ten, Walther E.; de With, Peter H. N.
2017-03-01
This paper presents a new algorithm for 3D face tracking intended for clinical infant pain monitoring. The algorithm uses a cylinder head model and 3D head pose recovery by alignment of dynamically extracted templates based on dense-HOG features. The algorithm includes extensions for drift reduction, using re-registration in combination with multi-pose state estimation by means of a square-root unscented Kalman filter. The paper reports experimental results on videos of moving infants in hospital who are relaxed or in pain. Results show good tracking behavior for poses up to 50 degrees from upright-frontal. In terms of eye location error relative to inter-ocular distance, the mean tracking error is below 9%.
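The role of the dense-HOG features in the alignment step can be sketched with scikit-image as shown below: extract HOG descriptors over a face patch and score candidate alignments by descriptor distance. The cylinder head model, square-root unscented Kalman filter, and re-registration logic from the paper are omitted, and the HOG parameters are generic defaults rather than the authors' settings.

```python
# Dense-HOG template comparison sketch; both patches must have identical shapes
# so their descriptors are the same length.
import numpy as np
from skimage.feature import hog

def hog_descriptor(patch):
    """patch: 2-D grayscale array; returns a dense HOG feature vector."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def alignment_cost(template_patch, candidate_patch):
    """Lower cost means the candidate pose projects the template more accurately."""
    return np.linalg.norm(hog_descriptor(template_patch) - hog_descriptor(candidate_patch))
```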
Righi, Giulia; Westerlund, Alissa; Congdon, Eliza L.; Troller-Renfree, Sonya; Nelson, Charles A.
2013-01-01
The goal of the present study was to investigate infants' processing of female and male faces. We used an event-related potential (ERP) priming task, as well as a visual-paired comparison (VPC) eye tracking task, to explore how 7-month-old "female expert" infants differed in their responses to faces of different genders. Female faces elicited larger N290 amplitudes than male faces. Furthermore, infants showed a priming effect for female faces only, whereby the N290 was significantly more negative for novel females compared to primed female faces. The VPC experiment was designed to test whether infants could reliably discriminate between two female and two male faces. Analyses showed that infants were able to differentiate faces of both genders. The results of the present study suggest that 7-month-olds with a large amount of female face experience show a processing advantage for forming a neural representation of female faces, compared to male faces. However, the enhanced neural sensitivity to the repetition of female faces is not due to the infants' inability to discriminate male faces. Instead, the combination of results from the two tasks suggests that the differential processing of female faces may be a signature of expert-level processing. PMID:24200421
Caucasian Infants Scan Own- and Other-Race Faces Differently
Wheeler, Andrea; Anzures, Gizelle; Quinn, Paul C.; Pascalis, Olivier; Omrin, Danielle S.; Lee, Kang
2011-01-01
Young infants are known to prefer own-race faces to other race faces and recognize own-race faces better than other-race faces. However, it is entirely unclear as to whether infants also attend to different parts of own- and other-race faces differently, which may provide an important clue as to how and why the own-race face recognition advantage emerges so early. The present study used eye tracking methodology to investigate whether 6- to 10-month-old Caucasian infants (N = 37) have differential scanning patterns for dynamically displayed own- and other-race faces. We found that even though infants spent a similar amount of time looking at own- and other-race faces, with increased age, infants increasingly looked longer at the eyes of own-race faces and less at the mouths of own-race faces. These findings suggest experience-based tuning of the infant's face processing system to optimally process own-race faces that are different in physiognomy from other-race faces. In addition, the present results, taken together with recent own- and other-race eye tracking findings with infants and adults, provide strong support for an enculturation hypothesis that East Asians and Westerners may be socialized to scan faces differently due to each culture's conventions regarding mutual gaze during interpersonal communication. PMID:21533235
Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition
Chakraborty, Anya; Chakrabarti, Bhismadev
2018-01-01
We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at upper part of the faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain specific manner. PMID:29487554
Eye Tracking Reveals a Crucial Role for Facial Motion in Recognition of Faces by Infants
ERIC Educational Resources Information Center
Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang
2015-01-01
Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was…
Visual Persons Behavior Diary Generation Model based on Trajectories and Pose Estimation
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
The behavior patterns of persons are an important output of surveillance analysis. This paper focuses on a generation model for a visual diary of person behavior. The pipeline includes person detection, tracking, and behavior classification. The deep convolutional network YOLOv2 (You Only Look Once, version 2) is adopted for the person detection module. Multi-person tracking is built on this detection framework, with the Hungarian assignment algorithm used for matching. The person appearance model combines an HSV color model with a hash-code model, and person motion is estimated with a Kalman filter. Detected objects are matched to existing tracklets through a combined appearance and motion-location distance using the Hungarian assignment method. A long continuous trajectory for each person is obtained by a spatio-temporal linking algorithm, and face recognition information is used to assign an identity to the trajectory. The identified trajectories can then be used to generate a visual diary of person behavior based on scene context information and person action estimation. The relevant modules are tested on public data sets and on our own captured video sets. The test results show that the method can generate a visual person behavior diary with reasonable accuracy.
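The detection-to-tracklet matching step named above (Hungarian assignment over a combined appearance and motion cost) can be sketched with scipy as follows. The cost weights and gating threshold are illustrative assumptions, and the Kalman prediction and appearance feature extraction that would feed this function are not shown.

```python
# Hungarian assignment of detections to tracklets using a motion + appearance cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(predicted_positions, appearance_feats, det_positions, det_feats,
              w_motion=1.0, w_app=10.0, max_cost=80.0):
    """Inputs are numpy arrays (one row per track/detection).
    Returns a list of (track_idx, det_idx) pairs whose cost is below the gate."""
    n_t, n_d = len(predicted_positions), len(det_positions)
    cost = np.zeros((n_t, n_d))
    for i in range(n_t):
        for j in range(n_d):
            motion = np.linalg.norm(predicted_positions[i] - det_positions[j])
            appearance = np.linalg.norm(appearance_feats[i] - det_feats[j])
            cost[i, j] = w_motion * motion + w_app * appearance
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]
```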
49 CFR 213.143 - Frog guard rails and guard faces; gage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Frog guard rails and guard faces; gage. 213.143... and guard faces; gage. The guard check and guard face gages in frogs shall be within the limits... frog to the guard line 1 of its guard rail or guarding face, measured across the track at right angles...
49 CFR 213.143 - Frog guard rails and guard faces; gage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Frog guard rails and guard faces; gage. 213.143... and guard faces; gage. The guard check and guard face gages in frogs shall be within the limits... frog to the guard line 1 of its guard rail or guarding face, measured across the track at right angles...
49 CFR 213.143 - Frog guard rails and guard faces; gage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Frog guard rails and guard faces; gage. 213.143... and guard faces; gage. The guard check and guard face gages in frogs shall be within the limits... frog to the guard line 1 of its guard rail or guarding face, measured across the track at right angles...
49 CFR 213.143 - Frog guard rails and guard faces; gage.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Frog guard rails and guard faces; gage. 213.143... and guard faces; gage. The guard check and guard face gages in frogs shall be within the limits... frog to the guard line 1 of its guard rail or guarding face, measured across the track at right angles...
49 CFR 213.143 - Frog guard rails and guard faces; gage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Frog guard rails and guard faces; gage. 213.143... and guard faces; gage. The guard check and guard face gages in frogs shall be within the limits... frog to the guard line 1 of its guard rail or guarding face, measured across the track at right angles...
Can Individuals with Autism Abstract Prototypes of Natural Faces?
ERIC Educational Resources Information Center
Gastgeb, Holly Zajac; Wilkinson, Desiree A.; Minshew, Nancy J.; Strauss, Mark S.
2011-01-01
There is a growing amount of evidence suggesting that individuals with autism have difficulty with face processing. One basic cognitive ability that may underlie face processing difficulties is the ability to abstract a prototype. The current study examined prototype formation with natural faces using eye-tracking in high-functioning adults with…
The Role of Early Visual Attention in Social Development
ERIC Educational Resources Information Center
Wagner, Jennifer B.; Luyster, Rhiannon J.; Yim, Jung Yeon; Tager-Flusberg, Helen; Nelson, Charles A.
2013-01-01
Faces convey important information about the social environment, and even very young infants are preferentially attentive to face-like over non-face stimuli. Eye-tracking studies have allowed researchers to examine which features of faces infants find most salient across development, and the present study examined scanning of familiar (i.e.,…
Biracial and Monoracial Infant Own-Race Face Perception: An Eye Tracking Study
ERIC Educational Resources Information Center
Gaither, Sarah E.; Pauker, Kristin; Johnson, Scott P.
2012-01-01
We know that early experience plays a crucial role in the development of face processing, but we know little about how infants learn to distinguish faces from different races, especially for non-Caucasian populations. Moreover, it is unknown whether differential processing of different race faces observed in typically studied monoracial infants…
Vanderwert, Ross E; Westerlund, Alissa; Montoya, Lina; McCormick, Sarah A; Miguel, Helga O; Nelson, Charles A
2015-10-01
Previous studies in infants have shown that face-sensitive components of the ongoing electroencephalogram (the event-related potential, or ERP) are larger in amplitude to negative emotions (e.g., fear, anger) versus positive emotions (e.g., happy). However, it is still unclear whether the negative emotions linked with the face or the negative emotions alone contribute to these amplitude differences. We simultaneously recorded infant looking behaviors (via eye-tracking) and face-sensitive ERPs while 7-month-old infants viewed human faces or animals displaying happy, fear, or angry expressions. We observed that the amplitude of the N290 was greater (i.e., more negative) to angry animals compared to happy or fearful animals; no such differences were obtained for human faces. Eye-tracking data highlighted the importance of the eye region in processing emotional human faces. Infants that spent more time looking to the eye region of human faces showing fearful or angry expressions had greater N290 or P400 amplitudes, respectively. © 2014 Wiley Periodicals, Inc.
An audiovisual emotion recognition system
NASA Astrophysics Data System (ADS)
Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun
2007-12-01
Human emotions can be expressed through many biological signals. Speech and facial expression are two of them; both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system has been developed and is presented in this paper. The system is designed for real-time use and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance. Rough-set-based feature selection is a good method for dimensionality reduction, so 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected when the synchronized speech and video streams are fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multi-module fused recognition will become the trend of emotion recognition in the future.
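The optical-flow-based feature-tracking module referred to above is commonly realized with a pyramidal Lucas-Kanade tracker; the OpenCV sketch below shows that step in isolation. It assumes the aligned facial landmarks seeding prev_pts come from the shape-learning module, which is not reproduced here, and the tracker parameters are generic defaults.

```python
# Pyramidal Lucas-Kanade tracking of facial feature points between two frames.
import cv2
import numpy as np

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

def track_features(prev_gray, next_gray, prev_pts):
    """prev_pts: (N, 1, 2) float32 array of facial feature points in prev_gray."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                   prev_pts, None, **lk_params)
    good = status.ravel() == 1            # keep only successfully tracked points
    return next_pts[good], prev_pts[good]
```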
Tools for Protecting the Privacy of Specific Individuals in Video
NASA Astrophysics Data System (ADS)
Chen, Datong; Chang, Yi; Yan, Rong; Yang, Jie
2007-12-01
This paper presents a system for protecting the privacy of specific individuals in video recordings. We address the following two problems: automatic people identification with limited labeled data, and human body obscuring with preserved structure and motion information. In order to address the first problem, we propose a new discriminative learning algorithm to improve people identification accuracy using limited training data labeled from the original video and imperfect pairwise constraints labeled from face obscured video data. We employ a robust face detection and tracking algorithm to obscure human faces in the video. Our experiments in a nursing home environment show that the system can obtain a high accuracy of people identification using limited labeled data and noisy pairwise constraints. The study result indicates that human subjects can perform reasonably well in labeling pairwise constraints with the face masked data. For the second problem, we propose a novel method of body obscuring, which removes the appearance information of the people while preserving rich structure and motion information. The proposed approach provides a way to minimize the risk of exposing the identities of the protected people while maximizing the use of the captured data for activity/behavior analysis.
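As the simplest possible illustration of the detect-then-obscure pipeline, the sketch below pixelates every face found by a stock OpenCV detector. The paper's actual body-obscuring method instead removes appearance while preserving structure and motion, so this stands in only for the general idea.

```python
# Detect faces and pixelate them by heavy down/up-sampling (illustrative only).
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def obscure_faces(frame, block=16):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        small = cv2.resize(roi, (max(1, w // block), max(1, h // block)))
        frame[y:y + h, x:x + w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame
```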
Is there an age-related positivity effect in visual attention? A comparison of two methodologies.
Isaacowitz, Derek M; Wadlinger, Heather A; Goren, Deborah; Wilson, Hugh R
2006-08-01
Research suggests a positivity effect in older adults' memory for emotional material, but the evidence from the attentional domain is mixed. The present study combined 2 methodologies for studying preferences in visual attention, eye tracking, and dot-probe, as younger and older adults viewed synthetic emotional faces. Eye tracking most consistently revealed a positivity effect in older adults' attention, so that older adults showed preferential looking toward happy faces and away from sad faces. Dot-probe results were less robust, but in the same direction. Methodological and theoretical implications for the study of socioemotional aging are discussed. (c) 2006 APA, all rights reserved
Children and Adults Scan Faces of Own and Other Races Differently
Hu, Chao; Wang, Qiandong; Fu, Genyue; Quinn, Paul C.; Lee, Kang
2014-01-01
Extensive behavioral and neural evidence suggests that processing of own-race faces differs from that of other-race faces in both adults and infants. However, little research has examined whether and how children scan faces of own and other races differently for face recognition. In this eye-tracking study, Chinese children aged from 4 to 7 years and Chinese adults were asked to remember Chinese and Caucasian faces. None of the participants had any direct contact with foreign individuals. Multi-method analyses of eye-tracking data revealed that regardless of age group, proportional fixation duration on the eyes of Chinese faces was significantly lower than that on the eyes of Caucasian faces, whereas proportional fixation duration on the nose and mouth of Chinese faces was significantly higher than that on the nose and mouth of Caucasian faces. In addition, the amplitude of saccades on Chinese faces was significantly lower than that on Caucasian faces, potentially reflecting finer-grained processing for own-race faces. Moreover, adults’ fixation duration/saccade numbers on the whole faces, proportional fixation percentage on the nose, proportional number of saccades between AOIs, and accuracy in recognizing faces were higher than those of children. These results together demonstrated that an abundance of visual experience with own-race faces and a lack of it with other-race faces may result in differential facial scanning in both adults and children. Furthermore, the increased experience of processing faces may result in a more holistic and advanced scanning strategy in Chinese adults. PMID:24929225
A causal relationship between face-patch activity and face-detection behavior.
Sadagopan, Srivatsun; Zarco, Wilbert; Freiwald, Winrich A
2017-04-04
The primate brain contains distinct areas densely populated by face-selective neurons. One of these, face-patch ML, contains neurons selective for contrast relationships between face parts. Such contrast-relationships can serve as powerful heuristics for face detection. However, it is unknown whether neurons with such selectivity actually support face-detection behavior. Here, we devised a naturalistic face-detection task and combined it with fMRI-guided pharmacological inactivation of ML to test whether ML is of critical importance for real-world face detection. We found that inactivation of ML impairs face detection. The effect was anatomically specific, as inactivation of areas outside ML did not affect face detection, and it was categorically specific, as inactivation of ML impaired face detection while sparing body and object detection. These results establish that ML function is crucial for detection of faces in natural scenes, performing a critical first step on which other face processing operations can build.
Renewal of the Attentive Sensing Project
2006-02-07
decisions about target presence or absence, is denoted track before detect. We have investigated joint tracking and detection in the context of the foveal… computationally tractable bounds. Task 2: Sensor Configuration for Tracking and Track Before Detect. Task 2 consisted of investigation of attentive… strategy to multiple targets and to track before detect sensors. To apply principles developed in the context of foveal sensors to more immediately…
ERIC Educational Resources Information Center
Kubicek, Claudia; de Boisferon, Anne Hillairet; Dupierrix, Eve; Loevenbruck, Helene; Gervain, Judit; Schwarzer, Gudrun
2013-01-01
The present eye-tracking study aimed to investigate the impact of auditory speech information on 12-month-olds' gaze behavior to silently-talking faces. We examined German infants' face-scanning behavior to side-by-side presentation of a bilingual speaker's face silently speaking German utterances on one side and French on the other side, before…
Grossman, Ruth B; Steinhart, Erin; Mitchell, Teresa; McIlvane, William
2015-06-01
Conversation requires integration of information from faces and voices to fully understand the speaker's message. To detect auditory-visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker. We showed participants (individuals with and without high-functioning autism (HFA) aged 8-19) a split-screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track and synchrony switched between the two speakers every few seconds. Participants were asked to watch the video without further instructions (implicit condition) or to specifically watch the in-synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted. Both groups looked at the in-synch video significantly more with explicit instructions. However, participants with HFA looked at the in-synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task. Importantly, the HFA group looked significantly less at the mouth than their TD peers, and significantly more at non-face regions of the image. There were no between-group differences for eye-directed gaze. Overall, individuals with HFA spend less time looking at the crucially important mouth region of the face during auditory-visual speech integration, which is maladaptive gaze behavior for this type of task. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Gao, Simon S.; Liu, Li; Bailey, Steven T.; Flaxel, Christina J.; Huang, David; Li, Dengwang; Jia, Yali
2016-07-01
Quantification of choroidal neovascularization (CNV) as visualized by optical coherence tomography angiography (OCTA) may have importance clinically when diagnosing or tracking disease. Here, we present an automated algorithm to quantify the vessel skeleton of CNV as vessel length. Initial segmentation of the CNV on en face angiograms was achieved using saliency-based detection and thresholding. A level set method was then used to refine vessel edges. Finally, a skeleton algorithm was applied to identify vessel centerlines. The algorithm was tested on nine OCTA scans from participants with CNV and comparisons of the algorithm's output to manual delineation showed good agreement.
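A rough scikit-image sketch of the final skeletonization and length measurement is given below. It substitutes a plain Otsu threshold for the paper's saliency-based segmentation and level-set edge refinement, and the pixel size is an assumed value, so the numbers it produces are illustrative only.

```python
# Skeletonize a thresholded en face angiogram and report approximate vessel length.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def cnv_vessel_length(en_face_angiogram, pixel_size_mm=0.012):
    """en_face_angiogram: 2-D float array of OCTA decorrelation values."""
    mask = en_face_angiogram > threshold_otsu(en_face_angiogram)
    skeleton = skeletonize(mask)              # 1-pixel-wide vessel centerlines
    return skeleton.sum() * pixel_size_mm     # approx. total vessel length in mm
```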
Subliminal smells can guide social preferences.
Li, Wen; Moallem, Isabel; Paller, Ken A; Gottfried, Jay A
2007-12-01
It is widely accepted that unconscious processes can modulate judgments and behavior, but do such influences affect one's daily interactions with other people? Given that olfactory information has relatively direct access to cortical and subcortical emotional circuits, we tested whether the affective content of subliminal odors alters social preferences. Participants rated the likeability of neutral faces after smelling pleasant, neutral, or unpleasant odors delivered below detection thresholds. Odor affect significantly shifted likeability ratings only for those participants lacking conscious awareness of the smells, as verified by chance-level trial-by-trial performance on an odor-detection task. Across participants, the magnitude of this priming effect decreased as sensitivity for odor detection increased. In contrast, heart rate responses tracked odor valence independently of odor awareness. These results indicate that social preferences are subject to influences from odors that escape awareness, whereas the availability of conscious odor information may disrupt such effects.
Gao, Han; Li, Jingwen
2014-06-19
A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Built on a particle filter (PF) based track-before-detect (TBD) algorithm, the approach can detect and track low signal-to-noise ratio (SNR) moving targets with SAR systems, for which the traditional track-after-detect (TAD) approach is inadequate. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved during tracking, which leads directly to a correct estimate. With a sub-area substituted for the whole area when calculating the likelihood ratio, and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and its performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB.
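For orientation only, a heavily simplified particle-filter track-before-detect loop on an image stack; raw pixel amplitude replaces the likelihood ratio that the paper derives from the SAR moving-target signal model, and the azimuth/velocity ambiguity resolution and sub-area likelihood computation are omitted. All names, thresholds, and noise settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_tbd(frames, n_particles=2000, noise_sigma=1.0, detect_threshold=1.5):
    """Simplified particle-filter track-before-detect over frames shaped (T, H, W)."""
    T, H, W = frames.shape
    # State per particle: [row, col, v_row, v_col]; initially spread over the scene.
    state = np.column_stack([rng.uniform(0, H, n_particles),
                             rng.uniform(0, W, n_particles),
                             rng.normal(0, 1, size=(n_particles, 2))])
    weights = np.full(n_particles, 1.0 / n_particles)
    output = []
    for frame in frames:
        # Predict with a near-constant-velocity model plus process noise.
        state[:, 0:2] += state[:, 2:4] + rng.normal(0, noise_sigma, size=(n_particles, 2))
        rows = np.clip(state[:, 0], 0, H - 1).astype(int)
        cols = np.clip(state[:, 1], 0, W - 1).astype(int)
        # Update: pixel amplitude stands in for the likelihood ratio that the paper
        # derives from the SAR moving-target signal model.
        weights *= np.exp(frame[rows, cols] - frame.mean())
        ratio = weights.sum()                       # approximate per-frame likelihood ratio
        weights /= ratio
        detected = ratio > detect_threshold         # declare a target when evidence is high
        output.append((detected, weights @ state))  # weighted-mean state estimate
        # Resample to avoid particle degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        state = state[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return output
```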
Embracing Non-Tenure Track Faculty: Changing Campuses for the New Faculty Majority
ERIC Educational Resources Information Center
Kezar, Adrianna, Ed.
2012-01-01
The nature of the higher education faculty workforce is radically and fundamentally changing from primarily full-time tenured faculty to non-tenure track faculty. This new faculty majority faces common challenges, including short-term contracts, limited support on campus, and lack of a professional career track. "Embracing Non-Tenure Track…
Submarine Combat Systems Engineering Project Capstone Project
2011-06-06
Excerpt (fragmentary): the combat system's passive sensors (sonar, imaging, Electronic Surveillance (ES) and communications) passively detect contacts, which can be sensed by the system as either surface or…, within a Search, Detect, Identify, Track, Decide, Engage, Assess functional flow; the remainder of the excerpt is diagram residue mapping sensor types (SONAR, imagery, EW, ESM) to these functions.
Human and Animal Sentinels for Shared Health Risks
Rabinowitz, Peter; Scotch, Matthew; Conti, Lisa
2009-01-01
The tracking of sentinel health events in humans in order to detect and manage disease risks facing a larger population is a well-accepted technique applied to influenza, occupational conditions, and emerging infectious diseases. Similarly, animal health professionals routinely track disease events in sentinel animal colonies and sentinel herds. The use of animals as sentinels for human health threats, or of humans as sentinels for animal disease risk, dates back at least to the era when coal miners brought caged canaries into mines to provide early warning of toxic gases. Yet the full potential of linking animal and human health information to provide warning of such “shared risks” from environmental hazards has not been realized. Reasons appear to include the professional segregation of human and animal health communities, the separation of human and animal surveillance data, and evidence gaps in the linkages between human and animal responses to environmental health hazards. The One Health initiative and growing international collaboration in response to pandemic threats, coupled with developments in the fields of informatics and genomics, hold promise for improved sharing of knowledge about sentinel events in order to detect and reduce environmental health threats shared between species. PMID:20148187
Visual selective attention in body dysmorphic disorder, bulimia nervosa and healthy controls.
Kollei, Ines; Horndasch, Stefanie; Erim, Yesim; Martin, Alexandra
2017-01-01
Cognitive behavioral models postulate that selective attention plays an important role in the maintenance of body dysmorphic disorder (BDD). It is suggested that individuals with BDD overfocus on perceived defects in their appearance, which may contribute to the excessive preoccupation with their appearance. The present study used eye tracking to examine visual selective attention in individuals with BDD (n=19), as compared to individuals with bulimia nervosa (BN) (n=21) and healthy controls (HCs) (n=21). Participants completed interviews, questionnaires, rating scales and an eye tracking task: Eye movements were recorded while participants viewed photographs of their own face and attractive as well as unattractive other faces. Eye tracking data showed that BDD and BN participants focused less on their self-rated most attractive facial part than HCs. Scanning patterns in own and other faces showed that BDD and BN participants paid as much attention to attractive as to unattractive features in their own face, whereas they focused more on attractive features in attractive other faces. HCs paid more attention to attractive features in their own face and did the same in attractive other faces. Results indicate an attentional bias in BDD and BN participants manifesting itself in a neglect of positive features compared to HCs. Perceptual retraining may be an important aspect to focus on in therapy in order to overcome the neglect of positive facial aspects. Future research should aim to disentangle attentional processes in BDD by examining the time course of attentional processing. Copyright © 2016 Elsevier Inc. All rights reserved.
Task-irrelevant own-race faces capture attention: eye-tracking evidence.
Cao, Rong; Wang, Shuzhen; Rao, Congquan; Fu, Jia
2013-04-01
To investigate attentional capture by face's race, the current study recorded saccade latencies of eye movement measurements in an inhibition of return (IOR) task. Compared to Caucasian (other-race) faces, Chinese (own-race) faces elicited longer saccade latency. This phenomenon disappeared when faces were inverted. The results indicated that own-race faces capture attention automatically with high-level configural processing. © 2013 The Authors. Scandinavian Journal of Psychology © 2013 The Scandinavian Psychological Associations.
The Face Perception System becomes Species-Specific at 3 Months: An Eye-Tracking Study
ERIC Educational Resources Information Center
Di Giorgio, Elisa; Meary, David; Pascalis, Olivier; Simion, Francesca
2013-01-01
The current study aimed at investigating own- vs. other-species preferences in 3-month-old infants. The infants' eye movements were recorded during a visual preference paradigm to assess whether they show a preference for own-species faces when contrasted with other-species faces. Human and monkey faces, equated for all low-level perceptual…
Development of Visual Preference for Own- versus Other-Race Faces in Infancy
ERIC Educational Resources Information Center
Liu, Shaoying; Xiao, Wen Sara; Xiao, Naiqi G.; Quinn, Paul C.; Zhang, Yueyan; Chen, Hui; Ge, Liezhong; Pascalis, Olivier; Lee, Kang
2015-01-01
Previous research has shown that 3-month-olds prefer own- over other-race faces. The current study used eye-tracking methodology to examine how this visual preference develops with age beyond 3 months and how infants differentially scan between own- and other-race faces when presented simultaneously. We showed own- versus other-race face pairs to…
A Meta-Analytic and Qualitative Review of Online versus Face-to-Face Problem-Based Learning
ERIC Educational Resources Information Center
Jurewitsch, Brian
2012-01-01
Problem-based learning (PBL) is an instructional strategy that is poised for widespread application in the current, growing, on-line digital learning environment. While enjoying a track record as a defensible strategy in face-to-face learning settings, the research evidence is not clear regarding PBL in on-line environments. A review of the…
Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.
Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin
2018-06-22
Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, a two-step procedure consisting of a detection module and a tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel Correlation Filter tracker based on compressed deep Convolutional Neural Network (CNN) features. By carefully integrating these two modules, the proposed multi-object tracking approach can re-identify (ReID) a tracked object once it is lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
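The correlation-filter component of such a tracker can be illustrated with a basic single-channel MOSSE-style filter; the paper's tracker operates on compressed deep CNN features rather than raw pixels, so this is only a hedged sketch with illustrative parameters.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    """Train a MOSSE-style correlation filter on one grayscale template patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Desired response: a Gaussian peak centred on the target.
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)
    # Closed-form filter in the Fourier domain, regularised by lam.
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def locate(filter_conj, search_patch):
    """Correlate a new search patch with the filter and return the peak offset."""
    response = np.real(np.fft.ifft2(filter_conj * np.fft.fft2(search_patch)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dy - search_patch.shape[0] // 2, dx - search_patch.shape[1] // 2
```

In practice the template is updated online each frame, and the per-channel features would come from the compressed CNN rather than raw intensities.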
Heathcote, L C; Lau, J Y F; Mueller, S C; Eccleston, C; Fox, E; Bosmans, M; Vervoort, T
2017-02-01
Pain is common and can be debilitating in childhood. Theoretical models propose that attention to pain plays a key role in pain outcomes; however, very little research has investigated this in youth. This study examined how anxiety-related variables and attention control interacted to predict children's attention to pain cues using eye-tracking methodology, and their pain tolerance on the cold pressor test (CPT). Children aged 8-17 years had their eye-gaze tracked whilst they viewed photographs of other children displaying painful facial expressions during the CPT, before completing the CPT themselves. Children also completed self-report measures of anxiety and attention control. Findings indicated that anxiety and attention control did not impact children's initial fixations on pain or neutral faces, but did impact how long they dwelled on pain versus neutral faces. For children reporting low levels of attention control, higher anxiety was associated with less dwell time on pain faces as opposed to neutral faces, and the opposite pattern was observed for children with high attention control. Anxiety and attention control also interacted to predict pain outcomes. For children with low attention control, increasing anxiety was associated with anticipating more pain and tolerating pain for less time. This is the first study to examine children's attention to pain cues using eye-tracking technology in the context of a salient painful experience. Data suggest that attention control is an important moderator of anxiety on multiple outcomes relevant to young people's pain experiences. This study uses eye tracking to study attention to pain cues in children. Attention control is an important moderator of anxiety on attention bias to pain and tolerance of cold pressor pain in youth. © 2016 European Pain Federation - EFIC®.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Dolly, S; Anastasio, M
Purpose: In-treatment dynamic cine images, provided by the first commercially available MRI-guided radiotherapy system, allow physicians to observe intrafractional motion of head and neck (H&N) internal structures. Nevertheless, high anatomical complexity and relatively poor cine image contrast/resolution have complicated automatic intrafractional motion evaluation. We proposed an integrated model-based approach to automatically delineate and analyze moving structures from on-board cine images. Methods: The H&N upper airway, a complex and highly deformable region wherein severe internal motion often occurs, was selected as the target to be tracked. To reliably capture its motion, a hierarchical structure model containing three statistical shapes (face, face-jaw, and face-jaw-palate) was first built from a set of manually delineated shapes using principal component analysis. An integrated model-fitting algorithm was then employed to align the statistical shapes to the first cine frame to be analyzed, and multi-feature level-set contour propagation was performed to identify the airway shape change in the remaining frames. Ninety sagittal cine MR image sets, acquired from three H&N cancer patients, were utilized to demonstrate this approach. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 20 randomly selected images from each patient. The resulting Dice similarity coefficient (93.28 ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement with the manual results. Intrafractional displacements of the anterior, posterior, inferior, and superior airway boundaries were observed, with values of 2.62 ± 2.92, 1.78 ± 1.43, 3.51 ± 3.99, and 0.68 ± 0.89 mm, respectively. The H&N airway motion was found to vary across directions, fractions, and patients, and to be highly correlated with patients' respiratory frequency. Conclusion: We proposed an integrated computational approach which, for the first time, allows automatic identification of the H&N upper airway and quantification of in-treatment H&N internal motion in real time. This approach can be applied to track the motion of other structures and provide guidance on patient-specific prediction of intra-/inter-fractional structure displacements.
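A sketch of the statistical-shape (point-distribution) model building step, assuming the manually delineated contours have already been aligned (e.g. by Procrustes analysis); the hierarchical face/face-jaw/face-jaw-palate models, the model-fitting, and the multi-feature level-set propagation are not reproduced, and all names are illustrative.

```python
import numpy as np

def build_shape_model(shapes, var_keep=0.95):
    """Build a point-distribution (statistical shape) model from training shapes.

    shapes: array (n_samples, n_landmarks, 2) of manually delineated contours,
    assumed already aligned to a common pose.
    """
    X = shapes.reshape(len(shapes), -1)            # flatten to (n_samples, 2*n_landmarks)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_keep) + 1
    return mean, Vt[:k], var[:k]                   # mean shape, modes, mode variances

def synthesize(mean, modes, coeffs):
    """Generate a new shape as the mean plus a weighted sum of the modes."""
    return (mean + coeffs @ modes).reshape(-1, 2)
```

New airway shapes are then expressed as the mean shape plus a small number of mode coefficients, which is what a subsequent model-fitting step would adjust.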
Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir
2009-06-01
Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach to modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. An initial 3-D RMS error of 6.91 mm was reduced to 3.15 mm.
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted according to its local image complexity and target existence probability. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the single-frame detection result as input, and provides the corresponding target existence probabilities back to the detection stage. Unlike fixed-size methods, the proposed method can accommodate size-varying targets because it makes no special assumption about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.
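The target-background separation step can be illustrated with a generic low-rank plus sparse decomposition solved by inexact augmented Lagrange multipliers, applied to a single image block; the paper's complexity and existence-probability weighting of blocks is omitted, and the parameter choices follow common robust-PCA defaults rather than the paper.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca_ialm(D, max_iter=100, tol=1e-6):
    """Decompose block D into low-rank background A and sparse targets E
    by inexact augmented Lagrange multipliers (robust PCA)."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))      # standard sparsity weight
    Y = np.zeros_like(D)                # Lagrange multiplier
    E = np.zeros_like(D)
    mu, rho = 1.25 / (np.linalg.norm(D, 2) + 1e-12), 1.5
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding.
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * soft(s, 1.0 / mu)) @ Vt
        # Sparse update: elementwise shrinkage keeps small bright targets.
        E = soft(D - A + Y / mu, lam / mu)
        R = D - A - E                   # constraint residual
        Y += mu * R
        mu *= rho
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return A, E                         # background, candidate small targets
```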
Managing Information Technology: Facing the Issues. Track VI: Academic Computing Issues.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Eight papers making up Track VI of the 1989 conference of the Professional Association for the Management of Information Technology in Higher Education (known as CAUSE, an acronym of the association's former name) are presented in this document. The focus of Track VI is on academic computing issues, and the papers include: "Loan-a-Mac: A…
Managing Information Technology: Facing the Issues. Track II: Funding and Accountability Issues.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Eight papers making up Track II of the 1989 conference of the Professional Association for the Management of Information Technology in Higher Education (known as CAUSE, an acronym for the association's former name) are presented in this document. The focus of Track II is on funding and accountability issues, and the papers include: "A…
Managing Information Technology: Facing the Issues. Track III: Organization and Personnel Issues.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Seven papers making up Track III of the 1989 conference of the Professional Association for the Management of Information Technology in Higher Education (known as CAUSE, an acronym of the association's former name) are presented in this document. The focus of Track III is on organization and personnel issues, and the papers include: "How to…
Managing Information Technology: Facing the Issues. Track IV: Policy and Standards Issues.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Seven papers making up Track IV of the 1989 conference of the Professional Association for the Management of Information Technology in Higher Education (known as CAUSE; an acronym of the association's former name) are presented in this document. The focus of Track IV is on policy and standards issues and the papers include: "Developing…
Managing Information Technology: Facing the Issues. Track VII: Applications and Technology Issues.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Eight papers making up Track VII of the 1989 conference of the Professional Association for the Management of Information Technology in Higher Education (known as CAUSE, an acronym of the association's former name) are presented in this document. The focus of Track VII is on applications and technology issues, and the papers include: "The…
Impaired face detection may explain some but not all cases of developmental prosopagnosia.
Dalrymple, Kirsten A; Duchaine, Brad
2016-05-01
Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.
Greater sensitivity of the cortical face processing system to perceptually-equated face detection
Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.
2015-01-01
Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the role that the cortical face processing system, which has been identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA) and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952
Real-time object detection, tracking and occlusion reasoning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divakaran, Ajay; Yu, Qian; Tamrakar, Amir
A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
ERIC Educational Resources Information Center
Vabalas, Andrius; Freeth, Megan
2016-01-01
The current study investigated whether the amount of autistic traits shown by an individual is associated with viewing behaviour during a face-to-face interaction. The eye movements of 36 neurotypical university students were recorded using a mobile eye-tracking device. High amounts of autistic traits were neither associated with reduced looking…
Development of Face Recognition in Infant Chimpanzees (Pan Troglodytes)
ERIC Educational Resources Information Center
Myowa-Yamakoshi, M.; Yamaguchi, M.K.; Tomonaga, M.; Tanaka, M.; Matsuzawa, T.
2005-01-01
In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1-18 weeks, using preferential-looking procedures that measured the infants' eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of the mother of each infant and an "average" chimpanzee face using…
Identity from Variation: Representations of Faces Derived from Multiple Instances
ERIC Educational Resources Information Center
Burton, A. Mike; Kramer, Robin S. S.; Ritchie, Kay L.; Jenkins, Rob
2016-01-01
Research in face recognition has tended to focus on discriminating between individuals, or "telling people apart." It has recently become clear that it is also necessary to understand how images of the same person can vary, or "telling people together." Learning a new face, and tracking its representation as it changes from…
The wide window of face detection.
Hershler, Orit; Golan, Tal; Bentin, Shlomo; Hochstein, Shaul
2010-08-20
Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.
Barber, Anjuli L. A.; Randi, Dania; Müller, Corsin A.; Huber, Ludwig
2016-01-01
Of all non-human animals, dogs are very likely the best decoders of human behavior. In addition to a high sensitivity to human attentive status and to ostensive cues, they are able to distinguish between individual human faces and even between human facial expressions. However, so far little is known about how they process human faces and to what extent this is influenced by experience. Here we present an eye-tracking study with dogs from two different living environments with varying experience of humans: pet dogs and lab dogs. The dogs were shown pictures of familiar and unfamiliar human faces expressing four different emotions. The results, extracted from several different eye-tracking measurements, revealed pronounced differences in the face processing of pet and lab dogs, thus indicating an influence of the amount of exposure to humans. In addition, there was some evidence for the influences of both the familiarity and the emotional expression of the face, and strong evidence for a left gaze bias. These findings, together with recent evidence for the dog's ability to discriminate human facial expressions, indicate that dogs are sensitive to some emotions expressed in human faces. PMID:27074009
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
Automatically Detect and Track Multiple Fish Swimming in Shallow Water with Frequent Occlusion
Qian, Zhi-Ming; Cheng, Xi En; Chen, Yan Qiu
2014-01-01
Due to its universality, swarm behavior in nature attracts much attention from scientists in many fields. Fish schools are examples of biological communities that demonstrate swarm behavior. The detection and tracking of fish in a school are of great significance for quantitative research on swarm behavior. However, unlike in other biological communities, the detection and tracking of a fish school pose three problems: variable appearance, complex motion and frequent occlusion. To solve these problems, we propose an effective method of fish detection and tracking. In this method, first, the fish head region is located through extremum detection and ellipse fitting; second, Kalman filtering and feature matching are used to track the target through complex motion; finally, using the feature information obtained during detection and tracking, the tracking problems caused by frequent occlusion are resolved through trajectory linking. We apply this method to track swimming fish schools of different densities. The experimental results show that the proposed method is both accurate and reliable. PMID:25207811
Exogenous Social Identity Cues Differentially Affect the Dynamic Tracking of Individual Target Faces
ERIC Educational Resources Information Center
Allen, Roy; Gabbert, Fiona
2013-01-01
We report on an experiment to investigate the top-down effect of exogenous social identity cues on a multiple-identity tracking task, a paradigm well suited to investigate the processes of binding identity to spatial locations. Here we simulated an eyewitness event in which dynamic targets, all to be tracked with equal effort, were identified from…
Rodríguez-Canosa, Gonzalo; Giner, Jaime del Cerro; Barrientos, Antonio
2014-01-01
The detection and tracking of mobile objects (DATMO) is progressively gaining importance for security and surveillance applications. This article proposes a set of new algorithms and procedures for detecting and tracking mobile objects by robots that work collaboratively as part of a multirobot system. These surveillance algorithms are conceived to work with data provided by long-distance range sensors and are intended for highly reliable object detection in wide outdoor environments. Contrary to most common approaches, in which detection and tracking are done by an integrated procedure, the approach proposed here relies on a modular structure, in which detection and tracking are carried out independently, and the latter may accept input data from different detection algorithms. Two movement detection algorithms have been developed for the detection of dynamic objects using static and/or mobile robots. The solution to the overall problem is based on the use of a Kalman filter to predict the next state of each tracked object. Additionally, new tracking algorithms capable of combining dynamic object lists coming from one or several sources complete the solution. The complementary performance of the separate modular structure for detection and identification is evaluated and, finally, a selection of test examples is discussed. PMID:24526305
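The per-object prediction step can be sketched as a constant-velocity Kalman filter in the map reference frame; the noise settings are illustrative, and the data-association and multi-source list-fusion logic are not shown.

```python
import numpy as np

class CvKalman:
    """Constant-velocity Kalman filter for one tracked object in the map frame.
    State: [x, y, vx, vy]; measurements are detected (x, y) positions."""

    def __init__(self, xy0, dt=0.1, q=0.5, r=1.0):
        self.x = np.array([xy0[0], xy0[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.Q = np.eye(4) * q          # process noise (simplified)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.R = np.eye(2) * r          # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]               # predicted position, used for data association

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x                 # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```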
NASA Astrophysics Data System (ADS)
Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard
2006-05-01
A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model combined with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are performed on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of the audio and video modalities for audio-visual speaker verification is compared with face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with the prospect of experimenting on the newly developed PDAtabase created within the scope of the SecurePhone project.
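A minimal sketch of GMM-based verification scoring with scikit-learn, standing in for the BECARS classifier; it assumes feature matrices with one row per frame (DCT face features or speech features), and the log-likelihood-ratio decision rule and constants are illustrative.

```python
from sklearn.mixture import GaussianMixture

def train_models(client_feats, world_feats, n_components=32):
    """Fit a client GMM and a background ("world") GMM on per-frame feature rows."""
    client = GaussianMixture(n_components, covariance_type='diag').fit(client_feats)
    world = GaussianMixture(n_components, covariance_type='diag').fit(world_feats)
    return client, world

def verify(client, world, test_feats, threshold=0.0):
    """Accept the claimed identity if the average log-likelihood ratio between
    the client and world models exceeds a decision threshold."""
    llr = client.score(test_feats) - world.score(test_feats)   # mean log-likelihood ratio
    return llr > threshold, llr
```

Audio-visual fusion could then combine the two modality scores, for example by a weighted sum of log-likelihood ratios before thresholding.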
Bushong, Eric A; Johnson, Donald D; Kim, Keun-Young; Terada, Masako; Hatori, Megumi; Peltier, Steven T; Panda, Satchidananda; Merkle, Arno; Ellisman, Mark H
2015-02-01
The recently developed three-dimensional electron microscopic (EM) method of serial block-face scanning electron microscopy (SBEM) has rapidly established itself as a powerful imaging approach. Volume EM imaging with this scanning electron microscopy (SEM) method requires intense staining of biological specimens with heavy metals to allow sufficient back-scatter electron signal and also to render specimens sufficiently conductive to control charging artifacts. These more extreme heavy metal staining protocols render specimens light opaque and make it much more difficult to track and identify regions of interest (ROIs) for the SBEM imaging process than for a typical thin section transmission electron microscopy correlative light and electron microscopy study. We present a strategy employing X-ray microscopy (XRM) both for tracking ROIs and for increasing the efficiency of the workflow used for typical projects undertaken with SBEM. XRM was found to reveal an impressive level of detail in tissue heavily stained for SBEM imaging, allowing for the identification of tissue landmarks that can be subsequently used to guide data collection in the SEM. Furthermore, specific labeling of individual cells using diaminobenzidine is detectable in XRM volumes. We demonstrate that tungsten carbide particles or upconverting nanophosphor particles can be used as fiducial markers to further increase the precision and efficiency of SBEM imaging.
Radar Detection of Marine Mammals
2011-09-30
Excerpt (fragmentary): describes adapting the BFT-BPT algorithm for use with the radar data; this track-before-detect algorithm had been effective in enhancing small but persistent signatures, and the BFT-BPT was next evaluated on the CEDAR data.
Penalty dynamic programming algorithm for dim targets detection in sensor systems.
Huang, Dayu; Xue, Anke; Guo, Yunfei
2012-01-01
In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD), called penalty DP-TBD (PDP-TBD), is proposed. The performance of the tracking techniques is used as feedback to the detection stage. The feedback takes the form of a penalty term in the merit function; the penalty term is a function of the candidate target state estimate, which can be obtained from the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD, and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint, namely that a sensor measurement can originate from one target or from clutter, is introduced to minimize track separation. Thus, the algorithm can be used in multi-target situations with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations.
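A bare-bones dynamic-programming TBD recursion helps fix ideas; the penalty feedback term that defines PDP-TBD is only indicated in a comment, backtracking of the winning track is omitted, and the threshold rule and velocity bound are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def dp_tbd(frames, vmax=2, threshold=None):
    """Basic dynamic-programming track-before-detect over frames shaped (T, H, W).

    vmax bounds the per-frame pixel displacement of the target. PDP-TBD would
    subtract a penalty term (a function of the current state estimate) from
    `merit` before the maximisation at each stage; that term is omitted here.
    """
    merit = frames[0].astype(float)
    for z in frames[1:]:
        # Stage recursion: best accumulated merit reachable within +/- vmax pixels,
        # plus the new frame's amplitude.
        merit = maximum_filter(merit, size=2 * vmax + 1) + z
    if threshold is None:
        threshold = merit.mean() + 5 * merit.std()    # crude constant-false-alarm level
    peak = np.unravel_index(np.argmax(merit), merit.shape)
    return merit.max() > threshold, peak              # detection flag, final position
```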
System considerations for detection and tracking of small targets using passive sensors
NASA Astrophysics Data System (ADS)
DeBell, David A.
1991-08-01
Passive sensors provide only a few discriminants to assist in threat assessment of small targets. Tracking of the small targets provides additional discriminants. This paper discusses the system considerations for tracking small targets using passive sensors, in particular EO sensors. Tracking helps establish good versus bad detections. Discussed are the requirements to be placed on the sensor system's accuracy, with respect to knowledge of the sightline direction. The detection of weak targets sets a requirement for two levels of tracking in order to reduce processor throughput. A system characteristic is the need to track all detections. For low thresholds, this can mean a heavy track burden. Therefore, thresholds must be adaptive in order not to saturate the processors. Second-level tracks must develop a range estimate in order to assess threat. Sensor platform maneuvers are required if the targets are moving. The need for accurate pointing, good stability, and a good update rate will be shown quantitatively, relating to track accuracy and track association.
ERIC Educational Resources Information Center
Chawarska, Katarzyna; Shic, Frederick
2009-01-01
This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age,…
Pongakkasira, Kaewmart; Bindemann, Markus
2015-04-01
Human face detection might be driven by skin-coloured face-shaped templates. To explore this idea, this study compared the detection of faces for which the natural height-to-width ratios were preserved with distorted faces that were stretched vertically or horizontally. The impact of stretching on detection performance was not obvious when faces were equated to their unstretched counterparts in terms of their height or width dimension (Experiment 1). However, stretching impaired detection when the original and distorted faces were matched for their surface area (Experiment 2), and this was found with both vertically and horizontally stretched faces (Experiment 3). This effect was evident in accuracy, response times, and also observers' eye movements to faces. These findings demonstrate that height-to-width ratios are an important component of the cognitive template for face detection. The results also highlight important differences between face detection and face recognition. Copyright © 2015 Elsevier Ltd. All rights reserved.
Velocity field calculation for non-orthogonal numerical grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
2015-03-01
Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation. To enable accurate particle tracking or otherwise present an accurate depiction of the velocity field for a non-orthogonal grid, Darcy velocity components are rigorously derived in this study from normal fluxes to cell faces, which are assumed to be provided by or readily computed from porous-medium simulation code output. The normal fluxes are presumed to satisfy mass balances for every computational cell, and if so, the derived velocity fields are consistent with these mass balances. Derivations are provided for general two-dimensional quadrilateral and three-dimensional hexagonal systems, and for the commonly encountered special cases of perfectly vertical side faces in 2D and 3D and a rectangular footprint in 3D.
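One generic way to recover a cell-centred velocity from face-normal fluxes is a least-squares fit over the cell faces; this is a sketch of the idea, not necessarily the report's derivation, and the names and water-content handling are illustrative.

```python
import numpy as np

def cell_velocity(face_normals, face_fluxes, water_content):
    """Recover a cell-centred velocity from fluxes normal to the cell faces.

    face_normals: (n_faces, ndim) outward unit normals of the cell faces.
    face_fluxes:  (n_faces,) Darcy flux (flowrate / face area) through each face,
                  positive outward.
    Solves n_i . v = q_i in the least-squares sense, then converts the Darcy
    velocity to a pore (particle-tracking) velocity by dividing by water content.
    """
    N = np.asarray(face_normals, dtype=float)
    q = np.asarray(face_fluxes, dtype=float)
    darcy_v, *_ = np.linalg.lstsq(N, q, rcond=None)
    return darcy_v / water_content
```

For an orthogonal cell this reduces to averaging the fluxes on opposite face pairs, recovering the usual component-wise Darcy velocity.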
Statistical Analysis of Online Eye and Face-tracking Applications in Marketing
NASA Astrophysics Data System (ADS)
Liu, Xuan
Eye-tracking and face-tracking technologies have been widely adopted to study viewers' attention and emotional responses. In this dissertation, we apply these two technologies to investigate effective online content that is designed to attract and direct attention and engage viewers' emotional responses. In the first part of the dissertation, we conduct a series of experiments that use eye-tracking technology to explore how online models' facial cues affect users' attention on static e-commerce websites. The joint effects of two facial cues, gaze direction and facial expression, on attention are estimated by Bayesian ANOVA, allowing various distributional assumptions. We also consider the similarities and differences in the effects of facial cues among American and Chinese consumers. This study offers insights on how to attract and retain customers' attention for advertisers that use static advertisements on various websites or ad networks. In the second part of the dissertation, we conduct a face-tracking study in which we investigate the relation between experiment participants' emotional responses while watching comedy movie trailers and their intentions to watch the actual movies. Viewers' facial expressions are collected in real time and converted to emotional responses with algorithms based on a facial coding system. To analyze the data, we propose a joint modeling method that links viewers' longitudinal emotion measurements and their watching intentions. This research provides recommendations to filmmakers on how to improve the effectiveness of movie trailers and how to boost audiences' desire to watch the movies.
Face detection and eyeglasses detection for thermal face recognition
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
2012-01-01
Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition using thermal images. Infrared light cannot pass through glasses, and thus glasses appear as dark areas in a thermal image. One possible solution is to detect eyeglasses and to exclude the eyeglasses areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed, in which region growing and morphology operations are used to segment the body of a subject; the derivatives of two projections (horizontal and vertical) are then calculated and analyzed to locate a minimal rectangle containing the face area. The search region for a pair of eyeglasses lies within the detected face area. The eyeglasses detection algorithm should produce either a binary mask if eyeglasses are present, or an empty set if there are none. The proposed eyeglasses detection algorithm employs block processing, region growing, and a priori knowledge (i.e., low mean and variance within glasses areas, and the typical shapes and locations of eyeglasses). The results of face detection and eyeglasses detection are quantitatively measured and analyzed using manually defined ground truths (for both face and eyeglasses). Our experimental results showed that the proposed face detection and eyeglasses detection algorithms performed very well when compared against the predefined ground truths.
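A simplified sketch of projection-profile face localisation on a normalised thermal image; thresholding plus morphology stands in for the region-growing step, scipy is assumed available, and the shoulder heuristic and constants are assumptions rather than the paper's rules.

```python
import numpy as np
from scipy import ndimage

def locate_face(thermal_img, warm_frac=0.6):
    """Return a (row_min, row_max, col_min, col_max) face rectangle, or None."""
    # Segment the warm body region (stand-in for region growing), then clean it up.
    body = thermal_img > warm_frac * thermal_img.max()
    body = ndimage.binary_opening(body, iterations=2)
    labels, n = ndimage.label(body)
    if n == 0:
        return None
    sizes = ndimage.sum(body, labels, index=np.arange(1, n + 1))
    body = labels == (1 + int(np.argmax(sizes)))             # keep the largest warm blob
    row_proj = body.sum(axis=1).astype(float)                # silhouette width per row
    rows = np.flatnonzero(row_proj > 0)
    top = rows[0]
    # Heuristic: the largest jump in silhouette width below the head marks the shoulders.
    search_end = top + max(2, (rows[-1] - top) // 2)
    shoulders = top + 1 + int(np.argmax(np.diff(row_proj[top:search_end])))
    head_cols = np.flatnonzero(body[top:shoulders].sum(axis=0) > 0)
    return top, shoulders, head_cols[0], head_cols[-1]
```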
Spontaneous Attention to Faces in Asperger Syndrome Using Ecologically Valid Static Stimuli
ERIC Educational Resources Information Center
Hanley, Mary; McPhillips, Martin; Mulhern, Gerry; Riby, Deborah M.
2013-01-01
Previous eye tracking research on the allocation of attention to social information by individuals with autism spectrum disorders is equivocal and may be in part a consequence of variation in stimuli used between studies. The current study explored attention allocation to faces, and within faces, by individuals with Asperger syndrome using a range…
Xe- and U-tracks in apatite and muscovite near the etching threshold
NASA Astrophysics Data System (ADS)
Wauschkuhn, Bastian; Jonckheere, Raymond; Ratschbacher, Lothar
2015-01-01
Ion irradiation of a wedge-shaped Durango apatite backed by a mica detector allows investigation of ion track ranges and etching properties at different points along the tracks. Transmission profiles obtained by irradiation with 2 × 10^6 cm^-2 11.1 MeV/amu 132Xe and 2 × 10^6 cm^-2 11.1 MeV/amu 238U parallel to the apatite c-axis correspond to ranges calculated with SRIM (Xe: 76.3 μm; U: 81.1 μm). However, the measured profiles show much greater etchable track-length variations than the calculated longitudinal straggles. The probable cause is that the length deficit exhibits significant variation from track to track. The measured length deficit in muscovite is in agreement with most existing data. In contrast, the length deficit in apatite appears to be close to zero, which is in conflict with all earlier estimates. This probably results from the etching properties of the apatite basal face, which permit surface-assisted sub-threshold etching of track sections in the nuclear stopping regime. These sections are not accessible from the opposite direction, i.e. by etching towards the endpoint of the tracks or in the direction of the ion beam. This conclusion is supported by the fact that linear dislocations are revealed in apatite basal faces and by the observation of imperfect etch pits that are separated from the etched ion track channel by a section that appears unetched under the microscope.
Proceedings of the 8th Matched-Field Processing Workshop, 12-14 June 1996,
1996-10-01
Excerpt (fragmentary front matter): contents entries include "Active Matched-Field Tracking (AMFT)" by Homer Bucker and "Matched-Field Track-Before-Detect (TBD) Processing using SWellEX…"; the recoverable text notes that …surfaces are used in a source-track search, and that track-before-detect (TBD) processing makes use of this technique to extract source track information.
NASA Astrophysics Data System (ADS)
Coffer, Amy Beth
Radiation imagers are important tools in the modern world for a wide range of applications. Their use cases span the fundamental sciences, astrophysics, and medical imaging, all the way to national security, nuclear safeguards, and non-proliferation verification. The radiation imagers studied in this thesis were gamma-ray imagers that detect emissions from radioactive materials. A gamma-ray imager's goal is to localize and map the distribution of radiation within its field of view despite complicating background radiation that can be terrestrial, astronomical, and temporal in origin. Compton imaging systems are one type of gamma-ray imager that can map the radiation around the system without the use of collimation. The lack of collimation enables the imaging system to detect radiation from all directions and, at the same time, increases detection efficiency because incident radiation is not absorbed in non-sensing materials. Each Compton-scatter event within an imaging system generates a possible cone surface in space from which the radiation could have originated. Compton imaging is limited in its reconstructed-image signal-to-background because these source Compton cones overlap with background-radiation Compton cones. These overlapping cones limit Compton imaging's detection sensitivity in image space. Electron-tracking Compton imaging (ETCI) can improve the detection sensitivity by measuring the Compton-scattered electron's initial trajectory. With an estimate of the scattered electron's trajectory, one can reduce the back-projected Compton cone to a cone arc, thus enabling faster radiation source detection and localization. However, the ability to measure the Compton-scattered electron trajectories adds another layer of complexity to an already complex methodology. For real-world imaging applications, improvements are needed in electron-track detection efficiency and in electron-track reconstruction. One way of measuring Compton-scattered electron trajectories is with high-resolution Charge-Coupled Devices (CCDs). The proof-of-principle CCD-based ETCI experiment demonstrated the CCDs' ability to measure Compton-scattered electron tracks as a 2-dimensional image. Electron-track-imaging algorithms using the electron-track image are able to determine the 3-dimensional electron-track trajectory to within +/- 20 degrees. The work presented here comprises the physics simulations developed alongside the experimental proof-of-principle work. The development of accurate physics modeling for multiple-layer CCD-based ETCI systems allows accurate prediction of future ETCI system performance. The simulations also provide quick development insights for system design, and they guide the development of electron-track reconstruction methods. The physics simulation effort for this project looked closely at the accuracy of the Geant4 Monte Carlo methods for medium-energy electron transport. In older versions of Geant4 there were some discrepancies between the electron-tracking experimental measurements and the simulation results. It was determined that, when comparing electron dynamics at very high resolution, Geant4 simulations must be fine-tuned with careful choices of physics production cuts and electron physics step sizes. One result of this work is a CCD Monte Carlo model that has been benchmarked against experimental findings and fully characterized for both photon and electron transport.
The CCD physics model now matches experimental results to within 1 percent for scattered-electron energies below 500 keV. Following the improvements to the CCD simulations, the performance of a realistic two-layer CCD-stack system was characterized. The realistic CCD-stack model accounted for the effect of thin passive layers on the CCDs' front face and back contact. The photon interaction efficiency was calculated for the two-layer CCD stack, and we found that scattered electrons from a 662 keV source have a 90 percent probability of remaining within a single active layer. This demonstrates the improved detection efficiency, which is one of the strengths of implementing CCDs as an ETCI system. The CCD-stack simulations also established that electron tracks scattering from one CCD layer to another could be reconstructed. The passive regions of the CCD stack mean that these inter-layer scattered-electron tracks will always lose both angular information and energy information. Examining the angular changes of these electrons as they scatter between the CCD layers showed that there is not a strong energy dependence of the angular changes caused by the passive regions of the CCDs. The angular changes of the electron track are, for the most part, a function of the thickness of the thin back layer of the CCDs. Lastly, an approach using CCD-stack simulations was developed to reconstruct the energy transport across dead layers, and its feasibility was demonstrated. Adding back this lost energy will limit the loss of energy resolution of the scatter interactions. Energy resolution losses would negatively impact the achievable image resolution from image reconstruction algorithms. Returning some of this energy to the reconstructed electron track will help retain the expected performance of the electron-track trajectory determination algorithm.
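The cone that each event back-projects (and that electron tracking reduces to an arc) follows from standard Compton kinematics applied to the two deposited energies; a minimal sketch, assuming a fully absorbed two-interaction event, with illustrative names.

```python
import math

ELECTRON_REST_ENERGY_KEV = 511.0

def compton_cone_angle(e_deposit_1, e_deposit_2):
    """Opening half-angle (radians) of the Compton back-projection cone for a
    two-interaction event.

    e_deposit_1: energy (keV) deposited at the first (scatter) interaction,
                 i.e. given to the Compton electron whose track is imaged.
    e_deposit_2: energy (keV) deposited at the second (absorption) interaction.
    Assumes full absorption, so the incident energy is the sum of the deposits.
    """
    e_incident = e_deposit_1 + e_deposit_2          # e.g. 662 keV for Cs-137
    e_scattered = e_deposit_2                        # photon energy after the scatter
    cos_theta = 1.0 - ELECTRON_REST_ENERGY_KEV * (1.0 / e_scattered - 1.0 / e_incident)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies are not consistent with Compton kinematics")
    return math.acos(cos_theta)
```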
Artificial Immune System for Recognizing Patterns
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2005-01-01
A method of recognizing or classifying patterns is based on an artificial immune system (AIS), which includes an algorithm and a computational model of nonlinear dynamics inspired by the behavior of a biological immune system. The method has been proposed as the theoretical basis of the computational portion of a star-tracking system aboard a spacecraft. In that system, a newly acquired star image would be treated as an antigen that would be matched by an appropriate antibody (an entry in a star catalog). The method would enable rapid convergence, would afford robustness in the face of noise in the star sensors, would enable recognition of star images acquired in any sensor or spacecraft orientation, and would not make an excessive demand on the computational resources of a typical spacecraft. Going beyond the star-tracking application, the AIS-based pattern-recognition method is potentially applicable to pattern- recognition and -classification processes for diverse purposes -- for example, reconnaissance, detecting intruders, and mining data.
Learned filters for object detection in multi-object visual tracking
NASA Astrophysics Data System (ADS)
Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David
2016-05-01
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and an unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detector provides a probabilistic input image computed by selecting among features obtained from banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
Face detection assisted auto exposure: supporting evidence from a psychophysical study
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani
2010-01-01
Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g. auto exposure) and adding new features to cameras (e.g. blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images used in this study was captured using Canon Digital Single Lens Reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain optimal exposure along with the upper and lower bounds of exposure for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground truth for face regions of interest. The remaining images either contain no faces or contain faces too small to be considered detectable. The two face detection algorithms differ in resource requirements and in performance. FD-A uses less memory and fewer gate counts than FD-B, but FD-B detects more faces and produces fewer false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure. However, the presence of false positives would negatively impact the added benefit.
Facial feature tracking: a psychophysiological measure to assess exercise intensity?
Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G
2018-04-01
The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three, incremental intensity, cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity, from lactate threshold one (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm, P < 0.005; UF 1961 ± 1779 mm, P = 0.002; LF 1608 ± 1404 mm, P = 0.002; HM 849 ± 642 mm, P < 0.001). UF movement was greater than LF movement at all exercise intensities (UF minus LF at: LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm; P < 0.001). Significant medium to large non-linear relationships were found between facial movement and power output (r² = 0.24-0.31), HR (r² = 0.26-0.33), [La⁻] (r² = 0.33-0.44) and RPE (r² = 0.38-0.45). The findings demonstrate the potential utility of facial feature tracking as a non-invasive, psychophysiological measure to potentially assess exercise intensity.
1994-07-01
1993. "Analysis of the 1730-1732. Track - Before - Detect Approach to Target Detection using Pixel Statistics", to appear in IEEE Transactions Scholz, J...large surveillance arrays. One approach to combining energy in different spatial cells is track - before - detect . References to examples appear in the next... track - before - detect problem. The results obtained are not expected to depend strongly on model details. In particular, the structure of the tracking
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a combination of skin color histogram analysis, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence/absence of a face in a particular region of interest.
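The skin-color, morphology and geometry stages described above can be pieced together in a few lines. The sketch below is a minimal illustration, assuming a simple RGB skin rule, 5x5/9x9 structuring elements, and face-like area and aspect-ratio bounds; none of these values come from the paper.

```python
import numpy as np
from scipy import ndimage

def skin_candidate_regions(rgb, min_area=400):
    """Rough face-candidate detection via a skin-color mask plus
    morphological cleanup and simple geometric filtering.
    Thresholds and bounds are illustrative assumptions only."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # Very simple RGB skin rule (assumed for illustration).
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    # Morphological processing: remove speckle, then close small gaps.
    mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))
    mask = ndimage.binary_closing(mask, structure=np.ones((9, 9)))
    # Geometrical analysis: keep blobs whose size and aspect ratio look face-like.
    labels, _ = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
        if h * w >= min_area and 0.6 < w / h < 1.4:
            boxes.append((sl[1].start, sl[0].start, w, h))  # (x, y, w, h)
    return boxes
```

The returned candidate boxes would then be handed to the mouth/eye verification step the abstract mentions.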
Distinct frontal and amygdala correlates of change detection for facial identity and expression
Achaibou, Amal; Loth, Eva
2016-01-01
Recruitment of ‘top-down’ frontal attentional mechanisms is held to support detection of changes in task-relevant stimuli. Fluctuations in intrinsic frontal activity have been shown to impact task performance more generally. Meanwhile, the amygdala has been implicated in ‘bottom-up’ attentional capture by threat. Here, 22 adult human participants took part in a functional magnetic resonance change detection study aimed at investigating the correlates of successful (vs failed) detection of changes in facial identity vs expression. For identity changes, we expected prefrontal recruitment to differentiate ‘hit’ from ‘miss’ trials, in line with previous reports. Meanwhile, we postulated that a different mechanism would support detection of emotionally salient changes. Specifically, elevated amygdala activation was predicted to be associated with successful detection of threat-related changes in expression, over-riding the influence of fluctuations in top-down attention. Our findings revealed that fusiform activity tracked change detection across conditions. Ventrolateral prefrontal cortical activity was uniquely linked to detection of changes in identity not expression, and amygdala activity to detection of changes from neutral to fearful expressions. These results are consistent with distinct mechanisms supporting detection of changes in face identity vs expression, the former potentially reflecting top-down attention, the latter bottom-up attentional capture by stimulus emotional salience. PMID:26245835
Real-time detection with AdaBoost-svm combination in various face orientation
NASA Astrophysics Data System (ADS)
Fhonna, R. P.; Nasution, M. K. M.; Tulus
2018-03-01
Most research has used the AdaBoost-SVM algorithm for face detection. However, to our knowledge, no research so far has addressed face detection on real-time data in various orientations using the combination of AdaBoost and Support Vector Machine (SVM). The complex and diverse variations of faces, real-time data in various orientations, and a very complex application all slow down the performance of a face detection system; this is the challenge addressed in this research. Face orientations of 90°, 45°, 0°, −45°, and −90° were evaluated in the detection system. This combined method is expected to be an effective and efficient solution for various face orientations. The results showed that the highest average detection rate is obtained for faces oriented at 0° and the lowest detection rate for faces oriented at 90°.
Familiarity facilitates feature-based face processing.
Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida
2017-01-01
Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we presented an improved multi-vehicle detection and tracking method using cascade Adaboost and an Adaptive Kalman filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which could refine the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with independent identity by an Adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusted the measurement and process noise covariance through on-line stochastic modelling to compensate the dynamics changes. The data association correctly assigned different detections with tracks using the global nearest neighbour (GNN) algorithm while considering the local validation. During tracking, a temporal context based track management was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing the sparse false alarms and compensating the temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved with higher accuracy and robustness.
Scalable Track Detection in SAR CCD Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, James G; Quach, Tu-Thach
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
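As a rough illustration of the kind of network the abstract describes, the PyTorch sketch below stacks plain 3-by-3 convolutions into a per-pixel track scorer; the depth, channel width and sigmoid output head are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TrackNet(nn.Module):
    """Per-pixel track labeling with a plain stack of 3x3 convolutions.
    Depth and channel widths here are illustrative, not the published ones."""
    def __init__(self, in_ch=1, width=32, depth=6):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth):
            layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            ch = width
        layers += [nn.Conv2d(ch, 1, kernel_size=3, padding=1)]  # track logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):                    # x: (N, 1, H, W) SAR CCD image
        return torch.sigmoid(self.net(x))    # (N, 1, H, W) track probability map
```

Training such a model end-to-end would minimize a per-pixel binary cross-entropy against labeled track masks, matching the data-driven training the abstract describes.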
Penalty Dynamic Programming Algorithm for Dim Targets Detection in Sensor Systems
Huang, Dayu; Xue, Anke; Guo, Yunfei
2012-01-01
In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performances of tracking techniques are used as a feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimation, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in the multi-target situation with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations. PMID:22666074
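The merit-function recursion with a penalty term can be sketched for a simplified one-dimensional case. The quadratic jump penalty and the ±1-cell transition set below are illustrative stand-ins for the paper's tracking-feedback penalty, not its exact formulation.

```python
import numpy as np

def dp_tbd(frames, penalty_weight=1.0, max_step=1):
    """Dynamic-programming track-before-detect over a 1-D measurement grid.
    frames: (K, N) array of per-cell amplitudes for K scans.
    The merit of ending in cell j at scan k is the best predecessor merit,
    minus a penalty on the implied jump, plus the current amplitude."""
    K, N = frames.shape
    merit = np.full((K, N), -np.inf)
    back = np.zeros((K, N), dtype=int)
    merit[0] = frames[0]
    steps = np.arange(-max_step, max_step + 1)
    for k in range(1, K):
        for j in range(N):
            prev = j + steps
            ok = (prev >= 0) & (prev < N)
            cand = merit[k - 1, prev[ok]] - penalty_weight * steps[ok].astype(float) ** 2
            best = int(np.argmax(cand))
            merit[k, j] = frames[k, j] + cand[best]
            back[k, j] = prev[ok][best]
    # Backtrack the highest-merit track; a detection would be declared only
    # if this merit exceeds a threshold chosen for the desired false-alarm rate.
    track = [int(np.argmax(merit[-1]))]
    for k in range(K - 1, 0, -1):
        track.append(int(back[k, track[-1]]))
    return merit[-1].max(), track[::-1]
```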
LoBue, Vanessa; Matthews, Kaleigh; Harvey, Teresa; Thrasher, Cat
2014-02-01
For decades, researchers have documented a bias for the rapid detection of angry faces in adult, child, and even infant participants. However, despite the age of the participant, the facial stimuli used in all of these experiments were schematic drawings or photographs of adult faces. The current research is the first to examine the detection of both child and adult emotional facial expressions. In our study, 3- to 5-year-old children and adults detected angry, sad, and happy faces among neutral distracters. The depicted faces were of adults or of other children. As in previous work, children detected angry faces more quickly than happy and neutral faces overall, and they tended to detect the faces of other children more quickly than the faces of adults. Adults also detected angry faces more quickly than happy and sad faces even when the faces depicted child models. The results are discussed in terms of theoretical implications for the development of a bias for threat in detection. Copyright © 2013 Elsevier Inc. All rights reserved.
Nonstationary EO/IR Clutter Suppression and Dim Object Tracking
2010-01-01
Brown, A., and Brown, J., Enhanced Algorithms for EO/IR Electronic Stabilization, Clutter Suppression, and Track-Before-Detect for Multiple Low...estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into...In such cases, it is imperative to develop efficient real or near-real time tracking before detection methods. This paper continues the work started
Searching for structural medium changes during the 2011 El Hierro (Spain) submarine eruption
NASA Astrophysics Data System (ADS)
Sánchez-Pastor, Pilar S.; Schimmel, Martin; López, Carmen
2017-04-01
Submarine volcanic eruptions are often difficult to study because restricted access usually inhibits direct observations. That was the case for the 2011 El Hierro eruption, the first eruption in the Canary Islands to be tracked in real time. For instance, despite the real-time tracking it was not possible to determine the exact end of the eruption. Moreover, volcanic eruptions involve many dynamic (physical and chemical) processes, which cause structural changes in the surrounding medium that we expect to observe and monitor through passive seismic approaches. The purpose of this study is to detect and analyse these changes as well as to search for precursory signals to the eruption itself using ambient noise auto- and cross-correlations. We employ different correlation strategies (classical and phase cross-correlation) and apply them to field data recorded by the IGN network during 2011 and 2012. The different preprocessing and processing steps are tested and compared to better understand the data, to find robust signatures, and to define a routine work procedure. One of the problems we face is the presence of volcanic tremors, which cause a varying seismic response that we cannot attribute to structural changes. So far, structural changes could not be detected unambiguously, and we present our ongoing research in this field.
Event-related potential and eye tracking evidence of the developmental dynamics of face processing.
Meaux, Emilie; Hernandez, Nadia; Carteau-Martin, Isabelle; Martineau, Joëlle; Barthélémy, Catherine; Bonnet-Brilhault, Frédérique; Batty, Magali
2014-04-01
Although the wide neural network and specific processes related to faces have been revealed, the process by which face-processing ability develops remains unclear. An interest in faces appears early in infancy, and developmental findings to date have suggested a long maturation process of the mechanisms involved in face processing. These developmental changes may be supported by the acquisition of more efficient strategies to process faces (theory of expertise) and by the maturation of the face neural network identified in adults. This study aimed to clarify the link between event-related potential (ERP) development in response to faces and the behavioral changes in the way faces are scanned throughout childhood. Twenty-six young children (4-10 years of age) were included in two experimental paradigms, the first exploring ERPs during face processing, the second investigating the visual exploration of faces using an eye-tracking system. The results confirmed significant age-related changes in visual ERPs (P1, N170 and P2). Moreover, an increased interest in the eye region and an attentional shift from the mouth to the eyes were also revealed. The proportion of early fixations on the eye region was correlated with N170 and P2 characteristics, highlighting a link between the development of ERPs and gaze behavior. We suggest that these overall developmental dynamics may be sustained by a gradual, experience-dependent specialization in face processing (i.e. acquisition of face expertise), which produces a more automatic and efficient network associated with effortless identification of faces, and allows the emergence of human-specific social and communication skills. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Bocz, Péter; Vinkó, Ákos; Posgay, Zoltán
2018-03-01
This paper presents an automatic method for detecting vertical track irregularities on tramway operation using acceleration measurements on trams. For monitoring of tramway tracks, an unconventional measurement setup is developed, which records the data of 3-axes wireless accelerometers mounted on wheel discs. Accelerations are processed to obtain the vertical track irregularities to determine whether the track needs to be repaired. The automatic detection algorithm is based on time-frequency distribution analysis and determines the defect locations. Admissible limits (thresholds) are given for detecting moderate and severe defects using statistical analysis. The method was validated on frequented tram lines in Budapest and accurately detected severe defects with a hit rate of 100%, with no false alarms. The methodology is also sensitive to moderate and small rail surface defects at the low operational speed.
Information Measures for Statistical Orbit Determination
ERIC Educational Resources Information Center
Mashiku, Alinda K.
2013-01-01
The current Space Situational Awareness (SSA) effort is faced with the huge task of tracking the increasing number of space objects. The tracking of space objects requires frequent and accurate monitoring for orbit maintenance and collision avoidance using methods for statistical orbit determination. Statistical orbit determination enables us to obtain…
Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters
NASA Astrophysics Data System (ADS)
Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon
2018-04-01
In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the changes of diverse illumination conditions throughout the day. This leads to greater vehicle detection performance compared to a fixed user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, Fuzzy-based Vehicle Analysis (FBA), to reduce false vehicle-tracking estimates caused by the uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate those problems, adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22% and a low false detection rate of about 3.92%.
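Since the abstract does not spell out the GATE computation, the sketch below assumes a mean-plus-k-sigma rule on the Sobel gradient magnitude as the adaptive threshold; the constant k is illustrative.

```python
import numpy as np
from scipy import ndimage

def adaptive_sobel_edges(gray, k=1.5):
    """Sobel edge map with a threshold that adapts to the current illumination.
    The mean + k*std rule is an assumed stand-in for the GATE algorithm."""
    gray = gray.astype(float)
    gx = ndimage.sobel(gray, axis=1)          # horizontal gradient
    gy = ndimage.sobel(gray, axis=0)          # vertical gradient
    mag = np.hypot(gx, gy)
    thr = mag.mean() + k * mag.std()          # rises and falls with scene brightness
    return mag > thr, thr
```

Because the threshold is recomputed per frame from the gradient statistics, the edge map stays usable as lighting drifts from morning to night, which is the motivation the abstract gives for replacing a fixed user-defined threshold.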
Social Class and the Motivational Relevance of Other Human Beings: Evidence From Visual Attention.
Dietze, Pia; Knowles, Eric D
2016-11-01
We theorize that people's social class affects their appraisals of others' motivational relevance-the degree to which others are seen as potentially rewarding, threatening, or otherwise worth attending to. Supporting this account, three studies indicate that social classes differ in the amount of attention their members direct toward other human beings. In Study 1, wearable technology was used to film the visual fields of pedestrians on city streets; higher-class participants looked less at other people than did lower-class participants. In Studies 2a and 2b, participants' eye movements were tracked while they viewed street scenes; higher class was associated with reduced attention to people in the images. In Study 3, a change-detection procedure assessed the degree to which human faces spontaneously attract visual attention; faces proved less effective at drawing the attention of high-class than low-class participants, which implies that class affects spontaneous relevance appraisals. The measurement and conceptualization of social class are discussed. © The Author(s) 2016.
Investigation into Cherenkov light scattering and refraction on aerogel surface
NASA Astrophysics Data System (ADS)
Barnyakov, A. Yu.; Barnyakov, M. Yu.; Bobrovnikov, V. S.; Buzykaev, A. R.; Danilyuk, A. F.; Katcin, A. A.; Kirilenko, P. S.; Kononov, S. A.; Korda, D. V.; Kravchenko, E. A.; Kudryavtsev, V. N.; Kuyanov, I. A.; Onuchin, A. P.; Ovtin, I. V.; Podgornov, N. A.; Predein, A. Yu.; Prisekin, V. G.; Protsenko, R. S.; Shekhtman, L. I.
2017-12-01
The work concerns the development of aerogel radiators for RICH detectors. Aerogel tiles with a refractive index of 1.05 were tested with a RICH prototype on the electron beam on the VEPP-4M collider. It has been shown that polishing with silk tissue yields good surface quality, the amount of light loss at this surface being about 5-7%. The Cherenkov angle resolution was measured for a tile in two conditions: with a clean exit face and with a polished exit face. The number of photons detected was 13.3 and 12.7 for the clean and polished surfaces, respectively. The Cherenkov angle resolution for the polished surface is 55% worse, which can be explained with the forward scattering on the polished surface. A tile with a crack inside was also tested. The experimental data show that the Cherenkov angle resolution is the same for tracks crossing the crack area and in a crack-free area.
2012-09-01
as potential tools for large area detection coverage while being moderately inexpensive (Wettergren, Performance of Search via Track-Before-Detect for...via Track-Before-Detect for Distribute 34 Sensor Networks, 2008). These statements highlight three specific needs to further sensor network research...Bay hydrography. Journal of Marine Systems, 12, 221–236. Wettergren, T. A. (2008). Performance of search via track-before-detect for distributed
Grand Challenges Emerging Perspectives For Embedded Processing (BRIEFING CHARTS)
2007-03-06
[Briefing chart excerpt: EO/IR track-before-detect for dim targets and change detection; example sensor scale of 16 Mpixel at 2 Hz covering 1 km² at 1 ft resolution; candidate processing hardware includes a ~60 W embedded processor (STAPBOY) and a ~120 W AMD server-class processor, RISC/DSP options also listed. Plot axes X (m), Y (m) omitted.]
Efficient search for a face by chimpanzees (Pan troglodytes).
Tomonaga, Masaki; Imura, Tomoko
2015-07-16
The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.
2010-11-01
pected target motion. Along this line, Wettergren [5] analyzed the performance of the track-before-detect schemes for the sensor networks. Furthermore...dressed by Baumgartner and Ferrari [11] for the reorganization of the sensor field to achieve the maximum coverage. The track-before-detect-based optimal...confirming a target. In accordance with the track-before-detect paradigm [4], a moving target is detected if the kd (typically kd = 3 or 4) sensors detect
Adaboost multi-view face detection based on YCgCr skin color model
NASA Astrophysics Data System (ADS)
Lan, Qi; Xu, Zhiyong
2016-09-01
The traditional Adaboost face detection algorithm uses Haar-like features to train face classifiers, whose detection error rate is low in face regions. Under a complex background, however, the classifiers easily misclassify background regions whose gray-level distribution is similar to that of faces, so the error rate of the traditional Adaboost algorithm is high. As one of the most important features of a face, skin color clusters well in the YCgCr color space, and non-face areas can be quickly excluded through a skin color model. Therefore, combining the advantages of the Adaboost algorithm and skin color detection, this paper proposes an Adaboost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces false detections.
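A minimal sketch of the skin-color gating stage is given below. The Cg/Cr coefficients mirror the BT.601 YCbCr transform with a green-difference chroma Cg defined analogously to Cb, and the rectangular skin-cluster bounds are assumptions for illustration, not the paper's trained values.

```python
import numpy as np

def rgb_to_ycgcr(rgb):
    """RGB (0-255) to YCgCr. Coefficients parallel the BT.601 YCbCr transform
    with Cg replacing Cb; treat them as illustrative, not the paper's exact values."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  = 16 + 0.2568 * r + 0.5041 * g + 0.0979 * b
    cg = 128 - 0.3180 * r + 0.4392 * g - 0.1212 * b
    cr = 128 + 0.4392 * r - 0.3678 * g - 0.0714 * b
    return y, cg, cr

def skin_mask(rgb, cg_range=(85, 135), cr_range=(135, 175)):
    """Fast pre-filter: Adaboost windows are evaluated only where the pixel
    falls inside an assumed rectangular skin cluster in the (Cg, Cr) plane."""
    _, cg, cr = rgb_to_ycgcr(rgb)
    return ((cg >= cg_range[0]) & (cg <= cg_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Gating the sliding-window classifier with such a mask is what lets the method reject background regions whose gray levels resemble faces but whose chromaticity does not.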
Electro-optic tracking R&D for defense surveillance
NASA Astrophysics Data System (ADS)
Sutherland, Stuart; Woodruff, Chris J.
1995-09-01
Two aspects of work on automatic target detection and tracking for electro-optic (EO) surveillance are described. Firstly, a detection and tracking algorithm test-bed developed by DSTO and running on a PC under Windows NT is being used to assess candidate algorithms for unresolved and minimally resolved target detection. The structure of this test-bed is described and examples are given of its user interfaces and outputs. Secondly, a development by Australian industry under a Defence-funded contract, of a reconfigurable generic track processor (GTP) is outlined. The GTP will include reconfigurable image processing stages and target tracking algorithms. It will be used to demonstrate to the Australian Defence Force automatic detection and tracking capabilities, and to serve as a hardware base for real time algorithm refinement.
ERIC Educational Resources Information Center
Gohn, David; Moore, John
2007-01-01
Underperforming institutions frequently face financial and enrollment challenges, and/or lack a sense of direction and momentum. There is no single or easy approach to turning things around and putting the institution on track to positive development. In 1983, Drury University in Springfield, Missouri, faced declining enrollments, a growing…
Locomotive track detection for underground
NASA Astrophysics Data System (ADS)
Ma, Zhonglei; Lang, Wenhui; Li, Xiaoming; Wei, Xing
2017-08-01
In order to improve the PC-based track detection system, this paper proposes a method to detect linear tracks for underground locomotives based on a DSP + FPGA platform. Firstly, the analog signal output from the camera is sampled by an A/D chip. Then the collected digital signal is preprocessed by the FPGA. Secondly, the FPGA output is transmitted to the DSP via the EMIF port. Subsequently, adaptive-threshold edge detection and a Hough transform constrained by polar angle and radius are implemented on the DSP. Lastly, the detected track information is transmitted to the host computer through an Ethernet interface. The experimental results show that the system not only meets the requirements of real-time detection, but also has good robustness.
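The angle- and radius-constrained Hough transform can be sketched compactly; the near-vertical angle window and the optional radius band below are assumed values for illustration, chosen to keep the accumulator small enough for a DSP.

```python
import numpy as np

def constrained_hough_lines(edges, theta_range=(-10, 10), n_theta=41, rho_band=None):
    """Hough line transform restricted to near-vertical rail edges.
    theta is the normal angle in degrees, so small theta means a line close
    to vertical; rho_band optionally restricts the distance from the origin."""
    ys, xs = np.nonzero(edges)
    thetas = np.deg2rad(np.linspace(theta_range[0], theta_range[1], n_theta))
    diag = int(np.hypot(*edges.shape))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=np.int32)
    for t_idx, th in enumerate(thetas):
        r = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc[:, t_idx], r, 1)         # vote into the accumulator column
    if rho_band is not None:                   # radius constraint
        lo, hi = rho_band
        acc[: lo + diag] = 0
        acc[hi + diag + 1:] = 0
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return rhos[r_idx], np.rad2deg(thetas[t_idx]), acc
```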
Searching for differences in race: is there evidence for preferential detection of other-race faces?
Lipp, Ottmar V; Terry, Deborah J; Smith, Joanne R; Tellegen, Cassandra L; Kuebbeler, Jennifer; Newey, Mareka
2009-06-01
Previous research has suggested that like animal and social fear-relevant stimuli, other-race faces (African American) are detected preferentially in visual search. Three experiments using Chinese or Indonesian faces as other-race faces yielded the opposite pattern of results: faster detection of same-race faces among other-race faces. This apparently inconsistent pattern of results was resolved by showing that Asian and African American faces are detected preferentially in tasks that have small stimulus sets and employ fixed target searches. Asian and African American other-race faces are found slower among Caucasian face backgrounds if larger stimulus sets are used in tasks with a variable mapping of stimulus to background or target. Thus, preferential detection of other-race faces was not found under task conditions in which preferential detection of animal and social fear-relevant stimuli is evident. Although consistent with the view that same-race faces are processed in more detail than other-race faces, the current findings suggest that other-race faces do not draw attention preferentially.
Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H
2008-01-01
A paradigm shift in management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid prototyping three-dimensional "printers". Laser Doppler imaging may be the sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish feasibility of detecting perfusion through transparent face masks using the Laser Doppler Imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions on five uninjured staff members. Images were obtained without a mask, followed by images with a loose fitting mask with and without a silicone liner, and then with a tight fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear model, mixed model, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure with and without the silicone liner was readily quantified with significant changes in mean cutaneous blood flow (P < .5). High valid pixel rate laser Doppler imager flow data can be obtained through transparent face masks. Perfusion decreases with the application of pressure and with silicone. Every participant measured differently in perfusion units; however, consistent perfusion patterns in the face were observed.
Robust vehicle detection in different weather conditions: Using MIPM
Menéndez, José Manuel; Jiménez, David
2018-01-01
Intelligent Transportation Systems (ITS) allow us to have high quality traffic information to reduce the risk of potentially critical situations. Conventional image-based traffic detection methods have difficulties acquiring good images due to perspective and background noise, poor lighting and weather conditions. In this paper, we propose a new method to accurately segment and track vehicles. After removing perspective using Modified Inverse Perspective Mapping (MIPM), the Hough transform is applied to extract road lines and lanes. Then, Gaussian Mixture Models (GMM) are used to segment moving objects, and to tackle car shadow effects we apply a chromaticity-based strategy. Finally, performance is evaluated through three different video benchmarks: our own recorded videos in Madrid and Tehran (with different weather conditions in urban and interurban areas); and two well-known public datasets (KITTI and DETRAC). Our results indicate that the proposed algorithms are robust, and more accurate compared to others, especially when facing occlusions, lighting variations and weather conditions. PMID:29513664
Beach Advisory and Closing Online Notification (BEACON) system
Beach Advisory and Closing Online Notification system (BEACON) is a collection of state and local data reported to EPA about beach closings and advisories. BEACON is the public-facing query of the Program tracking, Beach Advisories, Water quality standards, and Nutrients database (PRAWN), which tracks beach closing and advisory information.
DETAIL OF ENCLOSED TOP TRACK OF SLIDING DOORS IN LIVING ...
DETAIL OF ENCLOSED TOP TRACK OF SLIDING DOORS IN LIVING ROOM. VIEW FACING NORTH - Camp H.M. Smith and Navy Public Works Center Manana Title VII (Capehart) Housing, Four-Bedroom, Single-Family Type 10, Birch Circle, Elm Drive, Elm Circle, and Date Drive, Pearl City, Honolulu County, HI
Telemetry, Tracking, and Control Working Group report
NASA Technical Reports Server (NTRS)
Campbell, Richard; Rogers, L. Joseph
1986-01-01
After assessing the design implications and the criteria to be used in technology selection, the technical problems that face the telemetry, tracking, and control (TTC) area were defined. For each of the problems identified, recommendations were made for needed technology developments. These recommendations are listed and ranked according to priority.
2010-01-01
target kinematics for multiple sensor detections is referred to as the track-before-detect strategy, and is commonly adopted in multi-sensor surveillance...of moving targets. Wettergren [4] presented an application of track-before-detect strategies to undersea distributed sensor networks. In designing...the deployment of a distributed passive sensor network that employs this track-before-detect procedure, it is imperative that the placement of
Automatic detection, tracking and sensor integration
NASA Astrophysics Data System (ADS)
Trunk, G. V.
1988-06-01
This report surveys the state of the art of automatic detection, tracking, and sensor integration. In the area of detection, various noncoherent integrators such as the moving window integrator, feedback integrator, two-pole filter, binary integrator, and batch processor are discussed. Next, the three techniques for controlling false alarms (adaptive thresholds, nonparametric detectors, and clutter maps) are presented. In the area of tracking, a general outline is given of a track-while-scan system, and then a discussion is presented of the file system, contact-entry logic, coordinate systems, tracking filters, maneuver-following logic, track initiation, track-drop logic, and correlation procedures. Finally, in the area of multisensor integration the problems of colocated-radar integration, multisite-radar integration, radar-IFF integration, and radar-DF bearing strobe integration are treated.
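Two of the noncoherent integrators named above, the moving-window integrator and the M-of-N binary integrator, reduce to short array operations. The window lengths and thresholds in the sketch are illustrative, not values from the survey.

```python
import numpy as np

def moving_window_integrator(amplitudes, window=8, threshold=None):
    """Sum the last `window` scans in each range cell and compare to a threshold.
    amplitudes: (scans, cells) array of per-scan detector outputs."""
    kernel = np.ones(window)
    sums = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="valid"), 0, amplitudes)
    if threshold is None:
        threshold = sums.mean() + 3 * sums.std()   # illustrative CFAR-like setting
    return sums >= threshold

def binary_integrator(hits, m=3, n=5):
    """M-of-N detection: declare a target when at least m of the last n scans
    crossed the first threshold in a cell. hits: boolean (scans, cells)."""
    counts = np.apply_along_axis(
        lambda col: np.convolve(col.astype(int), np.ones(n, dtype=int), mode="valid"),
        0, hits)
    return counts >= m
```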
Dose equivalent neutron dosimeter
Griffith, Richard V.; Hankins, Dale E.; Tomasino, Luigi; Gomaa, Mohamed A. M.
1983-01-01
A neutron dosimeter is disclosed which provides a single measurement indicating the amount of potential biological damage resulting from the neutron exposure of the wearer, for a wide range of neutron energies. The dosimeter includes a detecting sheet of track etch detecting material, such as a carbonate plastic, for detecting higher energy neutrons, and a radiator layer containing conversion material such as ⁶Li and ¹⁰B lying adjacent to the detecting sheet for converting moderate energy neutrons to alpha particles that produce tracks in the adjacent detecting sheet. The density of conversion material in the radiator layer is chosen so that the density of tracks produced in the detecting sheet is proportional to the biological damage done by neutrons, regardless of whether the tracks are produced as the result of moderate energy neutrons striking the radiator layer or as the result of higher energy neutrons striking the sheet of track etch material.
Paglieroni, David W [Pleasanton, CA]; Manay, Siddharth [Livermore, CA]
2011-12-20
A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness α of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter, and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection.
Track-Before-Declare Methods in IR Image Sequences
1992-09-01
processing methods of this type, known as track-before-declare (TBD), and sometimes by the misleading term track-before-detect, have been employed in systems...Electronic Systems, Vol. AES-11, No. 6. November 1975. 8. A. Corbeil, J. DiDomizio, Track-Before-Detect Development and Demonstration Program, Phase
49 CFR 214.337 - On-track safety procedures for lone workers.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-track equipment is not impaired by background noise, lights, precipitation, fog, passing trains, or any... performing routine inspection or minor correction may use individual train detection to establish on-track... worker retains an absolute right to use on-track safety procedures other than individual train detection...
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.
1998-07-01
An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.
Multi-Sensor Information Integration and Automatic Understanding
2008-11-01
also produced a real-time implementation of the tracking and anomalous behavior detection system that runs on real-world data – either using real-time...surveillance and airborne IED detection. 15. SUBJECT TERMS Multi-hypothesis tracking, particle filters, anomalous behavior detection, Bayesian...analyst to support decision making with large data sets. A key feature of the real-time tracking and behavior detection system developed is that the
NASA Astrophysics Data System (ADS)
Hartung, Christine; Spraul, Raphael; Schuchert, Tobias
2017-10-01
Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework on a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections. As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
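The three detector operating points compared in the evaluation (high precision, high recall, best f-score) follow directly from the precision/recall definitions. The sketch below assumes the detector exposes a sweepable threshold with per-threshold TP/FP/FN counts from a validation run; the function names are illustrative.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard precision, recall and F1 from raw detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

def pick_operating_points(counts_by_threshold):
    """counts_by_threshold: {threshold: (tp, fp, fn)} from a validation sweep.
    Returns the thresholds giving (1) highest precision, (2) highest recall,
    and (3) best f-score, i.e. the three detector settings compared above."""
    scored = {t: precision_recall_f1(*c) for t, c in counts_by_threshold.items()}
    best_p = max(scored, key=lambda t: scored[t][0])
    best_r = max(scored, key=lambda t: scored[t][1])
    best_f = max(scored, key=lambda t: scored[t][2])
    return best_p, best_r, best_f
```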
Robust human detection, tracking, and recognition in crowded urban areas
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2014-06-01
In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, the color features are obtained by taking the difference of the R, G, B spectra and converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) color and intensity feature matching for track candidate selection; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection for reducing the false tracking probability; and 4) forward position prediction based on previous moving speed and direction, for continued tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many targets that are tracked. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and to continue tracking the same person from the second camera even though the person moved out of the Field of View (FOV) of the first camera with 'tracking relay'. Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked human for pin-point targeting and for a large-area top view of total human motion activity. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than that of the current state-of-the-art.
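The forward position prediction and adaptive track-gate steps described above amount to a constant-velocity coast plus a gate that widens with target speed and with consecutive missed frames. The gate-growth constants in the sketch are assumptions, not values from the paper.

```python
import numpy as np

def predict_position(track, dt=1.0):
    """Constant-velocity prediction from the last two confirmed positions,
    used to coast a track through frames with missed detections.
    track: list of (x, y) positions with at least two entries."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return x1 + vx * dt, y1 + vy * dt

def gate_size(base_gate, speed, misses, k_speed=0.5, k_miss=1.5):
    """Adaptive gate: grows with target speed and with consecutive misses,
    keeping the false-association risk low for slow, well-tracked targets.
    The scaling constants are illustrative assumptions."""
    return base_gate + k_speed * speed + k_miss * misses

def associate(detections, predicted, gate):
    """Pick the nearest detection inside the gate, or None to keep coasting."""
    if not detections:
        return None
    d = np.hypot(*(np.asarray(detections, float) - np.asarray(predicted, float)).T)
    i = int(np.argmin(d))
    return detections[i] if d[i] <= gate else None
```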
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
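A schematic reading of the EM-like mean-shift update is sketched below: the E-step weights candidate pixels by appearance likelihood and by soft association to nearby detections, and the M-step moves the track to the weighted mean. The Gaussian weighting and bandwidths are illustrative, not the authors' exact formulation.

```python
import numpy as np

def em_mean_shift(position, samples, sample_likelihood, detections,
                  bandwidth=20.0, det_sigma=15.0, iters=10):
    """Schematic EM-like mean-shift step for one tracked person.
    samples: (N, 2) candidate pixel coordinates; sample_likelihood: (N,)
    appearance scores; detections: (M, 2) detector outputs for this frame."""
    pos = np.asarray(position, dtype=float)
    samples = np.asarray(samples, dtype=float)
    likelihood = np.asarray(sample_likelihood, dtype=float)
    detections = np.asarray(detections, dtype=float).reshape(-1, 2)
    for _ in range(iters):
        d2 = np.sum((samples - pos) ** 2, axis=1)
        w = likelihood * np.exp(-0.5 * d2 / bandwidth ** 2)       # kernel * appearance
        if len(detections):
            dd = np.linalg.norm(samples[:, None, :] - detections[None, :, :], axis=2)
            w = w * np.exp(-0.5 * (dd.min(axis=1) / det_sigma) ** 2)  # E-step: association
        if w.sum() == 0:
            break
        new_pos = (samples * w[:, None]).sum(axis=0) / w.sum()    # M-step: relocate
        if np.linalg.norm(new_pos - pos) < 0.5:                   # converged
            return new_pos
        pos = new_pos
    return pos
```

Running such an update object by object, in priority order, mirrors the sequential strategy the abstract describes for crowded scenes.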
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions
Maruthapillai, Vasanthan; Murugappan, Murugappan
2016-01-01
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
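The marker-distance features and the three window statistics fed to the classifiers reduce to a few array operations, sketched below under the assumption of eight (x, y) marker positions per frame and a known neutral baseline.

```python
import numpy as np

def marker_features(marker_xy, face_center, baseline_xy):
    """Per-frame features from eight tracked virtual markers: the distance of
    each marker to the face center, and the change in that distance relative
    to the neutral (baseline) marker positions.
    marker_xy, baseline_xy: (8, 2) arrays; face_center: (2,)."""
    d_now = np.linalg.norm(marker_xy - face_center, axis=1)
    d_base = np.linalg.norm(baseline_xy - face_center, axis=1)
    return d_now, d_now - d_base

def window_statistics(feature_sequence):
    """Mean, variance and root mean square of each feature over a video window,
    the three statistics passed to the KNN / PNN classifiers.
    feature_sequence: (frames, n_features)."""
    x = np.asarray(feature_sequence, dtype=float)
    return x.mean(axis=0), x.var(axis=0), np.sqrt((x ** 2).mean(axis=0))
```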
The neural basis of stereotypic impact on multiple social categorization.
Hehman, Eric; Ingbretsen, Zachary A; Freeman, Jonathan B
2014-11-01
Perceivers extract multiple social dimensions from another's face (e.g., race, emotion), and these dimensions can become linked due to stereotypes (e.g., Black individuals → angry). The current research examined the neural basis of detecting and resolving conflicts between top-down stereotypes and bottom-up visual information in person perception. Participants viewed faces congruent and incongruent with stereotypes, via variations in race and emotion, while neural activity was measured using fMRI. Hand movements en route to race/emotion responses were recorded using mouse-tracking to behaviorally index individual differences in stereotypical associations during categorization. The medial prefrontal cortex (mPFC) and anterior cingulate cortex (ACC) showed stronger activation to faces that violated stereotypical expectancies at the intersection of multiple social categories (i.e., race and emotion). These regions were highly sensitive to the degree of incongruency, exhibiting linearly increasing responses as race and emotion became stereotypically more incongruent. Further, the ACC exhibited greater functional connectivity with the lateral fusiform cortex, a region implicated in face processing, when viewing stereotypically incongruent (relative to congruent) targets. Finally, participants with stronger behavioral tendencies to link race and emotion stereotypically during categorization showed greater dorsolateral prefrontal cortex activation to stereotypically incongruent targets. Together, the findings provide insight into how conflicting stereotypes at the nexus of multiple social dimensions are resolved at the neural level to accurately perceive other people. Copyright © 2014 Elsevier Inc. All rights reserved.
Dynamic Projection Mapping onto Deforming Non-Rigid Surface Using Deformable Dot Cluster Marker.
Narita, Gaku; Watanabe, Yoshihiro; Ishikawa, Masatoshi
2017-03-01
Dynamic projection mapping for moving objects has attracted much attention in recent years. However, conventional approaches have faced some issues, such as the target objects being limited to rigid objects, and the limited moving speed of the targets. In this paper, we focus on dynamic projection mapping onto rapidly deforming non-rigid surfaces with a speed sufficiently high that a human does not perceive any misalignment between the target object and the projected images. In order to achieve such projection mapping, we need a high-speed technique for tracking non-rigid surfaces, which is still a challenging problem in the field of computer vision. We propose the Deformable Dot Cluster Marker (DDCM), a novel fiducial marker for high-speed tracking of non-rigid surfaces using a high-frame-rate camera. The DDCM has three performance advantages. First, it can be detected even when it is strongly deformed. Second, it realizes robust tracking even in the presence of external and self occlusions. Third, it allows millisecond-order computational speed. Using DDCM and a high-speed projector, we realized dynamic projection mapping onto a deformed sheet of paper and a T-shirt with a speed sufficiently high that the projected images appeared to be printed on the objects.
Automatic colonic lesion detection and tracking in endoscopic videos
NASA Astrophysics Data System (ADS)
Li, Wenjing; Gustafsson, Ulf; A-Rahim, Yoursif
2011-03-01
The biology of colorectal cancer offers an opportunity for both early detection and prevention. Compared with other imaging modalities, optical colonoscopy is the procedure of choice for simultaneous detection and removal of colonic polyps. Computer assisted screening makes it possible to assist physicians and potentially improve the accuracy of the diagnostic decision during the exam. This paper presents an unsupervised method to detect and track colonic lesions in endoscopic videos. The aim of the lesion screening and tracking is to facilitate detection of polyps and abnormal mucosa in real time as the physician is performing the procedure. For colonic lesion detection, the conventional marker controlled watershed based segmentation is used to segment the colonic lesions, followed by an adaptive ellipse fitting strategy to further validate the shape. For colonic lesion tracking, a mean shift tracker with background modeling is used to track the target region from the detection phase. The approach has been tested on colonoscopy videos acquired during regular colonoscopic procedures and demonstrated promising results.
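As a rough illustration of the described pipeline (marker-controlled watershed segmentation, ellipse-fit shape validation, then mean-shift tracking of the detected region), an OpenCV sketch could look roughly as follows; the watershed seeding, the lesion label and the aspect-ratio test are assumptions, not the paper's settings.

```python
# Illustrative OpenCV sketch: watershed segmentation, ellipse-fit validation,
# then mean-shift tracking of the validated region.
import cv2
import numpy as np

def detect_lesion(frame_bgr, markers):
    """frame_bgr: 8-bit colour frame; markers: int32 label image seeding the watershed."""
    labels = cv2.watershed(frame_bgr, markers.copy())
    mask = np.uint8(labels == 2) * 255                     # assumed: label 2 marks the lesion region
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in cnts:
        if len(c) >= 5:
            (_, _), (w, h), _ = cv2.fitEllipse(c)          # validate shape by ellipse fit
            if 0.5 < w / max(h, 1e-6) < 2.0:               # roughly elliptical -> accept
                return cv2.boundingRect(c)                 # (x, y, w, h) window for tracking
    return None

def track_lesion(prob_map, window):
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.meanShift(prob_map, window, crit)      # shift window toward the region mode
    return window
```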
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.
Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng
2016-03-26
Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.
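A hedged sketch of the hybrid blob descriptor idea: HOG features concatenated with low-frequency DCT coefficients of the blob patch, classified by a linear SVM. The window size, the number of retained DCT coefficients and the training interface are illustrative assumptions rather than the parameters used in the paper.

```python
# Minimal sketch of an assumed HOG + DCT hybrid descriptor with a linear SVM.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor((32, 64), (16, 16), (8, 8), (8, 8), 9)  # winSize, blockSize, blockStride, cellSize, nbins

def blob_descriptor(blob_gray):
    patch = cv2.resize(blob_gray, (32, 64))                 # 32x64 pedestrian blob (assumed size)
    h = hog.compute(patch).ravel()                          # HOG part of the descriptor
    d = cv2.dct(np.float32(patch) / 255.0)[:8, :8].ravel()  # keep low-frequency DCT coefficients
    return np.concatenate([h, d])

# X = np.array([blob_descriptor(b) for b in training_blobs])
# clf = LinearSVC(C=1.0).fit(X, labels)                     # labels: pedestrian vs. background blob
```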
Countering MANPADS: study of new concepts and applications
NASA Astrophysics Data System (ADS)
Maltese, Dominique; Robineau, Jacques; Audren, Jean-Thierry; Aragones, Julien; Sailliot, Christophe
2006-05-01
Recent ground-to-air Man Portable Air Defense (MANPAD) attacks against aircraft have revealed a new threat to both military and civilian aircraft. Consequently, implementing protection systems (i.e., Directed InfraRed Counter Measures, DIRCM) against IR-guided missiles has become inevitable. In the near future, aircraft will have to possess detection, tracking, targeting and jamming capabilities to face single and multiple MANPAD threats fired in short-range scenarios from various environments (urban sites, landscape...). In this paper, a practical example of a DIRCM system under study at the SAGEM DEFENSE & SECURITY company is presented. The self-protection solution includes built-in and automatic locking-on, tracking, identification and laser jamming capabilities, including defeat assessment. Target designations are provided by a Missile Warning System. Multiple-target scenarios have been considered to design the system architecture. The article deals with current and future threats (IR seekers of different generations...), scenarios and platforms for system definition, and stresses self-protection solutions based on laser jamming. Different strategies, including target identification, multi-band lasers and active imagery, are described. The self-protection system under study at SAGEM DEFENSE & SECURITY is also presented. Finally, results are provided for different MANPAD scenarios, obtained from simulation software; they highlight how the system reacts to incoming IR-guided missiles in short-time scenarios.
Countering MANPADS: study of new concepts and applications: part two
NASA Astrophysics Data System (ADS)
Maltese, Dominique; Vergnolle, Jean-François; Aragones, Julien; Renaudat, Mathieu
2007-04-01
Recent ground-to-air Man Portable Air Defense (MANPAD) attacks against aircraft have revealed a new threat to both military and civilian aircraft. Consequently, implementing protection systems (i.e., Directed Infra Red Counter Measures, DIRCM) against IR-guided missiles has become inevitable. In the near future, aircraft will have to possess detection, tracking, identification, targeting and jamming capabilities to face MANPAD threats. Moreover, multiple-missile attacks are increasingly common scenarios to deal with. In this paper, a practical example of DIRCM systems under study at the SAGEM DEFENSE & SECURITY Company is presented; the article is the continuation of a previous SPIE paper. Self-protection solutions include built-in and automatic locking-on, tracking, identification and laser jamming capabilities, including defeat assessment. Target designations are provided by a Missile Warning System. Target scenarios including multiple threats are considered to design system architectures. The article first recalls the context, current and future threats (IR seekers of different generations...), and scenarios for system definition. It then focuses on potential self-protection systems under study at SAGEM DEFENSE & SECURITY. Different strategies, including target identification, multi-band lasers and active imagery, have previously been studied to design DIRCM system solutions. Results of self-protection scenarios are provided for different MANPAD scenarios to highlight the key problems to solve. Data were obtained from simulation software modeling full DIRCM system architectures in technical and operational scenarios (parametric studies).
An extended Kalman filter for mouse tracking.
Choi, Hongjun; Kim, Mingi; Lee, Onseok
2018-05-19
Animal tracking is an important tool for observing behavior, which is useful in various research areas. Animal specimens can be tracked using dynamic models and observation models that require several types of data. Tracking mice presents several barriers due to their physical characteristics, their unpredictable movement, and cluttered environments. Therefore, we propose a reliable method that uses a detection stage and a tracking stage to successfully track a mouse. The detection stage detects the surface area of the mouse skin, and the tracking stage implements an extended Kalman filter to estimate the state variables of a nonlinear model. Changes in the overall shape of the mouse are tracked using an oval-shaped tracking model that estimates the parameters of the ellipse. An experiment is conducted to demonstrate the performance of the proposed tracking algorithm using six video sequences showing various types of movement, and the ground-truth values for synthetic images are compared to the values generated by the tracking algorithm. A conventional manual tracking method is also applied to compare results across eight experimenters. Furthermore, the effectiveness of the proposed tracking method is demonstrated by applying the tracking algorithm to actual mouse images.
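The following is a generic extended-Kalman-filter sketch of the kind of filter the abstract describes, with a caller-supplied motion model f and measurement model h and their Jacobians; the ellipse state layout shown in the comment is an assumption for illustration, not the authors' model.

```python
# Generic EKF sketch: nonlinear predict/update with supplied models and Jacobians.
import numpy as np

class EKF:
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        self.x = f(self.x)                      # propagate state through motion model
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, h, H):
        y = z - h(self.x)                       # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Assumed state: [cx, cy, vx, vy, a, b, theta]; assumed measurement: fitted
# ellipse parameters [cx, cy, a, b, theta] from the detection stage.
```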
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polese, Luigi Gentile; Brackney, Larry
An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
Close to real-time robust pedestrian detection and tracking
NASA Astrophysics Data System (ADS)
Lipetski, Y.; Loibner, G.; Sidla, O.
2015-03-01
Fully automated video-based pedestrian detection and tracking is a challenging task with many practical and important applications. We present our work aimed at robust and, at the same time, close to real-time tracking of pedestrians. The presented approach is robust to occlusions and lighting conditions and generalizes to arbitrary video data. The core approach is built upon the tracking-by-detections principle. We describe our cascaded HOG detector with successive CNN verification in detail. For the tracking and re-identification task, we performed an extensive analysis of appearance-based features as well as their combinations. The tracker was tested on many hours of video data covering different scenarios; the results are presented and discussed.
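A minimal tracking-by-detections front end in this spirit might pair OpenCV's default HOG people detector with a verification stage; the `verify_cnn` callable and its threshold stand in for the authors' CNN verifier and are assumptions.

```python
# Sketch: HOG detections filtered by a (placeholder) CNN verifier.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame, verify_cnn, thresh=0.5):
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    detections = []
    for (x, y, w, h), score in zip(boxes, np.ravel(weights)):
        crop = frame[y:y + h, x:x + w]
        if verify_cnn(crop) > thresh:           # verify each HOG candidate before tracking
            detections.append((x, y, w, h, float(score)))
    return detections
```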
Kloth, Nadine; Shields, Susannah E; Rhodes, Gillian
2014-01-01
The term "own-race bias" refers to the phenomenon that humans are typically better at recognizing faces from their own than a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing motivation to individuate them. Supporting sociocognitive accounts, a recent study has reported improved recognition for other-race faces when these were categorized as belonging to the participants' in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups, containing both own-race and other-race faces, half of each labeled as in-group and out-group, respectively. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed at extending this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups, containing an equal number of own-university and other-university Asian and European faces. Between participants, faces were grouped either according to race or university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. The race of the study faces significantly affected participants' memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own-university faces. Results are discussed in light of the theoretical accounts of the own-race bias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Yingliang; Housden, R. James; Razavi, Reza
2013-07-15
Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction. Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real-time. Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 ± 0.29, 0.92 ± 0.61, and 0.63 ± 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to 100%, 99.2%, and 96.5% for the CS, ablation, and lasso catheters, respectively. Subjective clinical evaluation by three experienced electrophysiologists showed that the detection and tracking results were clinically acceptable. Conclusions: The proposed detection and tracking methods are automatic and can detect and track CS, ablation, and lasso catheters simultaneously and in real-time. The accuracy of the proposed methods is sub-mm and the methods are robust toward low-dose x-ray fluoroscopic images, which are mainly used during EP procedures to maintain low radiation dose.
The design and performance of a prototype water Cherenkov optical time-projection chamber
NASA Astrophysics Data System (ADS)
Oberla, Eric; Frisch, Henry J.
2016-04-01
A first experimental test of tracking relativistic charged particles by 'drifting' Cherenkov photons in a water-based optical time-projection chamber (OTPC) has been performed at the Fermilab Test Beam Facility. The prototype OTPC detector consists of a 77 cm long, 28 cm diameter, 40 kg cylindrical water mass instrumented with a combination of commercial 5.1 × 5.1cm2 micro-channel plate photo-multipliers (MCP-PMT) and 6.7 × 6.7cm2 mirrors. Five MCP-PMTs are installed in two columns along the OTPC cylinder in a small-angle stereo configuration. A mirror is mounted opposite each MCP-PMT on the inner surface of the detector cylinder, effectively increasing the photo-detection efficiency and providing a time-resolved image of the Cherenkov light on the opposing wall. Each MCP-PMT is coupled to an anode readout consisting of thirty 50 Ω microstrips. A 180-channel data acquisition system digitizes the MCP-PMT signals on one end of the microstrips using the PSEC4 waveform sampling-and-digitizing chip operating at a sampling rate of 10.24 Gigasamples-per-second. The single-ended microstrip readout determines the time and position of a photon arrival at the face of the MCP-PMT by recording both the direct signal and the pulse reflected from the unterminated far end of the strip. The detector was installed on the Fermilab MCenter secondary beam-line behind a steel absorber where the primary flux is multi-GeV muons. Approximately 80 Cherenkov photons are detected for a through-going muon track in a total event duration of 2 ns. By measuring the time-of-arrival and the position of individual photons at the surface of the detector to ≤ 100 ps and a few mm, respectively, we have measured a spatial resolution of 15 mm for each MCP-PMT track segment, and, from linear fits over the entire track length of 40 cm, an angular resolution on the track direction of 60 mrad.
A-Track: Detecting Moving Objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2017-04-01
A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.
Acoustic detection and monitoring for transportation infrastructure security.
DOT National Transportation Integrated Search
2009-09-01
Acoustical methods have been extensively used to locate, identify, and track objects underwater. Some of these applications include detecting and tracking submarines, marine mammal detection and identification, detection of mines and ship wrecks and ...
12. Close up view of construction on the downstream face. ...
12. Close up view of construction on the downstream face. Track at lower center conveyed aggregate from the stream bed to the mixing plant. Photographer unknown, October 15, 1924. Source: Salt River Project. - Mormon Flat Dam, On Salt River, Eastern Maricopa County, east of Phoenix, Phoenix, Maricopa County, AZ
ERIC Educational Resources Information Center
Madera, Juan M.; Hebl, Michelle R.
2012-01-01
Drawing from theory and research on perceived stigma (Pryor, Reeder, Yeadon, & Hesson-McInnis, 2004), attentional processes (Rinck & Becker, 2006), working memory (Baddeley & Hitch, 1974), and regulatory resources (Muraven & Baumeister, 2000), the authors examined discrimination against facially stigmatized applicants and the processes involved.…
Seeing Objects as Faces Enhances Object Detection.
Takahashi, Kohske; Watanabe, Katsumi
2015-10-01
The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.
2006-05-15
alarm performance in a cost-effective manner is the use of track-before-detect strategies, in which multiple sensor detections must occur within the...corresponding to the traditional sensor coverage problem. Also, in the track-before-detect context, reference is made to the field-level functions of...detection and false alarm as successful search and false search, respectively, because the track-before-detect process serves as a searching function
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Eight papers are presented from the 1995 CAUSE conference track on academic computing and library issues faced by managers of information technology at colleges and universities. The papers include: (1) "Where's the Beef?: Implementation of Discipline-Specific Training on Internet Resources" (Priscilla Hancock and others); (2)…
A Pipeline to the Tenure Track
ERIC Educational Resources Information Center
Roach, Ronald
2009-01-01
Despite U.S. higher education facing a wave of retirements by older baby boomer and World War II-era born professors, there remain large pockets in the academic work force, such as life science faculties at research universities and humanities/social science faculties across all of academia, where tenure-track jobs are scarce and the market is…
Target Tracking in Heavy-Tailed Clutter Using Amplitude Information
2009-07-01
to integrate the data before the detection decision is made, as done in so-called Track-Before-Detect (TBD) [5,14]. For very low SNR, when the target...Processes. McGraw-Hill, 2002. [14] M. G. Rutten, N. J. Gordon, and S. Maskell, “Recursive track-before-detect with target amplitude fluctuations,” in IEE
Long-term object tracking combined offline with online learning
NASA Astrophysics Data System (ADS)
Hu, Mengjie; Wei, Zhenzhong; Zhang, Guangjun
2016-04-01
We propose a simple yet effective method for long-term object tracking. Different from the traditional visual tracking method, which mainly depends on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our framework is formulated in a confidence selection framework, which allows our system to recover from drift and partly deal with occlusion. To summarize, our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline detector is trained to get the object appearance information at the category level, which is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three modules: the online tracking module, detection module, and decision module. A pretrained detector is used for maintaining drift of the online tracker, while the online tracker is used for filtering out false positive detections. A confidence selection mechanism is proposed to optimize the object location based on the online tracker and detection. If the target is lost, the pretrained detector is utilized to reinitialize the whole algorithm when the target is relocated. During experiments, we evaluate our method on several challenging video sequences, and it demonstrates huge improvement compared with detection and online tracking only.
Multi-Complementary Model for Long-Term Tracking
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-01-01
In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmarks 2013 (OTB-13) and Object Tracking Benchmarks 2015 (OTB-15) benchmark datasets. With the OTB-13 benchmark datasets, our algorithm is improved by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively, in contrast to another classic LCT (Long-term Correlation Tracking) algorithm. On the OTB-15 benchmark datasets, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm could still achieve good tracking speed. PMID:29425170
A particle filter for multi-target tracking in track before detect context
NASA Astrophysics Data System (ADS)
Amrouche, Naima; Khenchaf, Ali; Berkani, Daoud
2016-10-01
The track-before-detect (TBD) approach can be used to track a single target in a highly noisy radar scene. This is because it makes use of unthresholded observations and incorporates a binary target existence variable into its target state estimation process when implemented as a particle filter (PF). This paper proposes the recursive PF-TBD approach to detect multiple targets at low signal-to-noise ratios (SNRs). The algorithm's successful performance is demonstrated using a simulated two-target example.
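A compact sketch of one particle-filter TBD update step in the spirit of the abstract: each particle carries kinematics plus a binary existence flag and is weighted directly against the unthresholded intensity map. The motion model, birth/death probabilities and the amplitude likelihood are illustrative assumptions.

```python
# Sketch of a single PF-TBD step on an unthresholded intensity frame.
import numpy as np

def pf_tbd_step(particles, exists, weights, frame, rng, p_birth=0.05, p_death=0.05):
    """particles: (N, 4) array [x, y, vx, vy]; exists: (N,) bool; frame: 2-D intensity map."""
    n = len(particles)
    stay = np.where(exists, 1.0 - p_death, p_birth)                         # survival / birth probability
    exists = rng.random(n) < stay                                           # sample the existence variable
    particles[:, :2] += particles[:, 2:] + rng.normal(0.0, 1.0, (n, 2))     # constant-velocity motion + noise
    xi = np.clip(particles[:, 0].astype(int), 0, frame.shape[1] - 1)
    yi = np.clip(particles[:, 1].astype(int), 0, frame.shape[0] - 1)
    lik = np.where(exists, np.exp(frame[yi, xi] / 2.0), 1.0)                # amplitude likelihood if target exists
    weights = weights * lik
    return particles, exists, weights / weights.sum()
```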
Faint Debris Detection by Particle Based Track-Before-Detect Method
NASA Astrophysics Data System (ADS)
Uetsuhara, M.; Ikoma, N.
2014-09-01
This study proposes a particle method to detect faint debris, which is hardly visible in a single frame, from an image sequence based on the concept of track-before-detect (TBD). The most widely used detection method is detect-before-track (DBT), which first detects target signals in each single frame by distinguishing intensity differences between foreground and background and then associates the signals for each target between frames. DBT is capable of tracking bright targets but is limited: it must account for the presence of false signals and has difficulty recovering from false associations. On the other hand, TBD methods track targets without explicitly detecting their signals, then evaluate the goodness of each track to obtain detection results. TBD has an advantage over DBT in detecting weak signals around the background level in a single frame. However, conventional TBD methods for debris detection apply a brute-force search over candidate tracks and then manually select the true one from the candidates. To reduce these significant drawbacks of brute-force search and a not-fully automated process, this study proposes a faint debris detection algorithm based on a particle TBD method consisting of a sequential update of the target state and a heuristic search for the initial state. The state consists of the position, velocity direction and magnitude, and size of the debris in the image at a single frame. The sequential update process is implemented by a particle filter (PF). The PF is an optimal filtering technique that requires an initial distribution of the target state as prior knowledge. An evolutionary algorithm (EA) is used to search for this initial distribution. The EA iteratively applies propagation and likelihood evaluation of particles to the same image sequence, and the resulting set of particles is used as the initial distribution of the PF. This paper describes the algorithm of the proposed faint debris detection method. The algorithm's performance is demonstrated on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which are expected to contain a sufficient number of faint debris images. The results indicate the proposed method is capable of tracking faint debris with moderate computational costs at an operational level.
Managing piezoelectric sensor jitter: kinematic position tracking applications
NASA Astrophysics Data System (ADS)
Khomo, Malome T.
2016-02-01
Piezo-acoustic distance tracking sensors face challenges in reporting true distance readings. Challenges include directional anisotropy losses in transmission power and receiver sensitivity, distance-related signal attenuation, and phase shifts that result in jittery values, some preceding and others succeeding the expected distance readings. There are also signal time losses arising from dead time associated with processor latency, carrier-signal pulse length and voltage rise-time delays in pulse detection. Together these factors cause distance under-reporting and, more critically, make each reported value uncertain, which is unacceptable in distance-critical applications. Piezo-inertial accelerometers have equivalent if not more severe challenges in tri-axial configurations, for instance where a rotational tilt may occur under linear accelerative force. In the absence of tensor-component adaptation to a change of orientation, signal is lost until the next axial sensor detects it. This study focuses on piezo-acoustic transducers UCD1007 and 400SR160 (40 kHz), used in a face-to-face configuration over a 600 mm range. Within that range, 10 successive phase-shift wave fronts were identified, but it took 15 reconstructed wave fronts to uniquely identify a continuous end-to-end jitter-free and slippage-free kinematic data stream from the jittery sensor data. The additional 5 degrees of freedom were consumed by the 5-stage filter applied. The technique has notable combinatorial and projective geometry implications for digital sensor design. The procedure could also be applied to 3-axis accelerometers and adapted into firmware for truly kinematic device driver interfaces, so long as the reporting rates are matched to the user-interface refresh rates. It is shown that acoustic transducer sensors require phase loop locking for kinematic continuity, whereas gravimetric accelerometers demand better measurement-time consistency in sensor values for induced kinematic phase locking.
Text Detection, Tracking and Recognition in Video: A Comprehensive Survey.
Yin, Xu-Cheng; Zuo, Ze-Yu; Tian, Shu; Liu, Cheng-Lin
2016-04-14
Intelligent analysis of video data is currently in wide demand because video is a major source of sensory data in our lives. Text is a prominent and direct source of information in video, while recent surveys of text detection and recognition in imagery [1], [2] focus mainly on text extraction from scene images. Here, this paper presents a comprehensive survey of text detection, tracking and recognition in video with three major contributions. First, a generic framework is proposed for video text extraction that uniformly describes detection, tracking, recognition, and their relations and interactions. Second, within this framework, a variety of methods, systems and evaluation protocols of video text extraction are summarized, compared, and analyzed. Existing text tracking techniques, tracking based detection and recognition techniques are specifically highlighted. Third, related applications, prominent challenges, and future directions for video text extraction (especially from scene videos and web videos) are also thoroughly discussed.
Li, Miao; Li, Jun; Zhou, Yiyu
2015-12-08
The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts-MM-LMB filter and MM-LMB smoother. For the MM-LMB filter, original LMB filter is applied to track-before-detect based on target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with the forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing.
Nonstationary EO/IR Clutter Suppression and Dim Object Tracking
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Brown, A.; Brown, J.
2010-09-01
We develop and evaluate the performance of advanced algorithms which provide significantly improved capabilities for automated detection and tracking of ballistic and flying dim objects in the presence of highly structured intense clutter. Applications include ballistic missile early warning, midcourse tracking, trajectory prediction, and resident space object detection and tracking. The set of algorithms include, in particular, adaptive spatiotemporal clutter estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into geostationary, highly elliptical, or low earth orbit scanning or staring sensor suites, and are based on data-driven processing that adapts to real-world clutter backgrounds, including celestial, earth limb, or terrestrial clutter. In many scenarios of interest, e.g., for highly elliptic and, especially, low earth orbits, the resulting clutter is highly nonstationary, providing a significant challenge for clutter suppression to or below sensor noise levels, which is essential for dim object detection and tracking. We demonstrate the success of the developed algorithms using semi-synthetic and real data. In particular, our algorithms are shown to be capable of detecting and tracking point objects with signal-to-clutter levels down to 1/1000 and signal-to-noise levels down to 1/4.
Large scale tracking algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Inter-observer variation in identifying mammals from their tracks at enclosed track plate stations
William J. Zielinski; Fredrick V. Schlexer
2009-01-01
Enclosed track plate stations are a common method to detect mammalian carnivores. Studies rely on these data to make inferences about geographic range, population status and detectability. Despite their popularity, there has been no effort to document inter-observer variation in identifying the species that leave their tracks. Four previous field crew leaders...
77 FR 64374 - Notification of Petition for Approval; Port Authority Trans-Hudson Product Safety Plan
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-19
... assigned the petition Docket Number FRA-2012-0075. PATH is upgrading some of its track circuits with Digicode microprocessor-based track circuits. The Digicode track circuit is part of Alstom's Smartway Digital Track Circuit product line and will be used by PATH for train detection and broken rail detection...
Automated vehicle for railway track fault detection
NASA Astrophysics Data System (ADS)
Bhushan, M.; Sujay, S.; Tushar, B.; Chitra, P.
2017-11-01
For safety reasons, railroad tracks need to be inspected regularly to detect physical defects or design non-compliances. Such track defects and non-compliances, if not detected within a certain interval of time, may eventually lead to severe consequences such as train derailments. To maintain safety standards, inspection must be performed twice weekly by a human inspector, even though there are hundreds of thousands of miles of railroad track. Such manual inspection has many drawbacks that may result in poor inspection of the track, which in turn can cause future accidents. To avoid such errors and severe accidents, this automated system is designed. Such a concept introduces automation into the railway track inspection process and can help avoid mishaps and severe accidents due to faults in the track.
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids the explicit target detection prior to fusion. In projective particle filter, particles that represent the posterior density (of target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particles states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters that are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated, and used to study the performance of the fusion mechanisms.
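A hedged sketch of the measurement-level fusion idea: 3D particles are projected into each sensor's image plane, a marginal likelihood is read off each image, and the product of the marginal likelihoods reweights the particles. The camera projection functions and the likelihood form are assumptions for illustration.

```python
# Sketch of a projective-particle-filter weight update across multiple sensors.
import numpy as np

def update_weights(particles, weights, images, projections, sigma=2.0):
    """particles: (N, 6) array [x, y, z, vx, vy, vz]; projections: one function per
    sensor mapping world points (N, 3) -> pixel coordinates (N, 2)."""
    joint = np.ones(len(particles))
    for img, project in zip(images, projections):
        uv = project(particles[:, :3])                        # project particles onto this image plane
        u = np.clip(uv[:, 0].astype(int), 0, img.shape[1] - 1)
        v = np.clip(uv[:, 1].astype(int), 0, img.shape[0] - 1)
        joint *= np.exp(img[v, u] / (2 * sigma ** 2))         # marginal (per-sensor) likelihood
    weights = weights * joint                                 # joint likelihood over all sensors
    return weights / weights.sum()
```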
Location detection and tracking of moving targets by a 2D IR-UWB radar system.
Nguyen, Van-Han; Pyun, Jae-Young
2015-03-19
In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal, because of signal propagation limitations in the indoor environment. In recent years, the use of ultra-wide band (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments, because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps, such as clutter reduction, target detection, target localization and tracking. In this paper, we introduce a new combination consisting of our proposed signal-processing procedures. In the clutter-reduction step, a filtering method that uses a Kalman filter (KF) is proposed. Then, in the target detection step, a modification of the conventional CLEAN algorithm which is used to estimate the impulse response from observation region is applied for the advanced elimination of false alarms. Then, the output is fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked by using unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
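A simplified sketch of the Kalman-filter clutter-reduction step: each range bin of the raw IR-UWB scan keeps a scalar Kalman estimate of the slowly varying clutter, which is subtracted to expose moving targets. The noise settings are assumptions, and the subsequent CLEAN-style detection and unscented-Kalman-filter tracking stages are omitted here.

```python
# Per-range-bin Kalman clutter estimate for IR-UWB scans (illustrative values).
import numpy as np

class ClutterKF:
    def __init__(self, n_bins, q=1e-4, r=1e-1):
        self.c = np.zeros(n_bins)       # clutter estimate per range bin
        self.p = np.ones(n_bins)        # estimate variance per bin
        self.q, self.r = q, r           # process / measurement noise

    def step(self, scan):
        self.p += self.q                        # predict (clutter drifts slowly)
        k = self.p / (self.p + self.r)          # Kalman gain
        self.c += k * (scan - self.c)           # update clutter estimate
        self.p *= (1.0 - k)
        return scan - self.c                    # clutter-reduced scan

# residual = ClutterKF(n_bins=len(scan)).step(scan)  # peaks in |residual| -> candidate targets
```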
A Method of Face Detection with Bayesian Probability
NASA Astrophysics Data System (ADS)
Sarker, Goutam
2010-10-01
The objective of face detection is to identify all images that contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an 'Appearance Based Method', which relies on learning facial and non-facial features from image examples. This, in turn, is based on a statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and the numbers of false positive and false negative detections are correspondingly low.
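A minimal sketch of a Bayesian conditional classification rule of this kind: Gaussian class-conditional densities fitted to face and non-face feature examples, with a window labelled a face when its posterior exceeds that of the non-face class. The feature extraction and the priors are assumptions for illustration.

```python
# Sketch: Gaussian class-conditional densities and a Bayes decision rule.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class(X):
    """X: (N, D) feature vectors of one class (faces or non-faces)."""
    return multivariate_normal(mean=X.mean(axis=0), cov=np.cov(X, rowvar=False))

def is_face(x, face_model, nonface_model, p_face=0.5):
    post_face = face_model.pdf(x) * p_face            # unnormalised posterior, face class
    post_nonface = nonface_model.pdf(x) * (1 - p_face)
    return post_face > post_nonface                   # Bayes decision rule

# face_model    = fit_class(face_features)            # features from face examples
# nonface_model = fit_class(nonface_features)         # features from counter-examples
```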
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nekoogar, F; Dowla, F
An IAEA Technical Meeting on Techniques for IAEA Verification of Enrichment Activities identified 'smart tags' as a technology that should be assessed for tracking and locating UF6 cylinders. Although there is a vast commercial industry working on RFID systems, the vulnerabilities of commercial products are only beginning to emerge. Most commercial off-the-shelf (COTS) RFID systems operate in very narrow frequency bands, making them vulnerable to detection, jamming and tampering and also presenting difficulties when used around metals (i.e. UF6 cylinders). Commercial passive RFID tags have short range, while active RFID tags that provide long ranges have limited lifetimes. There are also some concerns with the introduction of strong (narrowband) radio frequency signals around radioactive and nuclear materials. Considering the shortcomings of commercial RFID systems, in their current form, they do not offer a promising solution for continuous monitoring and tracking of UF6 cylinders. In this paper, we identify the key challenges faced by commercial RFID systems for monitoring UF6 cylinders, and introduce an ultra-wideband approach for tag/reader communications that addresses most of the identified challenges for IAEA safeguards applications.
Using Innate Visual Biases to Guide Face Learning in Natural Scenes: A Computational Investigation
ERIC Educational Resources Information Center
Balas, Benjamin
2010-01-01
Newborn infants appear to possess an innate bias that guides preferential orienting to and tracking of human faces. There is, however, no clear agreement as to the underlying mechanism supporting such a preference. In particular, two competing theories (known as the "structural" and "sensory" hypotheses) conjecture fundamentally different biasing…
ERIC Educational Resources Information Center
Bodily, Robert; Verbert, Katrien
2017-01-01
This article is a comprehensive literature review of student-facing learning analytics reporting systems that track learning analytics data and report it directly to students. This literature review builds on four previously conducted literature reviews in similar domains. Out of the 945 articles retrieved from databases and journals, 93 articles…
Delivery of Hardware for Syracuse University Faculty Loaner Program.
ERIC Educational Resources Information Center
Jares, Terry
This paper describes the Faculty Assistance and Computing Education Services (FACES) loaner program at Syracuse University and the method used by FACES staff to deliver and keep track of hardware, software, and documentation. The roles of the various people involved in the program are briefly discussed, i.e., the administrator, who handles the…
Yang, Fan; Paindavoine, M
2003-01-01
This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare the performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for an image size of 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
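A compact numpy sketch of an RBF classifier of the kind the system builds on: Gaussian hidden units centred on prototype patches with a linear read-out trained by least squares. The centres, width and output coding are illustrative assumptions, not the authors' trained network.

```python
# Minimal RBF network: Gaussian hidden layer + linear least-squares read-out.
import numpy as np

class RBFNet:
    def __init__(self, centers, sigma=1.0):
        self.centers, self.sigma = centers, sigma   # centers: (K, D) prototype feature vectors
        self.W = None

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))  # (N, K) hidden-layer activations

    def fit(self, X, Y):
        self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.W                # one output per identity (assumed one-hot coding)

# net = RBFNet(centers=prototype_patches).fit(train_patches, one_hot_labels)
```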
Measuring Contours of Coal-Seam Cuts
NASA Technical Reports Server (NTRS)
1983-01-01
Angle transducers measure angle between track sections as longwall shearer proceeds along coal face. Distance transducer functions in conjunction with angle transducers to obtain relative angles at known positions. When cut is complete, accumulated data are stored on cassette tape, and track profile is computed and displayed. Micro-processor-based instrument integrates small changes in angle and distance.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Six papers and two abstracts of papers are presented from the 1995 CAUSE conference track on user services issues faced by managers of information technology at colleges and universities. The papers include: (1) "Academic Computing Services: MORE than a Utility" (Scott Bierman and Cathy Smith), which focuses on Carleton College's efforts…
Fast Track, Bush Track: Late Career Female Rural School Leaders Taking the Slow Road
ERIC Educational Resources Information Center
Miller, Judith; Graham, Lorraine; Al-Awiwe, Azhar
2014-01-01
Previous research related to this study explored early career female leaders' experiences in rural school settings, and probed the personal and professional challenges they faced and their motivations to accept formal and informal leadership roles ahead of the usual timeframes (e.g., Graham, Miller & Paterson, 2009). This study set out to…
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Eight papers are presented from the 1995 CAUSE conference track on client/server issues faced by managers of information technology at colleges and universities. The papers include: (1) "The Realities of Client/Server Development and Implementation" (Mary Ann Carr and Alan Hartwig), which examines Carnegie Mellon University's transition…
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Eight papers are presented from the 1995 CAUSE conference track on strategic planning issues faced by managers of information technology at colleges and universities. The papers include: (1) "Can Small Colleges Afford To Be Technology Leaders? Can They Afford Not To Be? (Martin Ringle and David Smallen); (2) "Strategic Planning Across…
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Seven papers and one abstract of a paper are presented from the 1995 CAUSE conference track on policies and standards issues faced by managers of information technology at colleges and universities. The papers include: (1) "University/College Information System Structures and Policies: Do They Make a Difference? An Initial Assessment"…
ERIC Educational Resources Information Center
Goodsett, Mandi; Walsh, Andrew
2015-01-01
Increasingly, new librarians graduate to face a world of changing technology and new ways of interacting with information. The anxiety of this shifting environment is compounded for tenure-track librarians who must also meet scholarship and instruction requirements that may be unfamiliar to them. One way that librarians can navigate the transition…
In vivo MRI cell tracking using perfluorocarbon probes and fluorine-19 detection
Ahrens, Eric T.; Zhong, Jia
2013-01-01
This article is a brief survey of preclinical in vivo cell tracking methods and applications using perfluorocarbon (PFC) probes and fluorine-19 (19F) MRI detection. Detection of the 19F signal offers high cell specificity and quantification abilities in spin-density weighted MR images. We discuss the compositions of matter, methods, and applications of PFC-based cell tracking using ex vivo and in situ PFC labeling in preclinical studies of inflammation and cellular therapeutics. We will also address potential applicability of 19F cell tracking to clinical trials. PMID:23606473
Space-based IR tracking bias removal using background star observations
NASA Astrophysics Data System (ADS)
Clemons, T. M., III; Chang, K. C.
2009-05-01
This paper provides the results of a proposed methodology for removing sensor bias from a space-based infrared (IR) tracking system through the use of stars detected in the background field of the tracking sensor. The tracking system consists of two satellites flying in a lead-follower formation tracking a ballistic target. Each satellite is equipped with a narrow-view IR sensor that provides azimuth and elevation to the target. The tracking problem is made more difficult due to a constant, non-varying or slowly varying bias error present in each sensor's line of sight measurements. As known stars are detected during the target tracking process, the instantaneous sensor pointing error can be calculated as the difference between star detection reading and the known position of the star. The system then utilizes a separate bias filter to estimate the bias value based on these detections and correct the target line of sight measurements to improve the target state vector. The target state vector is estimated through a Linearized Kalman Filter (LKF) for the highly non-linear problem of tracking a ballistic missile. Scenarios are created using Satellite Toolkit(C) for trajectories with associated sensor observations. Mean Square Error results are given for tracking during the period when the target is in view of the satellite IR sensors. The results of this research provide a potential solution to bias correction while simultaneously tracking a target.
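As a rough illustration of the bias-correction idea in the abstract above (not the paper's bias filter or LKF), the sketch below treats the bias as constant, estimates it as the mean difference between measured and catalog star directions, and subtracts it from a target line-of-sight measurement; all names and values are hypothetical, in Python/NumPy.

    import numpy as np

    def estimate_bias(measured_stars, catalog_stars):
        """Estimate a constant (azimuth, elevation) sensor bias from star sightings.

        measured_stars, catalog_stars: (N, 2) arrays of [azimuth, elevation] in radians.
        Returns the mean pointing error, used here as the bias estimate.
        """
        residuals = np.asarray(measured_stars) - np.asarray(catalog_stars)
        return residuals.mean(axis=0)

    def correct_measurement(target_az_el, bias):
        """Remove the estimated bias from a target line-of-sight measurement."""
        return np.asarray(target_az_el) - bias

    # Example: three star sightings sharing a 1 mrad bias in azimuth (made-up numbers).
    catalog = np.array([[0.10, 0.50], [0.30, 0.45], [0.55, 0.60]])
    measured = catalog + np.array([0.001, 0.0])
    bias = estimate_bias(measured, catalog)
    print(correct_measurement([0.201, 0.400], bias))   # ~[0.200, 0.400]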
Improved astigmatic focus error detection method
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.
1992-01-01
All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.
Novel face-detection method under various environments
NASA Astrophysics Data System (ADS)
Jing, Min-Quan; Chen, Ling-Hwei
2009-06-01
We propose a method to detect a face with different poses under various environments. On the basis of skin color information, skin regions are first extracted from an input image. Next, the shoulder part is cut out by using shape information and the head part is then identified as a face candidate. For a face candidate, a set of geometric features is applied to determine if it is a profile face. If not, then a set of eyelike rectangles extracted from the face candidate and the lighting distribution are used to determine if the face candidate is a nonprofile face. Experimental results show that the proposed method is robust under a wide range of lighting conditions, different poses, and races. The detection rate for the HHI face database is 93.68%. For the Champion face database, the detection rate is 95.15%.
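The abstract's first step, extracting skin regions from color information, can be sketched as follows, assuming OpenCV is available; the YCrCb bounds are illustrative defaults rather than the authors' values, and the subsequent shoulder-cut and eye-rectangle tests are not reproduced.

    import cv2
    import numpy as np

    def skin_regions(bgr_image):
        """Return a binary mask of candidate skin pixels using YCrCb thresholds.

        The Cr/Cb bounds below are illustrative defaults, not the paper's values.
        """
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)
        upper = np.array([255, 173, 127], dtype=np.uint8)
        mask = cv2.inRange(ycrcb, lower, upper)
        # Clean up small speckles before looking for face candidates.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return mask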
Search Radar Track-Before-Detect Using the Hough Transform.
1995-03-01
...improved target detection scheme, applicable to search radars, using the Hough transform image processing technique. The system concept involves a track-before-detect processing method which allows previous data to help in target detection. The technique provides many advantages compared to...
Sensor Management for Fighter Applications
2006-06-01
has consistently shown that by directly estimating the probability density of a target state using a track-before-detect scheme, weak and densely... track-before-detect nonlinear filter was constructed to estimate the joint density of all state variables. A simulation that emulates estimator...targets in clutter and noise from sensed kinematic and identity data. Among the most capable is track-before-detect (TBD), which delivers
49 CFR 214.337 - On-track safety procedures for lone workers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD WORKPLACE SAFETY Roadway Worker Protection... performing routine inspection or minor correction may use individual train detection to establish on-track... worker retains an absolute right to use on-track safety procedures other than individual train detection...
49 CFR 214.337 - On-track safety procedures for lone workers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD WORKPLACE SAFETY Roadway Worker Protection... performing routine inspection or minor correction may use individual train detection to establish on-track... worker retains an absolute right to use on-track safety procedures other than individual train detection...
49 CFR 214.337 - On-track safety procedures for lone workers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD WORKPLACE SAFETY Roadway Worker Protection... performing routine inspection or minor correction may use individual train detection to establish on-track... worker retains an absolute right to use on-track safety procedures other than individual train detection...
49 CFR 214.337 - On-track safety procedures for lone workers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD WORKPLACE SAFETY Roadway Worker Protection... performing routine inspection or minor correction may use individual train detection to establish on-track... worker retains an absolute right to use on-track safety procedures other than individual train detection...
NASA Astrophysics Data System (ADS)
Pace, Paul W.; Sutherland, John
2001-10-01
This project is aimed at analyzing EO/IR images to provide automatic target detection/recognition/identification (ATR/D/I) of militarily relevant land targets. An increase in performance was accomplished using a biomimetic intelligence system functioning on low-cost, commercially available processing chips. Biomimetic intelligence has demonstrated advanced capabilities in the areas of hand-printed character recognition, real-time detection/identification of multiple faces in full 3D perspectives in cluttered environments, advanced capabilities in classification of ground-based military vehicles from SAR, and real-time ATR/D/I of ground-based military vehicles from EO/IR/HRR data in cluttered environments. The investigation applied these tools to real data sets and examined parameters such as the minimum resolution for target recognition, the effect of target size, rotation, line-of-sight changes, contrast, partial obscuring, background clutter, etc. The results demonstrated a real-time ATR/D/I capability against a subset of militarily relevant land targets operating in a realistic scenario. Typical results on the initial EO/IR data indicate probabilities of correct classification of resolved targets to be greater than 95 percent.
Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu
2016-06-23
Fish tracking is an important step for video-based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from a video image sequence is a highly challenging problem. Current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. To better overcome these problems, we propose a multiple-fish tracking method based on fish head detection. The shape and gray-scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, a global optimization method can be applied to associate the targets between consecutive frames. Results show that our method can accurately detect the position and direction of the fish head, and has good tracking performance for dozens of fish. The proposed method can successfully obtain the motion trajectories for dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior.
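A minimal sketch of the association step described above: a cost matrix combining head-position distance and heading difference, solved here with the Hungarian algorithm as a stand-in for the paper's unspecified global optimization; weights and values are hypothetical.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def association_cost(prev, curr, w_pos=1.0, w_dir=10.0):
        """Cost of matching previous fish-head states to current detections.

        prev, curr: (N, 3) / (M, 3) arrays of [x, y, heading_radians].
        The weights are illustrative, not taken from the paper.
        """
        d_pos = np.linalg.norm(prev[:, None, :2] - curr[None, :, :2], axis=2)
        d_ang = np.abs(np.angle(np.exp(1j * (prev[:, None, 2] - curr[None, :, 2]))))
        return w_pos * d_pos + w_dir * d_ang

    prev = np.array([[10.0, 10.0, 0.0], [50.0, 40.0, 1.5]])
    curr = np.array([[52.0, 41.0, 1.4], [12.0, 11.0, 0.1]])
    rows, cols = linear_sum_assignment(association_cost(prev, curr))
    print(list(zip(rows, cols)))   # [(0, 1), (1, 0)]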
NASA Astrophysics Data System (ADS)
Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.
2007-02-01
Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.
Recognizing Action Units for Facial Expression Analysis
Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.
2010-01-01
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.; Mansuripur, M.
1992-01-01
A commonly used tracking method on pre-grooved magneto-optical (MO) media is the push-pull technique, and the astigmatic method is a popular focus-error detection approach. These two methods are analyzed using DIFFRACT, a general-purpose scalar diffraction modeling program, to observe the effects on the error signals due to focusing lens misalignment, Seidel aberrations, and optical crosstalk (feedthrough) between the focusing and tracking servos. Using the results of the astigmatic/push-pull system as a basis for comparison, a novel focus/track-error detection technique that utilizes a ring toric lens is evaluated as well as the obscuration method (focus error detection only).
Textual and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition
NASA Astrophysics Data System (ADS)
Khayat, Omid; Afarideh, Hossein
2013-04-01
Track counting algorithms, as one of the fundamental principles of nuclear science, have been emphasized in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Commonly, track counting systems comprise a hardware system for the task of imaging and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textual and shape-based features and a neuro-fuzzy classifier is proposed. Features are defined so as to discern the tracks from the background and small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. Features and the classifier are finally validated via 100 alpha track images and 40 training samples. It is shown that principal textual and shape-based features concomitantly yield a high rate of track detection compared with single-feature-based methods.
Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Smirnova, Z. N.
2015-05-01
Human emotion identification from image sequences is in high demand nowadays. The range of possible applications can vary from an automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between building space and residents. The highly perceptual nature of human emotions leads to the complexity of their classification and identification. The main question arises from the subjective quality of emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions were developed in musical psychology. This work is focused on identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical background or mood-dependent radio.
Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M
2017-06-01
This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied on each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT and additional fiber tracts and processing time. Two sets of regions-of-interests (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking lasted substantially longer than deterministic. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The software processes recorded thermal video and detects the flight tracks of birds and bats that passed through the camera's field of view. The output is a set of images that show complete flight tracks for any detections, with the direction of travel indicated and the thermal image of the animal delineated. A report of the descriptive features of each detected track is also output in the form of a comma-separated value text file.
Activity Tracking for Pilot Error Detection from Flight Data
NASA Technical Reports Server (NTRS)
Callantine, Todd J.; Ashford, Rose (Technical Monitor)
2002-01-01
This report presents an application of activity tracking for pilot error detection from flight data, and describes issues surrounding such an application. It first describes the Crew Activity Tracking System (CATS), in-flight data collected from the NASA Langley Boeing 757 Airborne Research Integrated Experiment System aircraft, and a model of B757 flight crew activities. It then presents an example of CATS detecting actual in-flight crew errors.
B-spline based image tracking by detection
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman
2016-05-01
Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking is implemented. The novel method models the background and foreground using the B-Spline method followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
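For readers unfamiliar with the background-subtraction step mentioned above, a minimal median-background sketch follows; the paper's B-spline modeling of background and foreground is not reproduced, and the threshold is an arbitrary placeholder.

    import numpy as np

    def detect_moving_pixels(frames, threshold=25):
        """Very small background-subtraction sketch.

        frames: iterable of equal-sized grayscale frames (2-D arrays).
        Uses a median background model; the paper instead models background
        and foreground with B-splines, which is not reproduced here.
        """
        stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames])
        background = np.median(stack, axis=0)
        masks = [np.abs(f - background) > threshold for f in stack]
        return background, masks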
A-Track: A new approach for detection of moving objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2016-10-01
We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting the moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
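A toy illustration of detecting changes between sequential FITS frames, assuming astropy and already-aligned exposures; this is not the MILD line-detection algorithm used by A-Track, only the general difference-image idea.

    import numpy as np
    from astropy.io import fits

    def candidate_movers(fits_paths, sigma=5.0):
        """Flag pixels that change between aligned exposures.

        This is only a toy difference-image step to illustrate the idea of
        detecting moving objects in sequential FITS frames; it is not the
        MILD line-detection algorithm used by A-Track.
        """
        frames = [fits.getdata(p).astype(np.float64) for p in fits_paths]
        masks = []
        for previous, current in zip(frames, frames[1:]):
            diff = current - previous
            masks.append(diff > diff.mean() + sigma * diff.std())
        return masks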
Detection of emotional faces: salient physical features guide effective visual search.
Calvo, Manuel G; Nummenmaa, Lauri
2008-08-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid
Byambasuren, Bat-erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar
2016-01-01
Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results. PMID:26907274
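The parabolic-fitting step can be sketched with an ordinary least-squares polynomial fit; the sample points and the choice of numpy.polyfit are illustrative, and the paper's circle-fitting alternative would be handled analogously.

    import numpy as np

    def fit_power_line(x, y, degree=2):
        """Fit a parabola to sampled power-line points and return a predictor.

        x, y: 1-D arrays of horizontal positions and measured line heights.
        """
        coeffs = np.polyfit(x, y, degree)
        return np.poly1d(coeffs)

    x = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
    y = np.array([10.0, 8.2, 7.6, 8.1, 10.1])     # sagging line, illustrative values
    line = fit_power_line(x, y)
    print(line(12.5))   # predicted height where the robot will measure next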
Discriminative correlation filter tracking with occlusion detection
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Chen, Zhong; Yu, XiPeng; Zhang, Ting; He, Jing
2018-03-01
Aiming at the problem that correlation filter-based tracking algorithms cannot track a severely occluded target, a target re-detection mechanism is proposed. First, based on ECO, we propose a multi-peak detection model and use the response value to distinguish occlusion from deformation during target tracking, which improves the success rate of tracking. We then add a confidence model to the update mechanism to effectively prevent the model drift caused by similar targets or background during the tracking process. Finally, a re-detection mechanism for the target is added, and relocation is performed after the target is lost, which increases the accuracy of target positioning. The experimental results demonstrate that the proposed tracker performs favorably against state-of-the-art methods in terms of robustness and accuracy.
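As a hedged illustration of judging tracker confidence from the correlation response (related to, but simpler than, the paper's multi-peak model), the sketch below computes a peak-to-sidelobe ratio and counts strong secondary peaks; all thresholds and parameters are made up.

    import numpy as np
    from scipy.ndimage import label

    def response_confidence(response, peak_fraction=0.5, exclude=5):
        """Simple occlusion cues from a correlation-filter response map.

        Returns the peak-to-sidelobe ratio and the number of strong local peaks.
        A low PSR or several comparable peaks suggests occlusion or a similar
        distractor; the thresholds would have to be tuned and are not the paper's.
        """
        response = np.asarray(response, dtype=np.float64)
        peak = response.max()
        py, px = np.unravel_index(response.argmax(), response.shape)

        sidelobe = response.copy()
        sidelobe[max(0, py - exclude):py + exclude + 1,
                 max(0, px - exclude):px + exclude + 1] = np.nan
        psr = (peak - np.nanmean(sidelobe)) / (np.nanstd(sidelobe) + 1e-12)

        strong = response > peak_fraction * peak
        n_peaks = label(strong)[0].max()   # connected regions of strong response
        return psr, n_peaks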
Chen, Wenfeng; Liu, Chang Hong; Nakabayashi, Kazuyo
2012-01-01
Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of the two locations. However, it is not known whether this spontaneous appraisal for facial beauty also modulates attention in change detection among multiple locations, where a slower, and more controlled search process is simultaneously affected by the magnitude of a change and the facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal for facial beauty affects the detection of identity change among multiple faces. Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it is difficult to detect a change if the new face is similar to the old. The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus that has well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.
Can Children with Autism Spectrum Disorders "Hear" a Speaking Face?
ERIC Educational Resources Information Center
Irwin, Julia R.; Tornatore, Lauren A.; Brancazio, Lawrence; Whalen, D. H.
2011-01-01
This study used eye-tracking methodology to assess audiovisual speech perception in 26 children ranging in age from 5 to 15 years, half with autism spectrum disorders (ASD) and half with typical development. Given the characteristic reduction in gaze to the faces of others in children with ASD, it was hypothesized that they would show reduced…
Seeing faces is necessary for face-domain formation.
Arcaro, Michael J; Schade, Peter F; Vincent, Justin L; Ponce, Carlos R; Livingstone, Margaret S
2017-10-01
Here we report that monkeys raised without exposure to faces did not develop face domains, but did develop domains for other categories and did show normal retinotopic organization, indicating that early face deprivation leads to a highly selective cortical processing deficit. Therefore, experience must be necessary for the formation (or maintenance) of face domains. Gaze tracking revealed that control monkeys looked preferentially at faces, even at ages prior to the emergence of face domains, but face-deprived monkeys did not, indicating that face looking is not innate. A retinotopic organization is present throughout the visual system at birth, so selective early viewing behavior could bias category-specific visual responses toward particular retinotopic representations, thereby leading to domain formation in stereotyped locations in inferotemporal cortex, without requiring category-specific templates or biases. Thus, we propose that environmental importance influences viewing behavior, viewing behavior drives neuronal activity, and neuronal activity sculpts domain formation.
Seeing faces is necessary for face-patch formation
Arcaro, Michael J.; Schade, Peter F.; Vincent, Justin L.; Ponce, Carlos R.; Livingstone, Margaret S.
2017-01-01
Here we report that monkeys raised without exposure to faces did not develop face patches, but did develop domains for other categories, and did show normal retinotopic organization, indicating that early face deprivation leads to a highly selective cortical processing deficit. Therefore experience must be necessary for the formation, or maintenance, of face domains. Gaze tracking revealed that control monkeys looked preferentially at faces, even at ages prior to the emergence of face patches, but face-deprived monkeys did not, indicating that face looking is not innate. A retinotopic organization is present throughout the visual system at birth, so selective early viewing behavior could bias category-specific visual responses towards particular retinotopic representations, thereby leading to domain formation in stereotyped locations in IT, without requiring category-specific templates or biases. Thus we propose that environmental importance influences viewing behavior, viewing behavior drives neuronal activity, and neuronal activity sculpts domain formation. PMID:28869581
Duque, Almudena; Vázquez, Carmelo
2015-03-01
According to cognitive models, attentional biases in depression play key roles in the onset and subsequent maintenance of the disorder. The present study examines the processing of emotional facial expressions (happy, angry, and sad) in depressed and non-depressed adults. Sixteen unmedicated patients with Major Depressive Disorder (MDD) and 34 never-depressed controls (ND) completed an eye-tracking task to assess different components of visual attention (orienting attention and maintenance of attention) in the processing of emotional faces. Compared to ND, participants with MDD showed a negative attentional bias in attentional maintenance indices (i.e. first fixation duration and total fixation time) for sad faces. This attentional bias was positively associated with the severity of depressive symptoms. Furthermore, the MDD group spent a marginally less amount of time viewing happy faces compared with the ND group. No differences were found between the groups with respect to angry faces and orienting attention indices. The current study is limited by its cross-sectional design. These results support the notion that attentional biases in depression are specific to depression-related information and that they operate in later stages in the deployment of attention. Copyright © 2014 Elsevier Ltd. All rights reserved.
A&M. Outdoor turntable. Aerial view of trackage as of 1954. ...
A&M. Outdoor turntable. Aerial view of trackage as of 1954. Camera faces northeast along line of track heading for the IET. Upper set of east/west tracks head for the hot shop; the other, for the cold shop. Date: November 24, 1954. INEEL negative no. 13203 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Probabilistic track coverage in cooperative sensor networks.
Ferrari, Silvia; Zhang, Guoxian; Wettergren, Thomas A
2010-12-01
The quality of service of a network performing cooperative track detection is represented by the probability of obtaining multiple elementary detections over time along a target track. Recently, two different lines of research, namely, distributed-search theory and geometric transversals, have been used in the literature for deriving the probability of track detection as a function of random and deterministic sensors' positions, respectively. In this paper, we prove that these two approaches are equivalent under the same problem formulation. Also, we present a new performance function that is derived by extending the geometric-transversal approach to the case of random sensors' positions using Poisson flats. As a result, a unified approach for addressing track detection in both deterministic and probabilistic sensor networks is obtained. The new performance function is validated through numerical simulations and is shown to bring about considerable computational savings for both deterministic and probabilistic sensor networks.
The IBM HeadTracking Pointer: improvements in vision-based pointer control.
Kjeldsen, Rick
2008-07-01
Vision-based head trackers have been around for some years and are even beginning to be commercialized, but problems remain with respect to usability. Users without the ability to use traditional pointing devices--the intended audience of such systems--have no alternative if the automatic bootstrapping process fails. There is room for improvement in face tracking, and the pointer movement dynamics do not support accurate and efficient pointing. This paper describes the IBM HeadTracking Pointer, a system which attempts to directly address some of these issues. Head gestures are used to provide the end user a greater level of autonomous control over the system. A novel face-tracking algorithm reduces drift under variable lighting conditions, allowing the use of absolute, rather than relative, pointer positioning. Most importantly, the pointer dynamics have been designed to take into account the constraints of head-based pointing, with a non-linear gain which allows stability in fine pointer movement, high speed on long transitions and adjustability to support users with different movement dynamics. User studies have identified some difficulties with training the system and some characteristics of the pointer motion that take time to get used to, but also good user feedback and very promising performance results.
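A possible shape for the non-linear gain described above, mapping head offset to pointer speed with a dead zone and faster-than-linear growth; all constants are invented tuning values, not those of the IBM system.

    def pointer_velocity(head_offset_deg, dead_zone=1.0, gain=8.0, power=1.6, v_max=600.0):
        """Map a head-pointing offset (degrees from neutral) to pointer speed (px/s).

        Near-zero speed inside a small dead zone for stability in fine pointing,
        faster-than-linear growth for large offsets, capped at v_max.
        """
        magnitude = abs(head_offset_deg)
        if magnitude <= dead_zone:
            return 0.0
        speed = min(gain * (magnitude - dead_zone) ** power, v_max)
        return speed if head_offset_deg >= 0 else -speed

    for offset in (0.5, 2.0, 5.0, 15.0):
        print(offset, round(pointer_velocity(offset), 1))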
NASA Astrophysics Data System (ADS)
Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang
2018-01-01
Detection and tracking of objects in the side-near-field has attracted much attention in the development of advanced driver assistance systems. This paper presents a cost-effective approach to track moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two types of tracking algorithms for the sensor array, an Extended Kalman filter (EKF) and an Unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to have two types of firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both the EKF and UKF gave more precise tracking positions and smaller RMSE (root mean square error) than a traditional triangular positioning method. This effectiveness also encourages the application of cost-effective ultrasonic sensors for near-field environment perception in autonomous driving systems.
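A minimal EKF predict/update sketch for a constant-velocity target observed through a single ultrasonic range measurement, assuming a known sensor position; the paper's empirical sensor model, sensor-array firing logic and UKF variant are omitted, and all noise values are placeholders.

    import numpy as np

    def ekf_step(x, P, z, sensor_pos, dt=0.05, q=0.5, r=0.02):
        """One predict/update cycle of an EKF over the state [px, py, vx, vy],
        driven by a single ultrasonic range measurement z (metres) taken by a
        sensor at sensor_pos."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        Q = q * np.diag([dt**2, dt**2, dt, dt])
        R = np.array([[r]])

        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q

        # Non-linear range measurement and its Jacobian.
        dx, dy = x[0] - sensor_pos[0], x[1] - sensor_pos[1]
        rng = np.hypot(dx, dy)
        H = np.array([[dx / rng, dy / rng, 0.0, 0.0]])

        # Update.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ np.array([z - rng])).ravel()
        P = (np.eye(4) - K @ H) @ P
        return x, P

    x0 = np.array([1.0, 0.5, 0.0, 0.0])      # initial guess [m, m, m/s, m/s]
    x1, P1 = ekf_step(x0, np.eye(4), z=1.15, sensor_pos=(0.0, 0.0))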
A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)
2010-01-01
processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218...2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using
Object Acquisition and Tracking for Space-Based Surveillance
1991-11-27
on multiple image frames, and, accordingly, requires a smaller signal-to-noise ratio. It is sometimes referred to as track-before-detect, and can...smaller sensor optics. Both the traditional and track-before-detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
Automation and quality assurance of the production cycle
NASA Astrophysics Data System (ADS)
Hajdu, L.; Didenko, L.; Lauret, J.
2010-04-01
Processing datasets on the order of tens of terabytes is an onerous task, faced by production coordinators everywhere. Users solicit data productions and, especially for simulation data, the vast number of parameters (and sometimes incomplete requests) points to the need to track, control and archive all requests made, so that the production team can handle them in a coordinated way. With the advent of grid computing the parallel processing power has increased, but traceability has also become increasingly problematic due to the heterogeneous nature of Grids. Any one of a number of components may fail, invalidating the job or execution flow in various stages of completion, and making re-submission of a few of the multitude of jobs (while keeping the entire dataset production consistent) a difficult and tedious process. From the definition of the workflow to its execution, there is a strong need for validation, tracking, monitoring and reporting of problems. To ease the process of requesting production workflows, STAR has implemented several components addressing full workflow consistency. A Web-based online submission request module, implemented using Drupal's Content Management System API, enforces ahead of time that all parameters are described in a uniform fashion. Upon submission, all jobs are independently tracked and (sometimes experiment-specific) discrepancies are detected and recorded, providing detailed information on where/how/when the job failed. Aggregate information on successes and failures is also provided in near real-time.
2016-10-01
ARL-TR-7846, US Army Research Laboratory, October 2016. Technical report (dates covered 2015–2016): Application of Hybrid Along-Track Interferometry/Displaced Phase Center Antenna Method for Moving Human Target Detection.
Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing
2013-01-01
Efficient target tracking algorithms have become a current research focus in intelligent robotics. The main problem of the target tracking process for a mobile robot is environmental uncertainty: target states are difficult to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all affect tracking robustness. To further improve tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on the mixture saliency of image features, including color, brightness, and motion features. During execution, these visual saliency features are combined and expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination.
Multi-object detection and tracking technology based on hexagonal opto-electronic detector
NASA Astrophysics Data System (ADS)
Song, Yong; Hao, Qun; Li, Xiang
2008-02-01
A novel multi-object detection and tracking technology based on a hexagonal opto-electronic detector is proposed, in which (1) a new hexagonal detector, composed of 6 linear CCDs, has been developed to achieve a 360-degree field of view, and (2) to achieve high-speed detection and tracking of multiple objects, the object recognition criteria of Object Signal Width Criterion (OSWC) and Horizontal Scale Ratio Criterion (HSRC) are proposed. In this paper, simulation experiments have been carried out to verify the validity of the proposed technology. They show that multi-object detection and tracking can be achieved at high speed by using the proposed hexagonal detector and the OSWC and HSRC criteria, indicating that the technology offers significant advantages in photo-electric detection, computer vision, virtual reality, augmented reality, etc.
Face liveness detection using shearlet-based feature descriptors
NASA Astrophysics Data System (ADS)
Feng, Litong; Po, Lai-Man; Li, Yuming; Yuan, Fang
2016-07-01
Face recognition is a widely used biometric technology due to its convenience but it is vulnerable to spoofing attacks made by nonreal faces such as photographs or videos of valid users. The antispoof problem must be well resolved before widely applying face recognition in our daily life. Face liveness detection is a core technology to make sure that the input face is a live person. However, this is still very challenging using conventional liveness detection approaches of texture analysis and motion detection. The aim of this paper is to propose a feature descriptor and an efficient framework that can be used to effectively deal with the face liveness detection problem. In this framework, new feature descriptors are defined using a multiscale directional transform (shearlet transform). Then, stacked autoencoders and a softmax classifier are concatenated to detect face liveness. We evaluated this approach using the CASIA Face antispoofing database and replay-attack database. The experimental results show that our approach performs better than the state-of-the-art techniques following the provided protocols of these databases, and it is possible to significantly enhance the security of the face recognition biometric system. In addition, the experimental results also demonstrate that this framework can be easily extended to classify different spoofing attacks.
Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks
2006-09-01
time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until...expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical...random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS We model the process of track-before-detect by
Obstacle penetrating dynamic radar imaging system
Romero, Carlos E [Livermore, CA; Zumstein, James E [Livermore, CA; Chang, John T [Danville, CA; Leach, Jr Richard R. [Castro Valley, CA
2006-12-12
An obstacle penetrating dynamic radar imaging system for the detection, tracking, and imaging of an individual, animal, or object comprising a multiplicity of low power ultra wideband radar units that produce a set of return radar signals from the individual, animal, or object, and a processing system for said set of return radar signals for detection, tracking, and imaging of the individual, animal, or object. The system provides a radar video system for detecting and tracking an individual, animal, or object by producing a set of return radar signals from the individual, animal, or object with a multiplicity of low power ultra wideband radar units, and processing said set of return radar signals for detecting and tracking of the individual, animal, or object.
Face liveness detection for face recognition based on cardiac features of skin color image
NASA Astrophysics Data System (ADS)
Suh, Kun Ha; Lee, Eui Chul
2016-07-01
With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios in face recognition systems include the printing attack, replay attack, and 3D mask attack. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on the cardiac signal extracted from the face is presented. The key point of the proposed method is that the cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way to detect printing attacks or 3D mask attacks.
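One simplified way to test for a cardiac component in a face video, offered only as an illustration of the idea and not as the authors' method: examine the spectrum of the mean green-channel intensity for a dominant peak in the heart-rate band. The frame format, band limits and SNR threshold are assumptions.

    import numpy as np

    def has_cardiac_component(face_frames, fps, band=(0.7, 4.0), snr_threshold=2.0):
        """Crude liveness cue: look for a dominant spectral peak in the heart-rate band.

        face_frames: sequence of cropped face images (H, W, 3) in RGB.
        Uses the mean green-channel intensity per frame as a plethysmographic proxy.
        """
        signal = np.array([frame[..., 1].mean() for frame in face_frames], dtype=float)
        signal -= signal.mean()
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        out_band = ~in_band & (freqs > 0)
        if not in_band.any() or not out_band.any():
            return False
        snr = spectrum[in_band].max() / (spectrum[out_band].mean() + 1e-12)
        return snr > snr_threshold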
Seals Research at AlliedSignal
NASA Technical Reports Server (NTRS)
Ullah, M. Rifat
1996-01-01
A consortium has been formed to address seal problems in the Aerospace sector of Allied Signal, Inc. The consortium is represented by makers of Propulsion Engines, Auxiliary Power Units, Gas Turbine Starters, etc. The goal is to improve Face Seal reliability, since Face Seals have become reliability drivers in many of our product lines. Several research programs are being implemented simultaneously this year. They include: Face Seal Modeling and Analysis Methodology; Oil Cooling of Seals; Seal Tracking Dynamics; Coking Formation & Prevention; and Seal Reliability Methods.
Track-based event recognition in a realistic crowded environment
NASA Astrophysics Data System (ADS)
van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.
2014-10-01
Automatic detection of abnormal behavior in CCTV cameras is important to improve the security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), when he follows the victim, or when he interacts with an accomplice before and after the incident (longer time scale). This paper focusses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup, where 10 actors perform these actions. The method is also applied to all tracks that are generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assists operators to find threatening behavior and enrich the selection of videos that are to be observed.
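A minimal sketch of the rule-based, track-level classification step for single-track actions, using speed and spatial-spread statistics; the thresholds are illustrative and the paper's interaction events (pass, meet, merge, split) are not covered.

    import numpy as np

    def classify_track(xy, fps, run_speed=3.0, walk_speed=0.3, loiter_radius=2.0):
        """Label a single track as 'run', 'walk', 'stop' or 'loiter' from simple
        speed and spread statistics. Thresholds (m/s, m) are illustrative only."""
        xy = np.asarray(xy, dtype=float)           # (N, 2) ground-plane positions
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        mean_speed = steps.mean() * fps
        spread = np.linalg.norm(xy - xy.mean(axis=0), axis=1).max()

        if mean_speed > run_speed:
            return "run"
        if mean_speed > walk_speed:
            return "walk"
        return "loiter" if spread > loiter_radius else "stop"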
Comparison of direct and heterodyne detection optical intersatellite communication links
NASA Technical Reports Server (NTRS)
Chen, C. C.; Gardner, C. S.
1987-01-01
The performance of direct and heterodyne detection optical intersatellite communication links are evaluated and compared. It is shown that the performance of optical links is very sensitive to the pointing and tracking errors at the transmitter and receiver. In the presence of random pointing and tracking errors, optimal antenna gains exist that will minimize the required transmitter power. In addition to limiting the antenna gains, random pointing and tracking errors also impose a power penalty in the link budget. This power penalty is between 1.6 to 3 dB for a direct detection QPPM link, and 3 to 5 dB for a heterodyne QFSK system. For the heterodyne systems, the carrier phase noise presents another major factor of performance degradation that must be considered. In contrast, the loss due to synchronization error is small. The link budgets for direct and heterodyne detection systems are evaluated. It is shown that, for systems with large pointing and tracking errors, the link budget is dominated by the spatial tracking error, and the direct detection system shows a superior performance because it is less sensitive to the spatial tracking error. On the other hand, for systems with small pointing and tracking jitters, the antenna gains are in general limited by the launch cost, and suboptimal antenna gains are often used in practice. In which case, the heterodyne system has a slightly higher power margin because of higher receiver sensitivity.
Automated multiple target detection and tracking in UAV videos
NASA Astrophysics Data System (ADS)
Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie
2010-04-01
In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
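The overlap-rate data association mentioned above can be sketched as greedy intersection-over-union matching between predicted track boxes and detections; the split/merge handling the paper describes is omitted, and the IoU threshold is an assumption.

    def iou(a, b):
        """Overlap rate (intersection over union) of two boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    def associate(tracks, detections, min_iou=0.3):
        """Greedy overlap-rate association of predicted track boxes to detections."""
        pairs = sorted(((iou(t, d), ti, di)
                        for ti, t in enumerate(tracks)
                        for di, d in enumerate(detections)), reverse=True)
        matches, used_t, used_d = [], set(), set()
        for score, ti, di in pairs:
            if score < min_iou:
                break
            if ti not in used_t and di not in used_d:
                matches.append((ti, di))
                used_t.add(ti)
                used_d.add(di)
        return matches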
Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space
NASA Astrophysics Data System (ADS)
Jun, Chen; Wenjun, Hou; Qing, Sheng
After studying image segmentation, the CamShift target tracking algorithm and the stereo vision model of space, an improved algorithm based on frame differencing and a new space point positioning model were proposed, and a binocular visual motion tracking system was constructed to verify the improved algorithm and the new model. The problems of spatial localization and pose estimation for hand detection and tracking have thus been solved.
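Since the abstract does not describe the new space point positioning model, the sketch below shows only the standard rectified-stereo triangulation it presumably builds on; all numbers are hypothetical.

    def triangulate(xl, xr, y, focal_px, baseline_m, cx, cy):
        """Recover a 3-D point from a rectified stereo pair (pinhole model).

        xl, xr: column of the same hand feature in the left/right image (pixels);
        y: its row; focal_px: focal length in pixels; baseline_m: camera separation.
        """
        disparity = xl - xr
        if disparity <= 0:
            raise ValueError("point must have positive disparity")
        Z = focal_px * baseline_m / disparity
        X = (xl - cx) * Z / focal_px
        Y = (y - cy) * Z / focal_px
        return X, Y, Z

    print(triangulate(660.0, 620.0, 400.0, focal_px=800.0, baseline_m=0.12, cx=640.0, cy=360.0))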
Research on the Filtering Algorithm in Speed and Position Detection of Maglev Trains
Dai, Chunhui; Long, Zhiqiang; Xie, Yunde; Xue, Song
2011-01-01
This paper introduces in brief the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and track cable is described. A speed and position detection system is recommended. It is installed on board and is used as the feedback end. Restricted by the maglev train’s structure, the permanent magnet electrodynamic suspension (EDS) train uses the non-contact method to detect its position. Because of the shake and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete track-differentiator filtering algorithm is proposed. The filtering characters of the track-differentiator (TD) and track-differentiator group are analyzed. The four series of TD are used in the signal processing unit. The result shows that the track-differentiator could have a good effect and make the traction system run normally. PMID:22164012
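A textbook second-order linear tracking differentiator, shown to illustrate the kind of filter discussed above; the paper cascades four TD stages and its parameters are not given in the abstract, so the gains below are arbitrary.

    import numpy as np

    def tracking_differentiator(v, h=0.001, r=50.0):
        """Linear discrete tracking-differentiator: smooths a noisy position
        signal v (sampled every h seconds) and estimates its derivative."""
        x1, x2 = v[0], 0.0
        filtered, rate = [], []
        for vk in v:
            x1, x2 = (x1 + h * x2,
                      x2 + h * (-r * r * (x1 - vk) - 2.0 * r * x2))
            filtered.append(x1)
            rate.append(x2)
        return np.array(filtered), np.array(rate)

    # Example: noisy ramp at 2 m/s; the estimated rate settles near 2.
    t = np.arange(0.0, 1.0, 0.001)
    noisy = 2.0 * t + 0.01 * np.random.randn(t.size)
    pos, vel = tracking_differentiator(noisy)
    print(vel[-1])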
Live face detection based on the analysis of Fourier spectra
NASA Astrophysics Data System (ADS)
Li, Jiangwei; Wang, Yunhong; Tan, Tieniu; Jain, Anil K.
2004-08-01
Biometrics is a rapidly developing technology for identifying a person based on his or her physiological or behavioral characteristics. To ensure the correctness of authentication, the biometric system must be able to detect and reject the use of a copy of a biometric instead of the live biometric. This function is usually termed "liveness detection". This paper describes a new method for live face detection. Using structure and movement information of the live face, an effective live face detection algorithm is presented. Compared to existing approaches, which concentrate on the measurement of 3D depth information, this method is based on the analysis of Fourier spectra of a single face image or face image sequences. Experimental results show that the proposed method has an encouraging performance.
Multiview face detection based on position estimation over multicamera surveillance system
NASA Astrophysics Data System (ADS)
Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh
2012-02-01
In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back to 3-D space for correspondence. However, the inevitable false face detection and rejection usually degrades the system performance. Instead, our system searches for the heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-process to estimate the locations of candidate targets is illustrated to speed-up the searching process over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction on real video sequences even under serious occlusion.
High precision automated face localization in thermal images: oral cancer dataset as test case
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.
2017-02-01
Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize the face from such a subject-specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding, followed by ingenious adaptive methods to leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting temperature differences between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted in any DITI-guided facial healthcare or biometric application.
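The projection-based localization step can be sketched generically as follows; the fixed fraction and simple thresholding stand in for the paper's adaptive procedure and minimum error thresholding.

    import numpy as np

    def face_bounds(thermal, threshold, min_fraction=0.05):
        """Bound the face region of a segmented thermal image using its
        horizontal and vertical projections. A generic sketch of the projection
        idea, not the paper's adaptive procedure; 'threshold' would come from
        minimum error thresholding in their pipeline."""
        mask = np.asarray(thermal, dtype=float) > threshold
        rows = mask.sum(axis=1)            # vertical projection
        cols = mask.sum(axis=0)            # horizontal projection
        r = np.where(rows > min_fraction * rows.max())[0]
        c = np.where(cols > min_fraction * cols.max())[0]
        return r[0], r[-1], c[0], c[-1]    # top, bottom, left, right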
NASA Astrophysics Data System (ADS)
Falzone, Nadia; Myhra, Sverre; Chakalova, Radka; Hill, Mark A.; Thomson, James; Vallis, Katherine A.
2013-11-01
The interactions between energetic ions and biological and/or organic target materials have recently attracted theoretical and experimental attention, due to their implications for detector and device technologies, and for therapeutic applications. Most of the attention has focused on detection of the primary ionization tracks, and their effects, while recoil target atom tracks remain largely unexplored. Detection of tracks by a negative tone photoresist (SU-8), followed by standard development, in combination with analysis by atomic force microscopy, shows that both primary and recoil tracks are revealed as conical spikes, and can be characterized at high spatial resolution. The methodology has the potential to provide detailed information about single impact events, which may lead to more effective and informative detector technologies and advanced therapeutic procedures. In comparison with current characterization methods, the advantageous features include: greater spatial resolution by an order of magnitude (20 nm); detection of single primary and associated recoil tracks; increased range of fluence (to 2.5 × 10^9 cm^-2); sensitivity to impacts at grazing-angle incidence; and better definition of the lateral interaction volume in target materials.
Bird Radar Validation in the Field by Time-Referencing Line-Transect Surveys
Dokter, Adriaan M.; Baptist, Martin J.; Ens, Bruno J.; Krijgsveld, Karen L.; van Loon, E. Emiel
2013-01-01
Track-while-scan bird radars are widely used in ornithological studies, but often the precise detection capabilities of these systems are unknown. Quantification of radar performance is essential to avoid observational biases, which requires practical methods for validating a radar’s detection capability in specific field settings. In this study a method to quantify the detection capability of a bird radar is presented, as well as a demonstration of this method in a case study. By time-referencing line-transect surveys, visually identified birds were automatically linked to individual tracks using their transect crossing time. Detection probabilities were determined as the fraction of the total set of visual observations that could be linked to radar tracks. To avoid ambiguities in assigning radar tracks to visual observations, the observer’s accuracy in determining a bird’s transect crossing time was taken into account. The accuracy was determined by examining the effect of a time lag applied to the visual observations on the number of matches found with radar tracks. Effects of flight altitude, distance, surface substrate and species size on the detection probability by the radar were quantified in a marine intertidal study area. Detection probability varied strongly with all these factors, as well as species-specific flight behaviour. The effective detection range for single birds flying at low altitude for an X-band marine radar based system was estimated at ∼1.5 km. Within this range the fraction of individual flying birds that were detected by the radar was 0.50±0.06 with a detection bias towards higher flight altitudes, larger birds and high tide situations. Besides radar validation, which we consider essential when quantification of bird numbers is important, our method of linking radar tracks to ground-truthed field observations can facilitate species-specific studies using surveillance radars. The methodology may prove equally useful for optimising tracking algorithms. PMID:24066103
Bird radar validation in the field by time-referencing line-transect surveys.
Dokter, Adriaan M; Baptist, Martin J; Ens, Bruno J; Krijgsveld, Karen L; van Loon, E Emiel
2013-01-01
Track-while-scan bird radars are widely used in ornithological studies, but often the precise detection capabilities of these systems are unknown. Quantification of radar performance is essential to avoid observational biases, which requires practical methods for validating a radar's detection capability in specific field settings. In this study a method to quantify the detection capability of a bird radar is presented, as well as a demonstration of this method in a case study. By time-referencing line-transect surveys, visually identified birds were automatically linked to individual tracks using their transect crossing time. Detection probabilities were determined as the fraction of the total set of visual observations that could be linked to radar tracks. To avoid ambiguities in assigning radar tracks to visual observations, the observer's accuracy in determining a bird's transect crossing time was taken into account. The accuracy was determined by examining the effect of a time lag applied to the visual observations on the number of matches found with radar tracks. Effects of flight altitude, distance, surface substrate and species size on the detection probability by the radar were quantified in a marine intertidal study area. Detection probability varied strongly with all these factors, as well as species-specific flight behaviour. The effective detection range for single birds flying at low altitude for an X-band marine radar based system was estimated at ~1.5 km. Within this range the fraction of individual flying birds that were detected by the radar was 0.50 ± 0.06 with a detection bias towards higher flight altitudes, larger birds and high tide situations. Besides radar validation, which we consider essential when quantification of bird numbers is important, our method of linking radar tracks to ground-truthed field observations can facilitate species-specific studies using surveillance radars. The methodology may prove equally useful for optimising tracking algorithms.
Detection limit of a VCO based detection chain dedicated to particles recognition and tracking
NASA Astrophysics Data System (ADS)
Coulié, K.; Rahajandraibe, W.; Aziza, H.; Micolau, G.; Vauché, R.
2018-01-01
A particle detection chain based on a CMOS-SOI VCO circuit is presented. The solution is used for the recognition and tracking of a given particle at circuit level. TCAD simulation has been performed on a detector based on a 3×3 matrix of diodes for particle recognition and tracking. The current response of the detector has been used for a case study in order to determine the ability of the chain to recognize an alpha particle crossing a 3×3 detection cell. The detection limit of the proposed solution is investigated and discussed in this paper.
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.
Detection of Ballast Damage by In-Situ Vibration Measurement of Sleepers
NASA Astrophysics Data System (ADS)
Lam, H. F.; Wong, M. T.; Keefe, R. M.
2010-05-01
Ballasted track is one of the most important elements of railway transportation systems worldwide. Owing to its importance in railway safety, many monitoring and evaluation methods have been developed. Current railway track monitoring systems are comprehensive, fast and efficient in testing railway track level and alignment, rail gauge, rail corrugation, etc. However, the monitoring of ballast condition still relies very much on visual inspection and core tests. Although extensive research has been carried out in the development of non-destructive methods for ballast condition evaluation, a commonly accepted and cost-effective method is still in demand. In Hong Kong practice, if abnormal train vibration is reported by the train operator or passengers, permanent way inspectors will locate the problem area by track geometry measurement. It must be pointed out that visual inspection can only identify ballast damage on the track surface, while track geometry deficiencies and rail twists can be detected using a track gauge. Ballast damage under the sleeper loading area and the ballast shoulder, which are the main factors affecting track stability and ride quality, is extremely difficult, if not impossible, to detect by visual inspection. The core test is destructive, expensive, time consuming and may be disruptive to traffic. A fast real-time ballast damage detection method that can be implemented by permanent way inspectors with simple equipment can certainly provide valuable information for engineers in assessing the safety and riding quality of ballasted track systems. The main objective of this paper is to study the feasibility of using the vibration characteristics of sleepers to quantify the ballast condition under the sleepers, and to explore the possibility of developing a handy method for the detection of ballast damage based on the measured vibration of sleepers.
Toward automated face detection in thermal and polarimetric thermal imagery
NASA Astrophysics Data System (ADS)
Gordon, Christopher; Acosta, Mark; Short, Nathan; Hu, Shuowen; Chan, Alex L.
2016-05-01
Visible spectrum face detection algorithms perform reliably under controlled lighting conditions. However, variations in illumination and the application of cosmetics can distort the features used by common face detectors, thereby degrading their detection performance. Thermal and polarimetric thermal facial imaging are relatively invariant to illumination and robust to the application of makeup, due to their measurement of emitted radiation instead of reflected light signals. The objective of this work is to evaluate a government off-the-shelf wavelet based naïve-Bayes face detection algorithm and a commercial off-the-shelf Viola-Jones cascade face detection algorithm on face imagery acquired in different spectral bands. New classifiers were trained using the Viola-Jones cascade object detection framework with preprocessed facial imagery. Preprocessing using Difference of Gaussians (DoG) filtering reduces the modality gap between facial signatures across the different spectral bands, thus enabling more correlated histogram of oriented gradients (HOG) features to be extracted from the preprocessed thermal and visible face images. Since the availability of training data is much more limited in the thermal spectrum than in the visible spectrum, it is not feasible to train a robust multi-modal face detector using thermal imagery alone. A large training dataset was constituted with DoG filtered visible and thermal imagery, which was subsequently used to generate a custom trained Viola-Jones detector. A 40% increase in face detection rate was achieved on a testing dataset, as compared to the performance of a pre-trained/baseline face detector. Insights gained in this research are valuable in the development of more robust multi-modal face detectors.
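A minimal sketch of the Difference-of-Gaussians preprocessing step described above, using OpenCV; the sigma values are illustrative choices, not those used in the study.

import cv2
import numpy as np

def dog_preprocess(img_gray, sigma1=1.0, sigma2=2.0):
    # Difference of Gaussians: subtract a coarser blur from a finer one to
    # suppress low-frequency modality-specific appearance differences.
    img = img_gray.astype(np.float32)
    g1 = cv2.GaussianBlur(img, (0, 0), sigma1)
    g2 = cv2.GaussianBlur(img, (0, 0), sigma2)
    dog = g1 - g2
    # Normalize back to 8-bit so a standard cascade detector can consume it.
    return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)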
Real-time detecting and tracking ball with OpenCV and Kinect
NASA Astrophysics Data System (ADS)
Osiecki, Tomasz; Jankowski, Stanislaw
2016-09-01
This paper presents a way to detect and track a ball using OpenCV and the Kinect. Object and people recognition and tracking are increasingly popular topics. The described solution makes it possible to detect a ball based on a range set by the user and to capture information about the ball's position in three dimensions. This information can be stored on the computer and used, for example, to display the trajectory of the ball.
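The abstract does not specify whether the user-set range is a colour or a depth range; the sketch below assumes an HSV colour range and uses standard OpenCV calls to return the centre and radius of the largest matching blob (OpenCV 4 return signature assumed for findContours).

import cv2
import numpy as np

def detect_ball(frame_bgr, hsv_lower, hsv_upper):
    # Detect a ball by a user-supplied HSV range; hsv_lower/hsv_upper are
    # 3-element sequences such as (29, 86, 6) and (64, 255, 255).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lower, dtype=np.uint8),
                       np.array(hsv_upper, dtype=np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y), int(radius))

Combined with the Kinect depth map at the detected pixel, this yields the three-dimensional ball position the abstract refers to.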
An object detection and tracking system for unmanned surface vehicles
NASA Astrophysics Data System (ADS)
Yang, Jian; Xiao, Yang; Fang, Zhiwen; Zhang, Naiwen; Wang, Li; Li, Tao
2017-10-01
Object detection and tracking are critical parts of unmanned surface vehicles (USV) for achieving automatic obstacle avoidance. Off-the-shelf object detection methods have achieved impressive accuracy on public datasets, though they still face bottlenecks in practice, such as high time consumption and low detection quality. In this paper, we propose a novel system for USV, which is able to locate objects more accurately while being fast and stable simultaneously. Firstly, we employ Faster R-CNN to acquire several initial raw bounding boxes. Secondly, the image is segmented into a few superpixels. For each initial box, the superpixels inside are grouped into a whole according to a combination strategy, and a new box is thereafter generated as the circumscribed bounding box of the final superpixel. Thirdly, we utilize KCF to track these objects; after several frames, Faster R-CNN is again used to re-detect objects inside the tracked boxes to prevent tracking failure as well as to remove empty boxes. Finally, we utilize Faster R-CNN to detect objects in the next image, and refine object boxes by repeating the second module of our system. The experimental results demonstrate that our system is fast, robust and accurate, and can be applied to USV in practice.
Educational Tracking and Juvenile Deviance in Taiwan: Direct Effect, Indirect Effect, or Both.
Lin, Wen-Hsu; Yi, Chin-Chun
2016-02-01
Educational tracking in Chinese society is quite different from that in Western society, in that the allocation to either the vocational or academic track is based on a national entrance examination, which happens at ninth grade (age 14-15). Hence, students in many Asian countries (e.g., China and Taiwan) have to face academic tracking in early adolescence. Because of cultural emphasis on education in Taiwan, the impact of tracking on deviance is profound and can be seen as a crucial life-event. With this concept in mind, we examine how educational tracking influences adolescent deviance during high school. In addition, we also examine how educational tracking may indirectly influence deviance through other life domains, including depression, delinquent peer association, and school attachment. By using longitudinal data (the Taiwan Youth Project), we find that educational tracking increases deviance not only directly but also indirectly through delinquent peers and low school attachment. Some implications and limitations are also discussed. © The Author(s) 2014.
The Struggle to Pass Algebra: Online vs. Face-to-Face Credit Recovery for At-Risk Urban Students
ERIC Educational Resources Information Center
Heppen, Jessica B.; Sorensen, Nicholas; Allensworth, Elaine; Walters, Kirk; Rickles, Jordan; Taylor, Suzanne Stachel; Michelman, Valerie
2017-01-01
Students who fail algebra are significantly less likely to graduate on time, and algebra failure rates are consistently high in urban districts. Identifying effective credit recovery strategies is critical for getting students back on track. Online courses are now widely used for credit recovery, yet there is no rigorous evidence about the…
Laser heterodyne surface profiler
Sommargren, Gary E.
1982-01-01
A method and apparatus is disclosed for testing the deviation of the face of an object from a flat smooth surface using a beam of coherent light of two plane-polarized components, one of a frequency constantly greater than the other by a fixed amount to produce a difference frequency with a constant phase to be used as a reference. The beam also is split into its two components with the separate components directed onto spaced-apart points on the face of the object to be tested for smoothness. The object is rotated on an axis coincident with one component which is directed to the face of the object at the center which constitutes a virtual fixed point. This component also is used as a reference. The other component follows a circular track on the face of the object as the object is rotated. The two components are recombined after reflection to produce a reflected frequency difference of a phase proportional to the difference in path length which is compared with the reference phase to produce a signal proportional to the deviation of the height of the surface along the circular track with respect to the fixed point at the center.
Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.
Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D
2017-10-01
This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with the state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan
2016-09-01
The tracking of dim moving targets in infrared image sequences in the presence of high clutter and noise has recently been under intensive investigation. The track-before-detect (TBD) algorithm, which processes the image sequence over a number of frames before decisions on the target track and existence are made, is known to be especially attractive in very low SNR environments (⩽ 3 dB). In this paper, we briefly present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect targets along the viewing direction, we use 3-D TBD with multiple sensors and also strictly analyze the detection performance (false alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. Therefore, we also establish the relationship between the pixel coordinates of each image frame and the reference coordinates.
ERIC Educational Resources Information Center
Doss, Khalilah
2016-01-01
At most institutions, track and field can function as the redheaded stepchild of athletic programs because these sports do not draw the revenue nor get the crowds often associated with college football or basketball. Nevertheless, there are multiple correlations common among all college student athletes. Primarily, all student athletes face the…
Automated facial attendance logger for students
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Kshitish, S.; Kishore, M. R.
2017-11-01
Over the past two decades, face recognition has become an essential tool in various spheres of activity. The complete face recognition process is composed of three stages: face detection, feature extraction and recognition. In this paper, we make an effort to put forth a new application of face recognition and detection in education. The proposed system scans the classroom, detects the faces of the students in class, matches the scanned faces with the templates available in the database and updates the attendance of the respective students.
Person detection, tracking and following using stereo camera
NASA Astrophysics Data System (ADS)
Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping
2018-04-01
Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system which is composed of visual human detection, video tracking and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and thus can predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, by using a stereo 3D sparse reconstruction algorithm, not only is the position of the person in the scene determined, but the problem of scale ambiguity in the video tracker is also elegantly solved. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
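A minimal sketch of handing a detector bounding box to a KCF tracker with OpenCV; depending on the OpenCV build, the KCF factory may live in cv2 or cv2.legacy, and the YOLO detection itself is assumed to have already produced det_box = (x, y, w, h).

import cv2

def init_kcf_from_detection(frame, det_box):
    # Initialize a KCF tracker from a detector bounding box (x, y, w, h).
    create = getattr(cv2, "TrackerKCF_create", None)
    if create is None:
        create = cv2.legacy.TrackerKCF_create  # newer OpenCV builds
    tracker = create()
    tracker.init(frame, tuple(det_box))
    return tracker

# Per subsequent frame: ok, box = tracker.update(next_frame); if not ok,
# the detector would be re-run to re-initialize the track.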
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker comprised of a DLSSVM model and a scale correlation filter obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Mateus, Ana Rita A; Grilo, Clara; Santos-Reis, Margarida
2011-10-01
Environmental assessment studies often evaluate the effectiveness of drainage culverts as habitat linkages for species; however, the efficiency of the sampling designs and the survey methods is not known. Our main goal was to estimate the most cost-effective monitoring method for sampling carnivore use of culverts using track-pads and video-surveillance. We estimated the most efficient (lower cost and higher detection success) interval between visits (days) when using track-pads, and also determined the advantages of using each method. In 2006, we selected two highways in southern Portugal and sampled 15 culverts over two 10-day sampling periods (spring and summer). Using the track-pad method, 90% of the animal tracks were detected using a 2-day interval between visits. We recorded a higher number of crossings for most species using video-surveillance (n = 129) when compared with the track-pad technique (n = 102); however, the detection ability of the video-surveillance method varied with the type of structure and the species. More crossings were detected in circular culverts (1 m and 1.5 m diameter) than in box culverts (2 m to 4 m width), likely because the video cameras had a reduced coverage area. On the other hand, carnivore species with small feet, such as the common genet Genetta genetta, were detected less often using the track-pad surveying method. The cost-benefit analysis shows that the track-pad technique is the most appropriate, but video-surveillance allows year-round surveys as well as analysis of the behavioural responses of species using crossing structures.
Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-02-01
Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of different steps, starting from single-cell tracking based on a nearest-neighbor-approach, detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets indicating a high accuracy for connecting the detected cells between different time points. Copyright © 2014 Elsevier B.V. All rights reserved.
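A simplified sketch of the first tracking step mentioned above, nearest-neighbour association of cell centroids between consecutive frames; the distance threshold is an illustrative parameter, and the later steps (cluster splitting, graph-based tracklet linking) are not shown.

import numpy as np
from scipy.spatial.distance import cdist

def associate_nearest_neighbor(prev_centroids, curr_centroids, max_dist):
    # Greedily link each detection in the current frame to the closest
    # unassigned detection in the previous frame within max_dist (pixels).
    if len(prev_centroids) == 0 or len(curr_centroids) == 0:
        return []
    d = cdist(np.asarray(prev_centroids), np.asarray(curr_centroids))
    links, used_prev, used_curr = [], set(), set()
    for idx in np.argsort(d, axis=None):          # cheapest pairs first
        i, j = np.unravel_index(idx, d.shape)
        if i in used_prev or j in used_curr or d[i, j] > max_dist:
            continue
        links.append((i, j))
        used_prev.add(i)
        used_curr.add(j)
    return links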
The Effect of Early Visual Deprivation on the Development of Face Detection
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Segalowitz, Sidney J.; Lewis, Terri L.; Dywan, Jane; Le Grand, Richard; Maurer, Daphne
2013-01-01
The expertise of adults in face perception is facilitated by their ability to rapidly detect that a stimulus is a face. In two experiments, we examined the role of early visual input in the development of face detection by testing patients who had been treated as infants for bilateral congenital cataract. Experiment 1 indicated that, at age 9 to…
Capturing and Displaying Uncertainty in the Common Tactical/Environmental Picture
2003-09-30
multistatic active detection, and incorporated this characterization into a Bayesian track-before-detect system called the Likelihood Ratio Tracker (LRT...prediction uncertainty in a track-before-detect system for multistatic active sonar. The approach has worked well on limited simulation data. IMPACT
Infrared target tracking via weighted correlation filter
NASA Astrophysics Data System (ADS)
He, Yu-Jie; Li, Min; Zhang, JinLi; Yao, Jun-Ping
2015-11-01
Design of an effective target tracker is an important and challenging task for many applications due to multiple factors which can cause disturbance in infrared video sequences. In this paper, an infrared target tracking method under tracking by detection framework based on a weighted correlation filter is presented. This method consists of two parts: detection and filtering. For the detection stage, we propose a sequential detection method for the infrared target based on low-rank representation. For the filtering stage, a new multi-feature weighted function which fuses different target features is proposed, which takes the importance of the different regions into consideration. The weighted function is then incorporated into a correlation filter to compute a confidence map more accurately, in order to indicate the best target location based on the detection results obtained from the first stage. Extensive experimental results on different video sequences demonstrate that the proposed method performs favorably for detection and tracking compared with baseline methods in terms of efficiency and accuracy.
Approaches, field considerations and problems associated with radio tracking carnivores
Sargeant, A.B.; Amlaner, C. J.; MacDonald, D.W.
1979-01-01
The adaptation of radio tracking to ecological studies was a major technological advance affecting field investigations of animal movements and behavior. Carnivores have been the recipients of much attention with this new technology and study approaches have varied from simple to complex. Equipment performance has much improved over the years, but users still face many difficulties. The beginning of all radio tracking studies should be a precise definition of objectives. Study objectives dictate type of gear required and field procedures. Field conditions affect equipment performance and investigator ability to gather data. Radio tracking carnivores is demanding and generally requires greater time than anticipated. Problems should be expected and planned for in study design. Radio tracking can be an asset in carnivore studies but caution is needed in its application.
Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.
Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko
2012-10-29
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data are broadcast to multiple robots through intercommunication and are combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures.
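A minimal sketch of covariance intersection fusion of two track estimates, the standard CI form used to combine estimates with unknown cross-correlation; here omega is fixed at 0.5 for simplicity, whereas in practice it would be optimised (e.g., to minimise the trace of the fused covariance).

import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    # Fuse two estimates (x1, P1) and (x2, P2) without knowing their
    # cross-correlation: P^-1 = w*P1^-1 + (1-w)*P2^-1.
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P_inv = omega * P1_inv + (1.0 - omega) * P2_inv
    P = np.linalg.inv(P_inv)
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P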
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and HOG (Histogram of Oriented Gradient) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method shows better robustness to challenging tracking factors such as deformation, scale variation, rotation variation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
Zhu, De-Sheng; Fu, Jue; Zhang, Yi; Xie, Chong; Wang, Xiao-Qing; Zhang, Yue; Yang, Jie; Li, Shi-Xu; Liu, Xiao-Bei; Wan, Zhi-Wen; Dong, Qiang; Guan, Yang-Tai
2015-01-01
Background: Transverse sinus stenosis (TSS) is common among patients with cerebral venous sinus thrombosis. No previous studies have reported on the double-track sign detected on axial Gd-enhanced T1WI in TSS. This study aimed to determine the sensitivity and specificity of the double-track sign in the detection of TSS. Methods: We retrospectively reviewed medical records of 383 patients with transverse sinus thrombosis (TST) and 30 patients with normal transverse sinus from 5 participating hospitals in China from January 2008 to June 2014. 167 feasible transverse sinuses included in this study were categorized into TSS (n = 76), transverse sinus occlusion (TSO) (n = 52) and transverse sinus normal (TSN) groups (n = 39) according to imaging diagnosis on digital subtraction angiography (DSA) or magnetic resonance venography (MRV). The double-track sign on axial Gd-enhanced T1WI was compared among the three groups. Sensitivity and specificity of the double-track sign in detection of TSS were calculated, with final imaging diagnosis of TSS on DSA or MRV as the reference standard. Results: Of 383 patients with TST recruited over a 6.5-year period, 128 patients were enrolled in the study, 255 patients were excluded because of insufficient clinical data, imaging findings and delay time, and 30 matched patients with normal transverse sinus were enrolled in the control group. Therefore, double-track sign assessment was conducted in 167 available transverse sinuses of 158 patients. Of the 76 sinuses in the TSS group, 51 had the double-track sign. Of the other 91 sinuses in the TSO and TSN groups, 3 had a false-positive double-track sign. Thus, the double-track sign on axial Gd-enhanced T1WI was 67.1% (95% CI 55.3–77.2) sensitive and 96.7% (95% CI 89.9–99.1) specific for detection of TSS. Conclusions: The double-track sign on axial Gd-enhanced T1WI is highly specific and moderately sensitive for detection of TSS. Nevertheless, it could be a direct sign and might provide an early clue for TSS. PMID:26291452
Zhu, De-Sheng; Fu, Jue; Zhang, Yi; Xie, Chong; Wang, Xiao-Qing; Zhang, Yue; Yang, Jie; Li, Shi-Xu; Liu, Xiao-Bei; Wan, Zhi-Wen; Dong, Qiang; Guan, Yang-Tai
2015-01-01
Transverse sinus stenosis (TSS) is common among patients with cerebral venous sinus thrombosis. No previous studies have reported on the double-track sign detected on axial Gd-enhanced T1WI in TSS. This study aimed to determine the sensitivity and specificity of the double-track sign in the detection of TSS. We retrospectively reviewed medical records of 383 patients with transverse sinus thrombosis (TST) and 30 patients with normal transverse sinus from 5 participating hospitals in China from January 2008 to June 2014. 167 feasible transverse sinuses included in this study were categorized into TSS (n = 76), transverse sinus occlusion (TSO) (n = 52) and transverse sinus normal (TSN) groups (n = 39) according to imaging diagnosis on digital subtraction angiography (DSA) or magnetic resonance venography (MRV). The double-track sign on axial Gd-enhanced T1WI was compared among the three groups. Sensitivity and specificity of the double-track sign in detection of TSS were calculated, with final imaging diagnosis of TSS on DSA or MRV as the reference standard. Of 383 patients with TST recruited over a 6.5-year period, 128 patients were enrolled in the study, 255 patients were excluded because of insufficient clinical data, imaging findings and delay time, and 30 matched patients with normal transverse sinus were enrolled in the control group. Therefore, double-track sign assessment was conducted in 167 available transverse sinuses of 158 patients. Of the 76 sinuses in the TSS group, 51 had the double-track sign. Of the other 91 sinuses in the TSO and TSN groups, 3 had a false-positive double-track sign. Thus, the double-track sign on axial Gd-enhanced T1WI was 67.1% (95% CI 55.3-77.2) sensitive and 96.7% (95% CI 89.9-99.1) specific for detection of TSS. The double-track sign on axial Gd-enhanced T1WI is highly specific and moderately sensitive for detection of TSS. Nevertheless, it could be a direct sign and might provide an early clue for TSS.
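For reference, the reported sensitivity and specificity follow directly from the stated counts (51 of 76 stenosed sinuses showing the sign, 3 false positives among the 91 non-stenosed sinuses); a trivial sketch:

def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Example with the counts above: sensitivity_specificity(51, 25, 88, 3)
# returns roughly (0.671, 0.967), matching the 67.1% and 96.7% reported.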
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hua; Chen, Hsin
Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa
2016-08-01
For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
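A minimal sketch of the Dice similarity coefficient used above to compare automatic and manual contours, computed here on binary segmentation masks.

import numpy as np

def dice_coefficient(mask_a, mask_b):
    # Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks A and B.
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0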
Robust online tracking via adaptive samples selection with saliency detection
NASA Astrophysics Data System (ADS)
Yan, Jia; Chen, Xi; Zhu, QiuPing
2013-12-01
Online tracking has shown to be successful in tracking previously unknown objects. However, there are two important factors that lead to the drift problem in online tracking: one is how to select correctly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors that have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers with misaligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as the negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before the drift occurs. The tracking task is formulated as a binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
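A compact sketch of spectral-residual saliency in the spirit of the algorithm the abstract refers to; the smoothing filter sizes are typical choices rather than values from the paper.

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img_gray):
    # Saliency is reconstructed from the residual of the log-amplitude
    # spectrum combined with the original phase spectrum.
    img = img_gray.astype(np.float64)
    f = np.fft.fft2(img)
    amplitude = np.abs(f)
    phase = np.angle(f)
    log_amp = np.log(amplitude + 1e-8)
    residual = log_amp - uniform_filter(log_amp, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=2.5)
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)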
NASA Astrophysics Data System (ADS)
Donde, Oscar Omondi; Tian, Cuicui; Xiao, Bangding
2017-11-01
The presence of faecal-derived pathogens in water is responsible for several infectious diseases and deaths worldwide. As a solution, sources of faecal pollution in waters must be accurately assessed, properly determined and strictly controlled. However, the exercise has remained challenging due to the overlapping characteristics of different members of the faecal coliform bacteria and the inadequacy of information pertaining to the contribution of seasonality and weather conditions to tracking the possible sources of pollution. There are continued efforts to improve Faecal Contamination Source Tracking (FCST) techniques such as Microbial Source Tracking (MST). This study aimed to make a contribution to MST by evaluating the efficacy of combining site-specific quantification of faecal contamination indicator bacteria and detection of DNA markers, while accounting for the effects of seasonality and weather conditions, in tracking the major sources of faecal contamination in a freshwater system (Donghu Lake, China). The results showed that the use of the cyd gene in addition to the lacZ and uidA genes differentiates E. coli from other closely related faecal bacteria. The use of selective media increases the pollution source tracking accuracy. BSA addition boosts PCR detection and increases FCST efficiency. Seasonality and weather variability also influence the detection limit for DNA markers.
Vabalas, Andrius; Freeth, Megan
2016-01-01
The current study investigated whether the amount of autistic traits shown by an individual is associated with viewing behaviour during a face-to-face interaction. The eye movements of 36 neurotypical university students were recorded using a mobile eye-tracking device. High amounts of autistic traits were neither associated with reduced looking to the social partner overall, nor with reduced looking to the face. However, individuals who were high in autistic traits exhibited reduced visual exploration during the face-to-face interaction overall, as demonstrated by shorter and less frequent saccades. Visual exploration was not related to social anxiety. This study suggests that there are systematic individual differences in visual exploration during social interactions and these are related to amount of autistic traits.
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
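A minimal sketch of the simple frame-differencing stage mentioned above, using OpenCV; the threshold and minimum blob area are illustrative values, and the later expert-system stages are not represented.

import cv2

def detect_motion(prev_gray, curr_gray, thresh=25, min_area=200):
    # Threshold the absolute difference of consecutive grayscale frames and
    # return bounding boxes of sufficiently large changed regions.
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    binary = cv2.dilate(binary, None, iterations=2)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]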
Radar Detection of Marine Mammals
2010-09-30
associative tracker using the Munkres algorithm was used. This was then expanded to include a track-before-detect algorithm, the Bayesian Field...small, slow moving objects (i.e. whales). In order to address the third concern (M2 mode), we have tested using a track-before-detect tracker termed
Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf
2014-02-01
Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct facilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories.
Accurate measurement of imaging photoplethysmographic signals based camera using weighted average
NASA Astrophysics Data System (ADS)
Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji
2018-01-01
Imaging Photoplethysmography (IPPG) is an emerging technique for the extraction of vital signs of human beings using video recordings. IPPG technology, with advantages such as non-contact measurement, low cost and easy operation, has become a research hot spot in the field of biomedicine. However, the noise disturbance caused by non-microarterial areas cannot be removed because of the uneven distribution of micro-arteries and the different signal strengths of each region, which results in a low signal-to-noise ratio of IPPG signals and low accuracy of heart rate. In this paper, we propose a method of improving the signal-to-noise ratio of camera-based IPPG signals of each sub-region of the face using a weighted average. Firstly, we obtain the regions of interest (ROI) of a subject's face from the camera. Secondly, each region of interest is tracked and feature-matched in each frame of the video. Each tracked face region is divided into 60x60-pixel blocks. Thirdly, the weights of the PPG signal of each sub-region are calculated, based on the signal-to-noise ratio of each sub-region. Finally, we combine the IPPG signals from all the tracked ROI using a weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
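A minimal sketch of the final weighted-average step, assuming the per-block IPPG traces and their estimated signal-to-noise ratios are already available (how the SNR of each block is estimated is not reproduced here).

import numpy as np

def snr_weighted_ppg(region_signals, snrs):
    # Combine per-block IPPG signals into one trace by weighting each block
    # with its (non-negative) estimated signal-to-noise ratio.
    signals = np.asarray(region_signals, dtype=np.float64)  # (n_blocks, n_samples)
    weights = np.clip(np.asarray(snrs, dtype=np.float64), 0.0, None)
    if weights.sum() == 0:
        return signals.mean(axis=0)
    weights = weights / weights.sum()
    return (weights[:, None] * signals).sum(axis=0)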
NASA Astrophysics Data System (ADS)
Barkley, Brett E.
A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements to update the likelihood ratio tracker (for undetected target) or to update a position probability (a previously detected target). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks with varying complexity and using UAVs at various altitudes.
Automated Historical and Real-Time Cyclone Discovery With Multimodal Remote Satellite Measurements
NASA Astrophysics Data System (ADS)
Ho, S.; Talukder, A.; Liu, T.; Tang, W.; Bingham, A.
2008-12-01
Existing cyclone detection and tracking solutions involve extensive manual analysis of modeled data and field campaign data by teams of experts. We have developed a novel automated global cyclone detection and tracking system by assimilating and sharing information from multiple remote satellites. This unprecedented solution of combining multiple remote satellite measurements in an autonomous manner allows leveraging the strengths of each individual satellite. Use of multiple satellite data sources also results in significantly improved temporal tracking accuracy for cyclones. Our solution involves an automated feature extraction and machine learning technique based on an ensemble classifier and a Kalman filter for cyclone detection and tracking from multiple heterogeneous satellite data sources. Our feature-based methodology that focuses on automated cyclone discovery is fundamentally different from, and actually complements, the well-known Dvorak technique for cyclone intensity estimation (which often relies on manual detection of cyclonic regions) from field and remote data. Our solution currently employs the QuikSCAT wind measurement and the merged level 3 TRMM precipitation data for automated cyclone discovery. Assimilation of other types of remote measurements is ongoing and planned in the near future. Experimental results of our automated solution on historical cyclone datasets demonstrate the superior performance of our automated approach compared to previous work. Performance of our detection solution compares favorably against the list of cyclones occurring in the North Atlantic Ocean for the 2005 calendar year reported by the National Hurricane Center (NHC) in our initial analysis. We have also demonstrated the robustness of our cyclone tracking methodology in other regions of the world by using multiple heterogeneous satellite data for detection and tracking of three arbitrary historical cyclones. Our cyclone detection and tracking methodology can be applied to (i) historical data to support Earth scientists in climate modeling and cyclone-climate interactions, and to obtain a better understanding of the causes and effects of cyclones (e.g., cyclogenesis), and (ii) automatic cyclone discovery in near real-time using streaming satellite data to support and improve the planning of global cyclone field campaigns. Additional satellite data from GOES and other orbiting satellites can be easily assimilated and integrated into our automated cyclone detection and tracking module to improve the temporal tracking accuracy of cyclones down to ½ hr and reduce the incidence of false alarms.
Bodenschatz, Charlott Maria; Skopinceva, Marija; Kersting, Anette; Quirin, Markus; Suslow, Thomas
2018-04-04
Cognitive theories of depression assume biased attention towards mood-congruent information as a central vulnerability and maintaining factor. Among other symptoms, depression is characterized by excessive negative affect (NA). Yet, little is known about the impact of naturally occurring NA on the allocation of attention to emotional information. The study investigates how implicit and explicit NA as well as self-reported depressive symptoms predict attentional biases in a sample of healthy individuals (N = 104). Attentional biases were assessed using eye-tracking during a free viewing task in which images of sad, angry, happy and neutral faces were shown simultaneously. Participants' implicit affectivity was measured indirectly using the Implicit Positive and Negative Affect Test. Questionnaires were administered to assess actual and habitual explicit NA and presence of depressive symptoms. Higher levels of depressive symptoms were associated with sustained attention to sad faces and reduced attention to happy faces. Implicit but not explicit NA significantly predicted gaze behavior towards sad faces independently from depressive symptoms. The present study supports the idea that naturally occurring implicit NA is associated with attention allocation to dysphoric facial expression. The findings demonstrate the utility of implicit affectivity measures in studying individual differences in depression-relevant attentional biases and cognitive vulnerability. Copyright © 2018 Elsevier B.V. All rights reserved.
Hills, Peter J; Eaton, Elizabeth; Pake, J Michael
2016-01-01
Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not, while there was no correlation between distractibility and participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with proportion of time fixating on the eyes and positively correlated with not fixating on a feature. It was negatively correlated with scan path length and this variable correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits.
Nanoparticles and clinically applicable cell tracking
Guenoun, Jamal; van Tiel, Sandra T; Krestin, Gabriel P
2015-01-01
In vivo cell tracking has emerged as a much sought after tool for design and monitoring of cell-based treatment strategies. Various techniques are available for pre-clinical animal studies, from which much has been learned and still can be learned. However, there is also a need for clinically translatable techniques. Central to in vivo cell imaging is labelling of cells with agents that can give rise to signals in vivo, that can be detected and measured non-invasively. The current imaging technology of choice for clinical translation is MRI in combination with labelling of cells with magnetic agents. The main challenge encountered during the cell labelling procedure is to efficiently incorporate the label into the cell, such that the labelled cells can be imaged at high sensitivity for prolonged periods of time, without the labelling process affecting the functionality of the cells. In this respect, nanoparticles offer attractive features since their structure and chemical properties can be modified to facilitate cellular incorporation and because they can carry a high payload of the relevant label into cells. While these technologies have already been applied in clinical trials and have increased the understanding of cell-based therapy mechanism, many challenges are still faced. PMID:26248872
Automatic tracking of wake vortices using ground-wind sensor data
DOT National Transportation Integrated Search
1977-01-03
Algorithms for automatic tracking of wake vortices using ground-wind anemometer data are developed. Methods of bad-data suppression, track initiation, and track termination are included. An effective sensor-failure detection and identification ...
ERIC Educational Resources Information Center
Tian, Mei; Lu, Genshu
2017-01-01
This study explores the challenges faced by young lecturers in managerial transformation in elite Chinese academic institutions which aim to develop into world-class universities. Drawing on data from in-depth interviews, the paper discusses how a group of lecturers on tenure-track contracts at a research university in China perceived the impacts…
ERIC Educational Resources Information Center
Xu, Di; Jaggars, Shanna Smith
2011-01-01
This report investigates enrollment patterns and academic outcomes in online, hybrid, and face-to-face courses among students who enrolled in Washington State community and technical colleges in the fall of 2004. Students were tracked for nearly five years, until the spring of 2009. Results were similar to those found in a parallel study in…
ERIC Educational Resources Information Center
Wagner, Jennifer; Luyster, Rhiannon J.; Moustapha, Hana; Tager-Flusberg, Helen; Nelson, Charles Alexander
2018-01-01
A growing body of literature has begun to explore social attention in infant siblings of children with autism spectrum disorder (ASD) with hopes of identifying early differences that are associated with later ASD or other aspects of development. The present study used eye-tracking to examine attention to familiar (mother) and unfamiliar (stranger) faces in two groups…
ERIC Educational Resources Information Center
Hwu, Fenfang
2013-01-01
Using script-based tracking to gain insights into the way students learn or process language information can be traced as far back as to the 1980s. Nevertheless, researchers continue to face challenges in collecting and studying this type of data. The objective of this study is to propose data sharing through data repositories as a way to (a) ease…
Evaluation of the Sony GDM-FW900 16:10 Aspect Ratio, 24-Inch Diagonal Flat Face CRT Color Monitor
2001-09-06
Only fragments of the report's tables are recoverable from this excerpt: color features include variable RGB gain/bias and an sRGB color display mode with adjustable color temperature for daylight color imagery; measured color tracking was less than 0.013 delta u'v' between Lmin and Lmax (pass); measured color gamut area was 27%.
AN ALTERNATIVE CALIBRATION OF CR-39 DETECTORS FOR RADON DETECTION BEYOND THE SATURATION LIMIT.
Franci, Daniele; Aureli, Tommaso; Cardellini, Francesco
2016-12-01
Time-integrated measurements of indoor radon levels are commonly carried out using solid-state nuclear track detectors (SSNTDs), due to the numerous advantages offered by this radiation detection technique. However, the use of SSNTDs also presents some problems that may affect the accuracy of the results. The effect of overlapping tracks often results in the underestimation of the detected track density, which leads to the reduction of the counting efficiency with increasing radon exposure. This article addresses the effect of overlapping tracks by proposing an alternative calibration technique based on the measurement of the fraction of the detector surface covered by alpha tracks. The method has been tested against a set of Monte Carlo data and then applied to a set of experimental data collected at the radon chamber of the Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti, at the ENEA centre in Casaccia, using CR-39 detectors. It has been shown that the method extends the detectable range of radon exposure far beyond the intrinsic limit imposed by the standard calibration based on track density. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Bae, Seung-Hwan; Yoon, Kuk-Jin
2018-03-01
Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to that moment. It remains a difficult problem in complex scenes because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between object appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide rich representations that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.
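As an editor's toy sketch of the confidence-based association idea (not the authors' formulation), the snippet below grades tracklets by a hypothetical confidence score and associates high-confidence tracklets with detections before low-confidence ones; the confidence formula, the greedy matching, and all numbers are assumptions.

```python
import numpy as np

def tracklet_confidence(length, missed, affinity, beta=0.3):
    """Toy confidence: longer, well-matched tracklets with few missed
    detections get higher confidence (formula is an assumption)."""
    return (1.0 - np.exp(-beta * length)) * affinity * (0.5 ** missed)

def associate(tracklets, detections, iou, min_conf=0.5, min_iou=0.3):
    """Greedy two-step association: high-confidence tracklets are matched
    to detections first, then the remaining low-confidence ones."""
    order = sorted(range(len(tracklets)),
                   key=lambda i: tracklets[i]["conf"], reverse=True)
    free = set(range(len(detections)))
    matches = {}
    for step in ("high", "low"):
        for i in order:
            is_high = tracklets[i]["conf"] >= min_conf
            if (step == "high") != is_high or i in matches:
                continue
            best = max(free, key=lambda j: iou[i, j], default=None)
            if best is not None and iou[i, best] >= min_iou:
                matches[i] = best
                free.remove(best)
    return matches, free          # unmatched detections can start new tracklets

# hypothetical overlap matrix between 2 tracklets and 3 detections
iou = np.array([[0.8, 0.1, 0.0],
                [0.2, 0.6, 0.1]])
tracklets = [{"conf": tracklet_confidence(10, 0, 0.9)},
             {"conf": tracklet_confidence(3, 2, 0.7)}]
print(associate(tracklets, ["d0", "d1", "d2"], iou))
```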
Detection and tracking of drones using advanced acoustic cameras
NASA Astrophysics Data System (ADS)
Busset, Joël.; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas
2015-10-01
Recent events of drones flying over city centers, official buildings and nuclear installations have stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking them can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor is completely passive and is composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). It is a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may only represent a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal coming back to the sensor. The distance of detection depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested both in laboratory environments and in outdoor conditions. It was determined that drones can be tracked at up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person 80 to 100 meters away can be captured with acceptable speech intelligibility.
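The acoustic imaging step boils down to steering the microphone array toward every candidate direction and measuring the resulting power. The sketch below is a minimal far-field delay-and-sum beamformer, not the commercial system's algorithm; the array geometry, signal model, and grid resolution are assumptions for illustration.

```python
import numpy as np

def steering_delays(mic_xyz, azimuths, elevations, c=343.0):
    """Far-field arrival delays (seconds) for each mic and each look direction."""
    az, el = np.meshgrid(azimuths, elevations, indexing="ij")
    dirs = np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1)                 # unit look vectors
    return mic_xyz @ dirs.reshape(-1, 3).T / c             # (n_mics, n_dirs)

def power_map(signals, fs, mic_xyz, azimuths, elevations):
    """Delay-and-sum power for every look direction (alignment in frequency domain)."""
    n_mics, n = signals.shape
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    delays = steering_delays(mic_xyz, azimuths, elevations)
    # remove each mic's direction-dependent delay, then sum coherently
    shifts = np.exp(-2j * np.pi * freqs[None, None, :] * delays[:, :, None])
    summed = (spectra[:, None, :] * shifts).sum(axis=0) / n_mics
    power = (np.abs(summed) ** 2).sum(axis=1)
    return power.reshape(len(azimuths), len(elevations))

# hypothetical 8-mic circular array (radius 0.2 m) and a broadband source at 30 degrees
fs, n = 16000, 2048
mic_xyz = np.array([[0.2 * np.cos(a), 0.2 * np.sin(a), 0.0]
                    for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
t = np.arange(n) / fs
wave = lambda tt: np.sin(2 * np.pi * 1100 * tt) + np.sin(2 * np.pi * 2300 * tt)
true_delay = steering_delays(mic_xyz, [np.deg2rad(30.0)], [0.0])[:, 0]
signals = wave(t[None, :] + true_delay[:, None])           # delayed copy per mic
az_grid = np.deg2rad(np.arange(0.0, 360.0, 10.0))
pm = power_map(signals, fs, mic_xyz, az_grid, np.array([0.0]))
print("estimated azimuth:", np.rad2deg(az_grid[np.argmax(pm[:, 0])]))
```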
Long-term scale adaptive tracking with kernel correlation filters
NASA Astrophysics Data System (ADS)
Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui
2018-04-01
Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
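For readers unfamiliar with kernel correlation filters, the following is a bare-bones single-channel sketch of the KCF idea (Gaussian-kernel ridge regression evaluated over all cyclic shifts via the FFT). It is an editor's illustration under simplifying assumptions, not the authors' SURF/FHOG/color-attribute model; the patch size and parameters are made up.

```python
import numpy as np

def gaussian_kernel_corr(x, z, sigma=0.5):
    """Dense Gaussian kernel correlation of two patches (single channel),
    evaluated for all cyclic shifts via the FFT."""
    n = x.size
    cross = np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(z)).real
    d2 = (x ** 2).sum() + (z ** 2).sum() - 2.0 * cross
    return np.exp(-np.maximum(d2, 0) / (sigma ** 2 * n))

def kcf_train(x, y, lam=1e-4):
    """Ridge regression in the Fourier domain: returns dual coefficients."""
    kxx = gaussian_kernel_corr(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(kxx) + lam)

def kcf_respond(alpha_hat, x, z):
    """Correlation response over all cyclic shifts of the new patch z."""
    kxz = gaussian_kernel_corr(x, z)
    return np.fft.ifft2(np.fft.fft2(kxz) * alpha_hat).real

# hypothetical 64x64 grayscale patches; the target shifts by (3, 5) pixels
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
z = np.roll(x, shift=(3, 5), axis=(0, 1))
yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
gy, gx = np.minimum(yy, 64 - yy), np.minimum(xx, 64 - xx)     # wrapped distances
y = np.exp(-(gy ** 2 + gx ** 2) / (2 * 2.0 ** 2))             # labels peaked at (0, 0)
resp = kcf_respond(kcf_train(x, y), x, z)
print("estimated shift:", np.unravel_index(resp.argmax(), resp.shape))
```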
Tracking Vessels to Illegal Pollutant Discharges Using Multisource Vessel Information
NASA Astrophysics Data System (ADS)
Busler, J.; Wehn, H.; Woodhouse, L.
2015-04-01
Illegal discharge of bilge waters is a significant source of oil and other environmental pollutants in Canadian and international waters. Imaging satellites are commonly used to monitor large areas to detect oily discharges from vessels, off-shore platforms and other sources. While remotely sensed imagery provides a snap-shot picture useful for detecting a spill or the presence of vessels in the vicinity, it is difficult to directly associate a vessel to an observed spill unless the vessel is observed while the discharge is occurring. The situation then becomes more challenging with increased vessel traffic as multiple vessels may be associated with a spill event. By combining multiple sources of vessel location data, such as Automated Information Systems (AIS), Long Range Identification and Tracking (LRIT) and SAR-based ship detection, with spill detections and drift models we have created a system that associates detected spill events with vessels in the area using a probabilistic model that intersects vessel tracks and spill drift trajectories in both time and space. Working with the Canadian Space Agency and the Canadian Ice Service's Integrated Satellite Tracking of Pollution (ISTOP) program, we use spills observed in Canadian waters to demonstrate the investigative value of augmenting spill detections with temporally sequenced vessel and spill tracking information.
Guo, Junbin; Wang, Jianqiang; Guo, Xiaosong; Yu, Chuanqiang; Sun, Xiaoyan
2014-01-01
Preceding vehicle detection and tracking at nighttime are challenging problems due to the disturbance of other extraneous illuminant sources coexisting with the vehicle lights. To improve the detection accuracy and robustness of vehicle detection, a novel method for vehicle detection and tracking at nighttime is proposed in this paper. The characteristics of taillights in the gray level are applied to determine the lower boundary of the threshold for taillights segmentation, and the optimal threshold for taillight segmentation is calculated using the OTSU algorithm between the lower boundary and the highest grayscale of the region of interest. The candidate taillight pairs are extracted based on the similarity between left and right taillights, and the non-vehicle taillight pairs are removed based on the relevance analysis of vehicle location between frames. To reduce the false negative rate of vehicle detection, a vehicle tracking method based on taillights estimation is applied. The taillight spot candidate is sought in the region predicted by Kalman filtering, and the disturbed taillight is estimated based on the symmetry and location of the other taillight of the same vehicle. Vehicle tracking is completed after estimating its location according to the two taillight spots. The results of experiments on a vehicle platform indicate that the proposed method could detect vehicles quickly, correctly and robustly in the actual traffic environments with illumination variation. PMID:25195855
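A minimal sketch of the taillight segmentation step is given below, assuming OpenCV 4.x: Otsu's threshold is computed only over pixels above an assumed lower gray-level boundary, loosely following the idea in the abstract. The ROI, boundary value, and area filter are illustrative assumptions, and the taillight-pairing and tracking stages are omitted.

```python
import cv2
import numpy as np

def segment_taillights(frame_bgr, roi, lower_gray=120):
    """Segment bright taillight blobs in a night-time frame.

    roi: (x, y, w, h) region of interest ahead of the vehicle (assumed).
    lower_gray: assumed lower boundary for the Otsu search range.
    """
    x, y, w, h = roi
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # restrict Otsu to pixels above the lower boundary so dark road pixels
    # do not dominate the histogram (simplified version of the abstract's idea)
    candidates = gray[gray >= lower_gray]
    if candidates.size == 0:
        return []
    otsu_thresh, _ = cv2.threshold(candidates.reshape(1, -1), 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, mask = cv2.threshold(gray, max(otsu_thresh, lower_gray), 255,
                            cv2.THRESH_BINARY)
    # connected bright blobs are taillight candidates (OpenCV 4.x return values)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
    return [(bx + x, by + y, bw, bh) for bx, by, bw, bh in boxes]
```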
Landscape-Scale Analysis of Wetland Sediment Deposition from Four Tropical Cyclone Events
Tweel, Andrew W.; Turner, R. Eugene
2012-01-01
Hurricanes Katrina, Rita, Gustav, and Ike deposited large quantities of sediment on coastal wetlands after making landfall in the northern Gulf of Mexico. We sampled sediments deposited on the wetland surface throughout the entire Louisiana and Texas depositional surfaces of Hurricanes Katrina, Rita, Gustav, and the Louisiana portion of Hurricane Ike. We used spatial interpolation to model the total amount and spatial distribution of inorganic sediment deposition from each storm. The sediment deposition on coastal wetlands was an estimated 68, 48, and 21 million metric tons from Hurricanes Katrina, Rita, and Gustav, respectively. The spatial distribution decreased in a similar manner with distance from the coast for all hurricanes, but the relationship with distance from the storm track was more variable between events. The southeast-facing Breton Sound estuary had significant storm-derived sediment deposition west of the storm track, whereas sediment deposition along the south-facing coastline occurred primarily east of the storm track. Sediment organic content, bulk density, and grain size also decreased significantly with distance from the coast, but were also more variable with respect to distance from the track. On average, eighty percent of the mineral deposition occurred within 20 km from the coast, and 58% was within 50 km of the track. These results highlight an important link between tropical cyclone events and coastal wetland sedimentation, and are useful in identifying a more complete sediment budget for coastal wetland soils. PMID:23185635
Li, Jun; Liu, Jiangang; Liang, Jimin; Zhang, Hongchuan; Zhao, Jizheng; Rieth, Cory A.; Huber, David E.; Li, Wu; Shi, Guangming; Ai, Lin; Tian, Jie; Lee, Kang
2013-01-01
To study top-down face processing, the present study used an experimental paradigm in which participants detected non-existent faces in pure noise images. Conventional BOLD signal analysis identified three regions involved in this illusory face detection. These regions included the left orbitofrontal cortex (OFC) in addition to the right fusiform face area (FFA) and right occipital face area (OFA), both of which were previously known to be involved in both top-down and bottom-up processing of faces. We used Dynamic Causal Modeling (DCM) and Bayesian model selection to further analyze the data, revealing both intrinsic and modulatory effective connectivities among these three cortical regions. Specifically, our results support the claim that the orbitofrontal cortex plays a crucial role in the top-down processing of faces by regulating the activities of the occipital face area, and the occipital face area in turn detects the illusory face features in the visual stimuli and then provides this information to the fusiform face area for further analysis. PMID:20423709
Design of tracking and detecting lens system by diffractive optical method
NASA Astrophysics Data System (ADS)
Yang, Jiang; Qi, Bo; Ren, Ge; Zhou, Jianwei
2016-10-01
Many target-tracking applications require an optical system to acquire the target for tracking and identification. This paper describes a new detecting optical system that can provide automatic flying-object detection, tracking and measurement in the visible band. The main feature of the detecting lens system is the combination of diffractive optics with traditional lens design by a technique invented by Schupmann. Diffractive lenses have great potential for developing large-aperture, lightweight optics. First, the optical system scheme is described. Then the Schupmann achromatic principle, using a diffractive lens and corrective optics, is introduced. According to the technical features and requirements of the optical imaging system for detecting and tracking, we designed a lens system based on a flat-surface Fresnel lens, with the chromatic aberration of the optical system cancelled by another flat-surface Fresnel lens. The system has an effective focal length of 1980 mm, an F-number of F/9.9, a field of view of 2ω = 14.2', a spatial resolution of 46 lp/mm and a working wavelength range of 0.6-0.85 μm. Finally, the system is compact and easy to fabricate and assemble, and analyses of the diffuse spot size, MTF and other metrics indicate good performance.
Real-time Human Activity Recognition
NASA Astrophysics Data System (ADS)
Albukhary, N.; Mustafah, Y. M.
2017-11-01
The traditional Closed-circuit Television (CCTV) system requires a human to monitor the CCTV feed 24/7, which is inefficient and costly. Therefore, there is a need for a system which can recognize human activity effectively in real time. This paper concentrates on recognizing simple activities such as walking, running, sitting, standing and landing by using image processing techniques. Firstly, object detection is done by using background subtraction to detect moving objects. Then, object tracking and object classification are constructed so that different persons can be differentiated by using feature detection. Geometrical attributes of the tracked object, namely the centroid and aspect ratio of the identified track, are then used so that simple activities can be detected.
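As a hedged sketch of this pipeline (assuming OpenCV 4.x), the code below runs MOG2 background subtraction, takes bounding boxes of moving blobs, and labels a posture from the blob's aspect ratio and speed. The thresholds, the crude per-index blob association, and the video file name are assumptions, and a real tracker would be considerably more careful.

```python
import cv2

def classify_posture(aspect_ratio, speed):
    """Very rough posture rules (thresholds are assumptions, not the paper's):
    tall narrow blobs are upright, wide blobs are sitting/lying, and a fast
    upright blob is treated as running."""
    if aspect_ratio > 1.8:
        return "running" if speed > 15 else "standing/walking"
    return "sitting/lying"

cap = cv2.VideoCapture("camera.mp4")                  # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
prev = {}                                             # crude per-index memory
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                            # moving pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for i, c in enumerate(contours):
        if cv2.contourArea(c) < 500:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2
        px, py = prev.get(i, (cx, cy))
        speed = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
        prev[i] = (cx, cy)
        label = classify_posture(h / float(w), speed)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("activity", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```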
NASA Astrophysics Data System (ADS)
Wang, Yao; Vijaya Kumar, B. V. K.
2017-05-01
The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.
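To make the 2D-equalizer idea concrete, here is a generic least-squares sketch (an editor's assumption-laden illustration, not the paper's variable-equalizer scheme): readback samples from the main track and its two neighbours are combined by a (tracks x taps) filter trained on a known bit sequence, in the spirit of PRBS training. The toy channel, leakage coefficients and noise level are made up.

```python
import numpy as np

def design_2d_equalizer(readback, target_bits, n_taps=5):
    """Least-squares 2D equalizer: a (tracks x taps) filter mapping multi-track
    readback samples to the main-track bit (a simple 1D target)."""
    n_tracks, n = readback.shape
    half = n_taps // 2
    rows = [readback[:, k - half:k + half + 1].ravel() for k in range(half, n - half)]
    w, *_ = np.linalg.lstsq(np.array(rows), target_bits[half:n - half], rcond=None)
    return w.reshape(n_tracks, n_taps)

def equalize(readback, w):
    """Apply the 2D equalizer sample by sample."""
    half = w.shape[1] // 2
    out = np.zeros(readback.shape[1])
    for k in range(half, readback.shape[1] - half):
        out[k] = np.sum(w * readback[:, k - half:k + half + 1])
    return out

# toy BPMR-like channel: main track plus inter-track interference and noise
rng = np.random.default_rng(1)
bits = rng.choice([-1.0, 1.0], size=(3, 2000))        # bits on 3 adjacent tracks
readback = bits.copy()
readback[1] += 0.5 * (bits[0] + bits[2])              # ITI leaking onto main track
readback[0] += 0.5 * bits[1]
readback[2] += 0.5 * bits[1]
readback += 0.1 * rng.standard_normal(readback.shape)

w = design_2d_equalizer(readback, bits[1])
eq = equalize(readback, w)
print("raw BER:      ", np.mean(np.sign(readback[1]) != bits[1]))
print("equalized BER:", np.mean(np.sign(eq[2:-2]) != bits[1, 2:-2]))
```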
Object acquisition and tracking for space-based surveillance
NASA Astrophysics Data System (ADS)
1991-11-01
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
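A minimal illustration of track-before-detect follows (an editor's toy example, not the contractor's processing chain): instead of thresholding each frame, energy is integrated along every constant-velocity candidate track and the integrated score is thresholded. The frame size, velocity range, target amplitude, and threshold are all assumptions.

```python
import numpy as np

def track_before_detect(frames, vmax=2, threshold=8.0):
    """Integrate energy along constant-velocity candidate tracks over all
    frames, then threshold the integrated score (rather than each frame)."""
    n_frames, h, w = frames.shape
    best = None
    for vy in range(-vmax, vmax + 1):
        for vx in range(-vmax, vmax + 1):
            score = np.zeros((h, w))
            for t in range(n_frames):
                # align frame t so a target moving at (vy, vx) stacks up
                score += np.roll(frames[t], shift=(-vy * t, -vx * t), axis=(0, 1))
            k = np.unravel_index(score.argmax(), score.shape)
            if score[k] > threshold and (best is None or score[k] > best[0]):
                best = (float(score[k]), k, (vy, vx))
    return best                       # (integrated score, start pixel, velocity)

# toy scene: a faint target, too weak for reliable single-frame detection,
# moves at (1, 1) pixels per frame through unit-variance Gaussian noise
rng = np.random.default_rng(2)
frames = rng.standard_normal((8, 32, 32))
for t in range(8):
    frames[t, 5 + t, 5 + t] += 2.0
print(track_before_detect(frames))
```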
A Viola-Jones based hybrid face detection framework
NASA Astrophysics Data System (ADS)
Murphy, Thomas M.; Broussard, Randy; Schultz, Robert; Rakvic, Ryan; Ngo, Hau
2013-12-01
Improvements in face detection performance would benefit many applications. The OpenCV library implements a standard solution, the Viola-Jones detector, with a statistically boosted rejection cascade of binary classifiers. Empirical evidence has shown that Viola-Jones underdetects in some instances. This research shows that a truncated cascade augmented by a neural network can recover these undetected faces. A hybrid framework is constructed, with a truncated Viola-Jones cascade followed by an artificial neural network used to refine the face decision. The truncation stage is chosen so that it captures all faces and leaves the neural network to remove the false alarms. A feedforward backpropagation network with one hidden layer is trained to discriminate faces based upon the thresholding (detection) values of intermediate stages of the full rejection cascade. A clustering algorithm is used as a precursor to the neural network to group significant overlappings. Evaluated on the CMU/VASC Image Database, comparison with an unmodified OpenCV approach shows: (1) a 37% increase in detection rates if constrained by the requirement of no increase in false alarms, (2) a 48% increase in detection rates if some additional false alarms are tolerated, and (3) an 82% reduction in false alarms with no reduction in detection rates. These results demonstrate improved face detection and could address the need for such improvement in various applications.
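The sketch below uses OpenCV's stock Viola-Jones cascade to generate candidates and a stand-in scoring function in place of the trained neural network. OpenCV's Python API does not expose intermediate cascade stages, so loosening minNeighbors is used here only as an analogy for truncation; the image file, window size, and score threshold are assumptions.

```python
import cv2
import numpy as np

# Standard OpenCV Viola-Jones cascade (full rejection cascade, for contrast
# with the truncated-cascade idea described in the abstract).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def refine_with_nn(gray, boxes, score_fn):
    """Hypothetical second stage: a trained classifier (score_fn) re-scores
    each candidate window and removes likely false alarms."""
    kept = []
    for (x, y, w, h) in boxes:
        patch = cv2.resize(gray[y:y + h, x:x + w], (24, 24)).astype(np.float32)
        if score_fn(patch / 255.0) > 0.5:
            kept.append((x, y, w, h))
    return kept

img = cv2.imread("group_photo.jpg")          # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# lowering minNeighbors loosens the cascade (more detections, more false
# alarms), mimicking a truncated cascade; the second stage then prunes them
candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=1)
dummy_score = lambda patch: float(patch.mean() > 0.2)   # stand-in for a real NN
faces = refine_with_nn(gray, candidates, dummy_score)
print(len(candidates), "candidates ->", len(faces), "faces after refinement")
```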
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Peter J.; Edson, Patrick L.
2013-12-20
This project saw the completion of the design and development of a second generation, high frequency (90-120 kHz) Subsurface-Threat Detection Sonar Network (SDSN). The system was deployed, operated, and tested in Cobscook Bay, Maine near the site of the Ocean Renewable Power Company TidGen™ power unit. This effort resulted in a very successful demonstration of the SDSN detection, tracking, localization, and classification capabilities in a high current, MHK environment as measured by results from the detection and tracking trials in Cobscook Bay. The new high frequency node, designed to operate outside the hearing range of a subset of marine mammals, was shown to detect and track objects of marine mammal-like target strength to ranges of approximately 500 meters. This performance range results in the SDSN system tracking objects for a significant duration - on the order of minutes - even in a tidal flow of 5-7 knots, potentially allowing time for MHK system or operator decision-making if marine mammals are present. Having demonstrated detection and tracking of synthetic targets with target strengths similar to some marine mammals, the primary hurdle to eventual automated monitoring is a dataset of actual marine mammal kinematic behavior and modifying the tracking algorithms and parameters which are currently tuned to human diver kinematics and classification.
Adapting Local Features for Face Detection in Thermal Image.
Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro
2017-11-27
A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting condition. This similarity in face appearances is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring the local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to get cascade classifiers with multiple types of local features. These feature types have different advantages. In this way we enhance the description power of local features. We did a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained by different sets of the features. The experiment results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared the face detection performance in realistic scenes using thermal and RGB images, and gave discussion based on the results.
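As a rough illustration of the Multi-Block LBP features being extended here, the snippet below computes a plain MB-LBP code for one 3x3 block grid and adds an assumed noise margin around the centre mean, in the spirit of the abstract's modification; the margin value, block size, and toy thermal patch are made up.

```python
import numpy as np

def mb_lbp_code(img, x, y, block=3):
    """Multi-Block LBP: average each of the 3x3 blocks of size `block`,
    then threshold the 8 surrounding block means against the centre mean
    (plus an assumed margin) to form an 8-bit code."""
    means = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            patch = img[y + i * block:y + (i + 1) * block,
                        x + j * block:x + (j + 1) * block]
            means[i, j] = patch.mean()
    margin = 2.0                                   # assumed noise margin
    centre = means[1, 1]
    # clockwise order of the 8 neighbouring blocks
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if means[i, j] >= centre + margin else 0 for i, j in order]
    return int("".join(map(str, bits)), 2)

# toy 9x9 "thermal" patch: a warmer band along the top against a cooler face
patch = np.full((9, 9), 20.0)
patch[0:3, :] = 40.0
print(mb_lbp_code(patch, 0, 0))     # top three blocks set -> code 0b11100000
```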
Aerial tracking of radio-marked white-tailed tropicbirds over the Caribbean Sea
Fuller, M.R.; Obrecht, H.H.; Pennycuick, C.J.; Schaffner, F.C.; Amlaner, Charles J.
1989-01-01
We radio-marked nesting white-tailed tropicbirds at Culebra National Wildlife Refuge, Puerto Rico, and tracked them from a Cessna 182 during flights over the open sea. Locations of the birds were determined using standard aerial telemetry techniques for side-facing Yagi antennas. We used strut-mounted, 4-element Yagi antennas connected to a switchbox and scanning receiver. By recording bearing and distance from at least 1 of 3 aeronautical navigation beacons, the position of the aircraft and the bird could be estimated with an error of about 2 km. On several occasions we plotted the general heading of a bird and then relocated and tracked the same bird on the following day. Our method of aerial tracking and navigation was useful for tracking birds over the sea to at least 116 km from the breeding colony.
Lu, Shengfu; Xu, Jiying; Li, Mi; Xue, Jia; Lu, Xiaofeng; Feng, Lei; Fu, Bingbing; Wang, Gang; Zhong, Ning; Hu, Bin
2017-10-01
Objective: To compare the attentional bias of depressed patients and non-depressed control subjects and examine the effects of age using eye-tracking technology in a free-viewing set of tasks. Methods: Patients with major depressive disorder (MDD) and non-depressed control subjects completed an eye-tracking task to assess attentional processing of negative, positive and neutral facial expressions. In this cross-sectional study, the tasks were separated into two types (neutral versus happy faces and neutral versus sad faces) and assessed in two age groups ('young' [18-30 years] and 'middle-aged' [31-55 years]). Results: Compared with non-depressed control subjects (n = 75), patients with MDD (n = 90) had a significantly reduced positive attentional bias and enhanced negative attentional bias irrespective of age. The positive attentional bias in 'middle-aged' patients with MDD was significantly lower than in 'young' patients, although there was no difference between the two age groups in negative attentional bias. Conclusions: These results confirm that there are emotional attentional biases in patients with MDD and that positive attentional biases are influenced by age.
Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Tian, Xin; Pan, Le-chun
2014-07-01
Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the optical heterodyne mixing efficiency, but the performance of traditional centroid tracking tilt correction is poor under strong turbulence conditions. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, the performance of peak value tracking tilt correction is distinctly better than that of the traditional centroid tracking tilt correction method, and the phenomenon of a large antenna performing worse than a small antenna, which may occur with the centroid tracking method, can also be avoided with the peak value tracking method.
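The difference between the two estimators is easy to see in a toy focal-plane frame. The sketch below (an editor's illustration with made-up speckle parameters, not the paper's simulation) compares the intensity-weighted centroid with the brightest-pixel location when secondary speckles bias the centroid.

```python
import numpy as np

def centroid_tilt(img):
    """Tilt estimate from the intensity-weighted centroid of the focal spot."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return np.array([(ys * img).sum(), (xs * img).sum()]) / total

def peak_tilt(img):
    """Tilt estimate from the location of the brightest pixel."""
    return np.array(np.unravel_index(img.argmax(), img.shape), dtype=float)

# toy focal-plane frame: the main lobe sits at (40, 52), while
# turbulence-induced secondary speckles bias the centroid estimate
rng = np.random.default_rng(3)
ys, xs = np.indices((64, 64))
img = np.exp(-((ys - 40) ** 2 + (xs - 52) ** 2) / (2 * 2.0 ** 2))      # main lobe
for _ in range(5):                                                     # speckles
    cy, cx = rng.integers(0, 64, 2)
    img += 0.3 * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * 3.0 ** 2))
print("centroid estimate:", centroid_tilt(img))
print("peak estimate    :", peak_tilt(img))
```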
LEA Detection and Tracking Method for Color-Independent Visual-MIMO
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-01-01
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
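A hedged sketch of the detect-then-predict loop is given below, assuming OpenCV 4.x: Harris corners (via goodFeaturesToTrack) are sought inside an assumed HSV mask for bright LED pixels, and an OpenCV constant-velocity Kalman filter predicts the LEA centre between detections. The HSV range, filter noise levels, and video file name are illustrative assumptions, and the perspective-correction step is omitted.

```python
import cv2
import numpy as np

def detect_lea(frame_bgr):
    """Find LED-like corner points inside a colour-based ROI, then use the
    Harris corner detector (via goodFeaturesToTrack) within that ROI."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 180), (179, 255, 255))   # assumed LED range
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=16, qualityLevel=0.05,
                                  minDistance=5, mask=mask,
                                  useHarrisDetector=True, k=0.04)
    return None if pts is None else pts.reshape(-1, 2).mean(axis=0)  # LEA centre

def make_kf():
    """Constant-velocity Kalman filter for the LEA centre (OpenCV API)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

kf = make_kf()
cap = cv2.VideoCapture("visual_mimo_rx.mp4")       # hypothetical receiver video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    predicted = kf.predict()[:2].ravel()           # where the LEA should be
    centre = detect_lea(frame)
    if centre is not None:
        kf.correct(centre.astype(np.float32).reshape(2, 1))
    cv2.circle(frame, tuple(int(v) for v in predicted), 8, (0, 0, 255), 2)
```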
Activity of southeastern bats along sandstone cliffs used for rock climbing
Loeb, Susan C.; Jodice, Patrick G. R.
2018-01-01
Bats in the eastern U.S. are facing numerous threats and many species are in decline. Although several species of bats commonly roost in cliffs, little is known about use of cliffs for foraging and roosting. Because rock climbing is a rapidly growing sport and may cause disturbance to bats, our objectives were to examine use of cliff habitats by bats and to assess the effects of climbing on their activity. We used radio-telemetry to track small-footed bats (Myotis leibii) to day roosts, and Anabat SD2 detectors to compare bat activity between climbed and unclimbed areas of regularly climbed cliff faces, and between climbed and unclimbed cliffs. Four adult male small-footed bats were tracked to nine day roosts, all of which were in various types of crevices including five cliff face roosts (three on climbed and two on unclimbed faces). Bat activity was high along climbed cliffs and did not differ between climbed and unclimbed areas of climbed cliffs. In contrast, overall bat activity was significantly higher along climbed cliffs than unclimbed cliffs; species richness did not differ between climbed and unclimbed cliffs or areas. Lower activity along unclimbed cliffs may have been related to lower cliff heights and more clutter along these cliff faces. Due to limited access to unclimbed cliffs of comparable size to climbed cliffs, we could not thoroughly test the effects of climbing on bat foraging and roosting activity. However, the high overall use of climbed and unclimbed cliff faces for foraging and commuting that we observed suggests that cliffs may be important habitat for a number of bat species. Additional research on bats' use of cliff faces will improve our understanding of the factors that affect their use of this habitat including the impacts of climbing.
NASA Astrophysics Data System (ADS)
Guidang, Excel Philip B.; Llanda, Christopher John R.; Palaoag, Thelma D.
2018-03-01
A face detection technique as a strategy for controlling a multimedia instructional material was implemented in this study. Specifically, it achieved the following objectives: 1) developed a face detection application, written in Python, that controls an embedded mother-tongue-based instructional material through face-recognition configuration; 2) determined the perceptions of the students using the Mutt Susan's student app review rubric. The study concludes that the face detection technique is effective in controlling an electronic instructional material and can be used to change the way students interact with an instructional material. 90% of the students perceived the application to be a great app and 10% rated it as good.
Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots
Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko
2012-01-01
This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171
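Covariance intersection is the fusion rule that lets the robots combine tracks whose errors may be correlated in unknown ways. Below is a minimal two-estimate sketch (an editor's illustration; the position values and covariances are made up) that picks the mixing weight omega by minimising the trace of the fused covariance.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two track estimates with unknown cross-correlation using
    covariance intersection; omega is chosen to minimise the fused trace."""
    best = None
    for omega in np.linspace(0.01, 0.99, 99):
        P_inv = omega * np.linalg.inv(P1) + (1 - omega) * np.linalg.inv(P2)
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (omega * np.linalg.inv(P1) @ x1
                     + (1 - omega) * np.linalg.inv(P2) @ x2)
            best = (x, P, omega)
    return best

# hypothetical pedestrian position estimates from two robots (metres)
x1, P1 = np.array([10.2, 4.9]), np.diag([0.30, 0.05])
x2, P2 = np.array([10.0, 5.2]), np.diag([0.05, 0.40])
x, P, omega = covariance_intersection(x1, P1, x2, P2)
print("fused position:", x, "omega:", round(omega, 2))
```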
You Look Familiar: How Malaysian Chinese Recognize Faces
Tan, Chrystalle B. Y.; Stephen, Ian D.; Whitehead, Ross; Sheppard, Elizabeth
2012-01-01
East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar. PMID:22253762
Magnetic gradiometer for underwater detection applications
NASA Astrophysics Data System (ADS)
Kumar, S.; Skvoretz, D. C.; Moeller, C. R.; Ebbert, M. J.; Perry, A. R.; Ostrom, R. K.; Tzouris, A.; Bennett, S. L.; Czipott, P. V.; Sulzberger, G.; Allen, G. I.; Bono, J.; Clem, T. R.
2006-05-01
We have designed and constructed a magnetic gradiometer for underwater mine detection, location and tracking. The United States Naval Surface Warfare Center (NSWC PC) in Panama City, FL has conducted sea tests of the system using an unmanned underwater vehicle (UUV). The Real-Time Tracking Gradiometer (RTG) measures the magnetic field gradients caused by the presence of a mine in the Earth's magnetic field. These magnetic gradients can then be used to detect and locate a target with the UUV in motion. Such a platform can also be used for other applications, including the detection and tracking of vessels and divers for homeland (e.g., port) security and the detection of underwater pipelines. Data acquired by the RTG in sea tests is presented in this paper.
Vehicle Tracking System using Nanotechnology Satellites and Tags
NASA Technical Reports Server (NTRS)
Lorenzini, Dino A.; Tubis, Chris
1995-01-01
This paper describes a joint project to design, develop, and deploy a satellite based tracking system incorporating micro-nanotechnology components. The system consists of a constellation of 'nanosats', a satellite command station and data collection sites, and a large number of low-cost electronic 'tags'. Both government and commercial applications are envisioned for the satellite based tracking system. The projected low price for the tracking service is made possible by the lightweight nanosats and inexpensive electronic tags which use high production volume single chip transceivers and microprocessor devices. The nanosat consists of a five inch aluminum cube with body mounted solar panels (GaAs solar cells) on all six faces. A UHF turnstile antenna and a simple, spring release mechanism complete the external configuration of the spacecraft.
Nestor, Adrian; Vettel, Jean M; Tarr, Michael J
2013-11-01
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
Microcomputer aided tracking (MCAT)
NASA Astrophysics Data System (ADS)
Mays, A. B.; Cross, D. C.; Walters, J. L.
1983-07-01
The goal of the MCAT project was to investigate the effectiveness of operator-initiated tracks followed by automatic tracking. Adding this capability to a display was intended to relieve the operator overload and fatigue which result when the operator is limited to grease-pencil tracking. MCAT combines several microprocessors and a microcomputer-driven PPI (Plan Position Indicator) with graphics capability. The operator is required to make the initial detection and MCAT then performs automatic detection and tracking in a limited area centered around the detection. This approach was chosen because it is far less costly than a full-up auto detect and track approach. MCAT is intended for use in a non-NTDS (Naval Tactical Data System) environment where operator aids are minimal at best. There are approximately 200 non-NTDS ships in today's Navy. Each of these ships has a combat information center (CIC) which includes numerous PPIs (typically SPA-25s, SPA-66s, SPA-50s) and various manual means (e.g., air summary plotboards, NC-2 plotters) of producing summary plots and performing calculations (e.g., maneuvering board paper) pertinent to tracks in progress. The operator's duties are time-consuming and there are many things that could be done via computer control and graphics displays that the non-NTDS operator must now do manually. Because there is much manual information handling, accumulation of data is slow and there is a large probability of error.
Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas
2008-01-01
PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); then the user establishes the exact position at a specific landmark, like a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.
Lo, L Y; Cheng, M Y
2017-06-01
Detection of angry and happy faces is generally found to be easier and faster than that of faces expressing emotions other than anger or happiness. This can be explained by the threatening account and the feature account. Few empirical studies have explored the interaction between these two accounts which are seemingly, but not necessarily, mutually exclusive. The present studies hypothesised that prominent facial features are important in facilitating the detection process of both angry and happy expressions; yet the detection of happy faces was more facilitated by the prominent features than angry faces. Results confirmed the hypotheses and indicated that participants reacted faster to the emotional expressions with prominent features (in Study 1) and the detection of happy faces was more facilitated by the prominent feature than angry faces (in Study 2). The findings are compatible with evolutionary speculation which suggests that the angry expression is an alarming signal of potential threats to survival. Compared to the angry faces, the happy faces need more salient physical features to obtain a similar level of processing efficiency. © 2015 International Union of Psychological Science.
NASA Astrophysics Data System (ADS)
Elbouz, Marwa; Alfalou, Ayman; Brosseau, Christian
2011-06-01
Home automation is being implemented in more and more domiciles of the elderly and disabled in order to maintain their independence and safety. For that purpose, we propose and validate a surveillance video system which detects various posture-based events. One of the novel points of this system is the use of adapted VanderLugt correlator (VLC) and joint transform correlator (JTC) techniques to make decisions on the identity of a patient and his three-dimensional (3-D) position, in order to overcome the problem of crowded environments. We propose a fuzzy logic technique to reach decisions on the subject's behavior. Our system is focused on the goals of accuracy, convenience, and cost, and in addition does not require any devices attached to the subject. The system permits one to study and model subject responses to behavioral change intervention, because several levels of alarm can be incorporated according to the different situations considered. Our algorithm performs a fast 3-D recovery of the subject's head position by locating the eyes within the face image, and involves model-based prediction and optical correlation techniques to guide the tracking procedure. The object detection is based on the (hue, saturation, value) color space. The system also involves an adapted fuzzy logic control algorithm to make a decision based on the information given to the system. Furthermore, the principles described here are applicable to a very wide range of situations and are robust enough to be implementable in ongoing experiments.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image ... fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The ... moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the human face, so human faces at different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is consistent with the physical structure features of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is highly robust and fast. It has wide application prospects in human-computer interaction, visual telephony, etc.
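A minimal sketch of the first level (skin-like region segmentation) is shown below, assuming OpenCV 4.x; the hue/saturation and normalized red/green thresholds are illustrative assumptions, not the values used in the paper, and the eye-model and mosaic-model levels are omitted.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr):
    """First-level skin segmentation: threshold hue/saturation in HSV plus a
    normalized red/green check.  The numeric ranges are assumptions chosen
    for illustration."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s = hsv[..., 0].astype(np.float32), hsv[..., 1].astype(np.float32)
    hs_mask = ((h <= 25) | (h >= 165)) & (s >= 40) & (s <= 200)   # OpenCV hue 0-179

    bgr = frame_bgr.astype(np.float32) + 1e-6
    total = bgr.sum(axis=2)
    r_norm, g_norm = bgr[..., 2] / total, bgr[..., 1] / total     # normalized rgb
    rg_mask = (r_norm > 0.36) & (g_norm > 0.25) & (g_norm < 0.37)

    mask = (hs_mask & rg_mask).astype(np.uint8) * 255
    # clean up small holes/noise before passing regions to the next level
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

img = cv2.imread("portrait.jpg")                  # hypothetical input image
if img is not None:
    regions, _ = cv2.findContours(skin_mask(img), cv2.RETR_EXTERNAL,
                                  cv2.CHAIN_APPROX_SIMPLE)
    print(len(regions), "skin-like candidate regions")
```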
NASA Astrophysics Data System (ADS)
Zhao, Yiqun; Wang, Zhihui
2015-12-01
The Internet of Things (IoT) is a kind of intelligent network that can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, research is conducted on visual feature extraction and the establishment of visual tags for the human face based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt the support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can display the visual tags of objects conveniently.
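A compact sketch of the PCA-plus-SVM pipeline is given below using scikit-learn's copy of the AT&T/ORL face database (fetch_olivetti_faces); the train/test split, number of components, and SVM parameters are illustrative assumptions rather than the paper's settings.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Olivetti faces (the AT&T/ORL face database): 400 images of 40 subjects
faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target,
    random_state=0)

# PCA for feature extraction followed by an SVM classifier
model = make_pipeline(PCA(n_components=60, whiten=True, random_state=0),
                      SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)
print("recognition accuracy:", round(model.score(X_test, y_test), 3))

# the predicted identity could then be used to look up or create a visual tag
predicted_id = model.predict(X_test[:1])[0]
print("visual tag for first test face: subject", predicted_id)
```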
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
False star detection and isolation during star tracking based on improved chi-square tests.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Yang, Yanqiang; Su, Guohua
2017-08-01
The star sensor is a precise attitude measurement device for a spacecraft. Star tracking is the main and key working mode for a star sensor. However, during star tracking, false stars become an inevitable interference for star sensor applications, which may result in degraded measurement accuracy. A false star detection and isolation algorithm for star tracking based on improved chi-square tests is proposed in this paper. Two estimations are established, based on a Kalman filter and on a priori information, respectively. False star detection is performed by applying a global state chi-square test within the Kalman filter, and false star isolation is achieved using a local state chi-square test. Semi-physical experiments under different trajectories with various false stars were designed for verification. Experiment results show that various false stars can be detected and isolated from navigation stars during star tracking, and that the attitude measurement accuracy is hardly influenced by false stars. The proposed algorithm is proved to have excellent performance in terms of speed, stability, and robustness.
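A stand-in sketch of the state chi-square test is shown below (an editor's simplified illustration of one ingredient, not the paper's full detection/isolation scheme): the normalized innovation squared of a measurement is compared with a chi-square threshold. The covariances and pixel values are made-up examples.

```python
import numpy as np
from scipy.stats import chi2

def innovation_chi_square(z, z_pred, H, P, R, alpha=0.01):
    """Chi-square consistency test on a measurement: flag the observation
    (e.g. a tracked star) as false if its normalized innovation squared
    exceeds the chi-square threshold."""
    nu = z - z_pred                          # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    d2 = float(nu.T @ np.linalg.inv(S) @ nu)
    threshold = chi2.ppf(1.0 - alpha, df=len(z))
    return d2 > threshold, d2

# hypothetical 2D star-centroid measurement against its Kalman prediction
H = np.eye(2)
P = np.diag([0.04, 0.04])                    # predicted state covariance (pixel^2)
R = np.diag([0.01, 0.01])                    # measurement noise covariance
z_pred = np.array([128.0, 256.0])
print(innovation_chi_square(np.array([128.1, 256.2]), z_pred, H, P, R))  # genuine
print(innovation_chi_square(np.array([131.0, 259.0]), z_pred, H, P, R))  # false star
```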
Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.
2012-01-01
We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
ERIC Educational Resources Information Center
Sanders, Jackie; Munford, Robyn; Thimasarn-Anwar, Tewaporn
2016-01-01
This article draws on the findings from a mixed-methods New Zealand study of the experience of service use of 605 vulnerable young people (aged 13-17 years). Drawing on the survey data, it focuses on the factors that assisted young people to stay on-track with their education. Key findings include: being able to stay at mainstream school was the…
Detection of Spoofed MAC Addresses in 802.11 Wireless Networks
NASA Astrophysics Data System (ADS)
Tao, Kai; Li, Jing; Sampalli, Srinivas
Medium Access Control (MAC) address spoofing is considered as an important first step in a hacker's attempt to launch a variety of attacks on 802.11 wireless networks. Unfortunately, MAC address spoofing is hard to detect. Most current spoofing detection systems mainly use the sequence number (SN) tracking technique, which has drawbacks. Firstly, it may lead to an increase in the number of false positives. Secondly, such techniques cannot be used in systems with wireless cards that do not follow standard 802.11 sequence number patterns. Thirdly, attackers can forge sequence numbers, thereby causing the attacks to go undetected. We present a new architecture called WISE GUARD (Wireless Security Guard) for detection of MAC address spoofing on 802.11 wireless LANs. It integrates three detection techniques - SN tracking, Operating System (OS) fingerprinting & tracking and Received Signal Strength (RSS) fingerprinting & tracking. It also includes the fingerprinting of Access Point (AP) parameters as an extension to the OS fingerprinting for detection of AP address spoofing. We have implemented WISE GUARD on a test bed using off-the-shelf wireless devices and open source drivers. Experimental results show that the new design enhances the detection effectiveness and reduces the number of false positives in comparison with current approaches.
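Of the three cues WISE GUARD combines, sequence-number tracking is the simplest to illustrate. The sketch below flags suspicious 802.11 sequence-number jumps; the gap threshold and function name are assumptions for illustration only, not the system's actual rule.

```python
def sn_anomaly(prev_sn: int, curr_sn: int, max_gap: int = 5) -> bool:
    """802.11 sequence numbers are 12-bit counters; flag repeated or large forward jumps."""
    gap = (curr_sn - prev_sn) % 4096   # wrap-around aware difference
    return gap == 0 or gap > max_gap
```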
Still Not Enough Time in the Day: Media Specialists, Program Planning and Time Management, Part II
ERIC Educational Resources Information Center
Fitzgerald, Mary Ann; Waldrip, Andrea
2004-01-01
The ways in which one can keep the library media plan on track, in the face of the realistic challenges faced by media specialists, are discussed. These include a system for prioritizing tasks, both planned goal-related tasks and tasks that walk in the door with varying degrees of urgency, together with further time management strategies. [For Part I, see…
A&M. A&M building (TAN607). Camera facing east. From left to ...
A&M. A&M building (TAN-607). Camera facing east. From left to right, pool section, hot shop, cold shop, and machine shop. Biparting doors to hot shop are in open position behind shroud. Four rail tracks lead to hot shop and cold shop. Date: August 20, 1954. INEEL negative no. 11706 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
ERIC Educational Resources Information Center
Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca
2011-01-01
Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…
Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology
Fernandez-Mendez, Felipe; Barcala-Furelos, Roberto; Padron-Cabo, Alexis; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio
2017-01-01
An anaphylactic shock is a time-critical emergency situation. Decision-making during emergencies is an important responsibility but is difficult to study. Eye-tracking technology allows us to identify the visual patterns involved in decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was conducted to evaluate the identification and treatment of anaphylaxis. Fifty subjects were randomly assigned to four groups: three groups watched different training videos whose content was supervised by healthcare personnel, and one control group received face-to-face training during paediatric practice. To evaluate the learning, a simulation scenario with an anaphylaxis victim was designed. A device capturing eye movement, together with expert evaluation, was used to assess performance. The subjects who underwent face-to-face paediatric training achieved better and faster recognition of the anaphylaxis. They also used the adrenaline injector with better precision and fewer mistakes, and they needed fewer visual fixations to recognise the anaphylaxis and to make the decision to inject epinephrine. Analysing the different video formats, mixed results were obtained; the videos should therefore be tested for usability before implementation. PMID:28758128
Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology.
Fernandez-Mendez, Felipe; Saez-Gallego, Nieves Maria; Barcala-Furelos, Roberto; Abelairas-Gomez, Cristian; Padron-Cabo, Alexis; Perez-Ferreiros, Alexandra; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio
2017-01-01
An anaphylactic shock is a time-critical emergency situation. Decision-making during emergencies is an important responsibility but is difficult to study. Eye-tracking technology allows us to identify the visual patterns involved in decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was conducted to evaluate the identification and treatment of anaphylaxis. Fifty subjects were randomly assigned to four groups: three groups watched different training videos whose content was supervised by healthcare personnel, and one control group received face-to-face training during paediatric practice. To evaluate the learning, a simulation scenario with an anaphylaxis victim was designed. A device capturing eye movement, together with expert evaluation, was used to assess performance. The subjects who underwent face-to-face paediatric training achieved better and faster recognition of the anaphylaxis. They also used the adrenaline injector with better precision and fewer mistakes, and they needed fewer visual fixations to recognise the anaphylaxis and to make the decision to inject epinephrine. Analysing the different video formats, mixed results were obtained; the videos should therefore be tested for usability before implementation.
Boker, Steven M.; Cohn, Jeffrey F.; Theobald, Barry-John; Matthews, Iain; Brick, Timothy R.; Spies, Jeffrey R.
2009-01-01
When people speak with one another, they tend to adapt their head movements and facial expressions in response to each other's head movements and facial expressions. We present an experiment in which confederates' head movements and facial expressions were motion tracked during videoconference conversations, an avatar face was reconstructed in real time, and naive participants spoke with the avatar face. No naive participant guessed that the computer-generated face was not video. Confederates' facial expressions, vocal inflections and head movements were attenuated at 1 min intervals in a fully crossed experimental design. Attenuated head movements led to increased head nods and lateral head turns, and attenuated facial expressions led to increased head nodding in both naive participants and confederates. Together, these results are consistent with a hypothesis that the dynamics of head movements in dyadic conversation include a shared equilibrium. Although both conversational partners were blind to the manipulation, when the apparent head movement of one conversant was attenuated, both partners responded by increasing the velocity of their head movements. PMID:19884143
A vision-based approach for tramway rail extraction
NASA Astrophysics Data System (ADS)
Zwemer, Matthijs H.; van de Wouw, Dennis W. J. M.; Jaspers, Egbert; Zinger, Sveta; de With, Peter H. N.
2015-03-01
The growing traffic density in cities fuels the desire for collision assessment systems on public transportation. For this application, video analysis is broadly accepted as a cornerstone. For trams, the localization of tramway tracks is an essential ingredient of such a system, in order to estimate a safety margin for crossing traffic participants. Tramway-track detection is a challenging task due to the urban environment with clutter, sharp curves and occlusions of the track. In this paper, we present a novel and generic system to detect the tramway track in advance of the tram position. The system incorporates an inverse perspective mapping and a-priori geometry knowledge of the rails to find possible track segments. The contribution of this paper involves the creation of a new track reconstruction algorithm which is based on graph theory. To this end, we define track segments as vertices in a graph, in which edges represent feasible connections. This graph is then converted to a max-cost arborescence graph, and the best path is selected according to its location and additional temporal information based on a maximum a-posteriori estimate. The proposed system clearly outperforms a railway-track detector. Furthermore, the system performance is validated on 3,600 manually annotated frames. The obtained results are promising, where straight tracks are found in more than 90% of the images and complete curves are still detected in 35% of the cases.
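The graph-based reconstruction can be pictured with a toy example: track segments become vertices, feasible connections become weighted directed edges, and a maximum-weight arborescence is extracted as the reconstructed path. The sketch below uses networkx for this step; the segment names and weights are illustrative assumptions, and the paper's inverse perspective mapping and temporal scoring are not shown.

```python
import networkx as nx

# Vertices are candidate track segments; edge weights score the plausibility
# of connecting two segments (illustrative values).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("seg0", "seg1", 0.9),
    ("seg0", "seg2", 0.4),
    ("seg1", "seg3", 0.8),
    ("seg2", "seg3", 0.3),
])

# Maximum-weight spanning arborescence: every segment reached once from a single root.
arb = nx.maximum_spanning_arborescence(G, attr="weight")
print(sorted(arb.edges(data="weight")))
```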
Looser, Christine E; Guntupalli, Jyothi S; Wheatley, Thalia
2013-10-01
More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. Here, we show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog) while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.
NASA Astrophysics Data System (ADS)
Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi
When surveillance cameras are used, there are cases in which privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method combines ROI coding of JPEG2000 with a face detection method based on template matching. Experimental results show that the face region can be detected and hidden correctly.
A Parallel Finite Set Statistical Simulator for Multi-Target Detection and Tracking
NASA Astrophysics Data System (ADS)
Hussein, I.; MacMillan, R.
2014-09-01
Finite Set Statistics (FISST) is a powerful Bayesian inference tool for the joint detection, classification and tracking of multi-target environments. FISST is capable of handling phenomena such as clutter, misdetections, and target birth and decay. Implicit within the approach are solutions to the data association and target label-tracking problems. Finally, FISST provides generalized information measures that can be used for sensor allocation across different types of tasks such as: searching for new targets, and classification and tracking of known targets. These FISST capabilities have been demonstrated on several small-scale illustrative examples. However, for implementation in a large-scale system as in the Space Situational Awareness problem, these capabilities require a lot of computational power. In this paper, we implement FISST in a parallel environment for the joint detection and tracking of multi-target systems. In this implementation, false alarms and misdetections will be modeled. Target birth and decay will not be modeled in the present paper. We will demonstrate the success of the method for as many targets as we possibly can in a desktop parallel environment. Performance measures will include: number of targets in the simulation, certainty of detected target tracks, computational time as a function of clutter returns and number of targets, among other factors.
Surgical tool detection and tracking in retinal microsurgery
NASA Astrophysics Data System (ADS)
Alsheakhali, Mohamed; Yigitsoy, Mehmet; Eslami, Abouzar; Navab, Nassir
2015-03-01
Visual tracking of surgical instruments is an essential part of eye surgery: it plays an important role for the surgeon and is a key component of robotic assistance during the operation. The difficulty of detecting and tracking medical instruments in in-vivo images comes from their deformable shape, changes in brightness, and the presence of the instrument shadow. This paper introduces a new approach to detecting the tip of a surgical tool and its width regardless of its head shape and the presence of shadows or vessels. The approach relies on integrating structural information about the strong edges from the RGB color model with tool location-based information from the L*a*b color model. The probabilistic Hough transform is applied to obtain the strongest straight lines in the RGB images and, based on information from the L* and a* channels, one of these candidate lines is selected as the edge of the tool shaft. From that line, the tool slope, the tool centerline and the tool tip can be detected. Tracking is performed by keeping track of the last detected tool tip and tool slope, and by filtering the Hough lines within a box around the last detected tool tip based on slope differences. Experimental results demonstrate the high accuracy achieved in terms of detecting the tool tip position, the tool joint point position, and the tool centerline. The approach also meets real-time requirements.
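The line-proposal step can be pictured with a few lines of OpenCV. The sketch below runs the probabilistic Hough transform on an edge map to propose candidate shaft edges; the file name, Canny thresholds and Hough parameters are illustrative assumptions, and the paper's subsequent L*a*b-based line selection is not shown.

```python
import cv2
import numpy as np

frame = cv2.imread("retina_frame.png")                        # hypothetical input frame
edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)

# Probabilistic Hough transform: returns candidate line segments (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)   # visualize candidates
```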
Point Target Detection in IR Image Sequences using Spatio-Temporal Hypotheses Testing
1999-02-01
…incorporate temporal as well as spatial information, they are often referred to as "track-before-detect" algorithms. The standard approach was to pose the… [6, 3]. A drawback of these track-before-detect techniques is that they are very computationally intensive, since the entire 3-D space must be filtered…
A Track Initiation Method for the Underwater Target Tracking Environment
NASA Astrophysics Data System (ADS)
Li, Dong-dong; Lin, Yang; Zhang, Yao
2018-04-01
A novel, efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings: (a) they cannot effectively eliminate the disturbances caused by clutter; (b) they may suffer a high false alarm probability and a low detection probability for a track; and (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks that include the true track originating from the target, in order to increase the detection probability of a track; and, to decrease the false alarm probability, track pruning and track merging based on an evaluation mechanism are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determining the target's existence and estimating its initial state with the least squares method. Moreover, the method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that the new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
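The final least-squares step can be sketched briefly: given the timestamps and positions of the measurements forming a confirmed track, a straight-line fit yields an initial position and velocity. The function below is a minimal illustration under that constant-velocity assumption; it is not the paper's exact estimator, and the names are hypothetical.

```python
import numpy as np

def initial_state(times, positions):
    """Fit position = p0 + v * t by least squares; returns (p0, v) per coordinate."""
    t = np.asarray(times, dtype=float)
    A = np.column_stack([np.ones_like(t), t])   # design matrix [1, t]
    coef, *_ = np.linalg.lstsq(A, np.asarray(positions, dtype=float), rcond=None)
    return coef[0], coef[1]                     # row 0 = initial position, row 1 = velocity
```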
Online two-stage association method for robust multiple people tracking
NASA Astrophysics Data System (ADS)
Lv, Jingqin; Fang, Jiangxiong; Yang, Jie
2011-07-01
Robust multiple-people tracking is very important for many applications, and it is a challenging problem due to occlusion and interaction in crowded scenarios. This paper proposes an online two-stage association method for robust multiple-people tracking. In the first stage, short tracklets generated by linking people-detection responses are grown longer by particle filter based tracking, with detection confidence embedded in the observation model; an examination scheme runs at each frame to check the reliability of tracking. In the second stage, multiple-people tracking is achieved by linking tracklets to generate trajectories. An online tracklet association method is proposed to solve the linking problem, which allows applications in time-critical scenarios. The method is evaluated on the popular CAVIAR dataset, and the experimental results show that our two-stage method is robust.
Sensor for detecting and differentiating chemical analytes
Yi, Dechang [Metuchen, NJ; Senesac, Lawrence R [Knoxville, TN; Thundat, Thomas G [Knoxville, TN
2011-07-05
A sensor for detecting and differentiating chemical analytes includes a microscale body having a first end and a second end and a surface between the ends for adsorbing a chemical analyte. The surface includes at least one conductive heating track for heating the chemical analyte and also a conductive response track, which is electrically isolated from the heating track, for producing a thermal response signal from the chemical analyte. The heating track is electrically connected with a voltage source and the response track is electrically connected with a signal recorder. The microscale body is restrained at the first end and the second end and is substantially isolated from its surroundings therebetween, thus having a bridge configuration.
Current status and prospects of nuclear physics research based on tracking techniques
NASA Astrophysics Data System (ADS)
Alekseev, V. A.; Alexandrov, A. B.; Bagulya, A. V.; Chernyavskiy, M. M.; Goncharova, L. A.; Gorbunov, S. A.; Kalinina, G. V.; Konovalova, N. S.; Okatyeva, N. M.; Pavlova, T. A.; Polukhina, N. G.; Shchedrina, T. V.; Starkov, N. I.; Tioukov, V. E.; Vladymirov, M. S.; Volkov, A. E.
2017-01-01
Results of nuclear physics research made using track detectors are briefly reviewed. Advantages and prospects of the track detection technique in particle physics, neutrino physics, astrophysics and other fields are discussed using as examples the search for direct appearance of tau neutrinos in a muon neutrino beam within the international OPERA experiment (Oscillation Project with Emulsion-tRacking Apparatus) and the search for superheavy nuclei in nature based on their tracks in meteoritic olivine crystals. The spectra of superheavy elements in galactic cosmic rays are presented. Prospects for using the track detection technique in fundamental and applied research are reported.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human subjects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) capturing imagery of the subject's face for biometric purposes, (2) achieving optimal video quality of the human subjects, and (3) minimizing hand-off time. We define an objective function based on expected capture conditions such as camera-subject distance, pan-tilt angles at capture, face visibility and other factors. This objective function serves to balance effectively the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
Vegetation associated with different walking track types in the Kosciuszko alpine area, Australia.
Hill, Wendy; Pickering, Catherine Marina
2006-01-01
Tourism infrastructure such as walking tracks can have negative effects on vegetation including in mountain regions. In the alpine area around continental Australia's highest mountain, Mt Kosciuszko (2228 m), there is a range of walking tracks (paved, gravel and raised steel mesh surfaces) in addition to an extensive network of informal/non-hardened tracks. Vegetation characteristics were compared between track types on/under tracks, on the track verge, and in the adjacent native vegetation. For a raised steel mesh walkway there was no difference in vegetation under the walkway, on the verge, and 3m away. In contrast, for a non-hardened track there was 35% bare ground on the track surface but no other detectable impacts. Gravel and paved tracks had distinct verges largely comprising bare ground and exotic species. For non-hardened tracks there was an estimated 270 m2 of disturbance per km of track. For wide gravel tracks the combined area of bare ground, exotic plants and gravel was estimated as 4290 m2 per km, while for narrow gravel tracks it was estimated as 2940 m2 per km. For paved tracks there was around 2680 m2 per km of damage. In contrast, there was no detectable effect of raised steel mesh walkway on vegetation highlighting some of the benefits of this surface over other track types.
Differential emotion attribution to neutral faces of own and other races.
Hu, Chao S; Wang, Qiandong; Han, Tong; Weare, Ethan; Fu, Genyue
2017-02-01
Past research has demonstrated differential recognition of emotion on faces of different races. This paper reports the first study to explore differential emotion attribution to neutral faces of different races. Chinese and Caucasian adults viewed a series of Chinese and Caucasian neutral faces and judged their outward facial expression: neutral, positive, or negative. The results showed that both Chinese and Caucasian viewers perceived more Chinese faces than Caucasian faces as neutral. Nevertheless, Chinese viewers attributed positive emotion to Caucasian faces more than to Chinese faces, whereas Caucasian viewers attributed negative emotion to Caucasian faces more than to Chinese faces. Moreover, Chinese viewers attributed negative and neutral emotion to the faces of both races without significant difference in frequency, whereas Caucasian viewers mostly attributed neutral emotion to the faces. These differences between Chinese and Caucasian viewers may be due to differential visual experience, culture, racial stereotype, or expectation of the experiment. We also used eye tracking among the Chinese participants to explore the relationship between face-processing strategy and emotion attribution to neutral faces. The results showed that the interaction between emotion attribution and face race was significant on face-processing strategy, such as fixation proportion on eyes and saccade amplitude. Additionally, pupil size during processing Caucasian faces was larger than during processing Chinese faces.
Multithreaded hybrid feature tracking for markerless augmented reality.
Lee, Taehee; Höllerer, Tobias
2009-01-01
We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
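The frame-to-frame feature tracking step described in the abstract corresponds to classic KLT optical flow. The sketch below detects corners in one frame and tracks them into the next with OpenCV; the file names, detector choice and parameters are illustrative assumptions, not the authors' exact configuration.

```python
import cv2

prev_gray = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2GRAY)  # hypothetical frames
next_gray = cv2.cvtColor(cv2.imread("frame_001.png"), cv2.COLOR_BGR2GRAY)

# Detect distinctive corner features, then track them with pyramidal Lucas-Kanade flow
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=7)
new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                                winSize=(21, 21), maxLevel=3)
tracked = new_pts[status.ravel() == 1]   # features successfully carried into the new frame
print(len(tracked), "features tracked")
```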
Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search
ERIC Educational Resources Information Center
Calvo, Manuel G.; Nummenmaa, Lauri
2008-01-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…
Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil
2010-01-01
We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values; the sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned from implementing the approach.
Track-monitoring from the dynamic response of an operational train
NASA Astrophysics Data System (ADS)
Lederman, George; Chen, Siheng; Garrett, James; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo
2017-03-01
We explore a data-driven approach for monitoring rail infrastructure from the dynamic response of a train in revenue-service. Presently, track inspection is performed either visually or with dedicated track geometry cars. In this study, we examine a more economical approach where track inspection is performed by analyzing vibration data collected from an operational passenger train. The high frequency with which passenger trains travel each section of track means that faults can be detected sooner than with dedicated inspection vehicles, and the large number of passes over each section of track makes a data-driven approach statistically feasible. We have deployed a test-system on a light-rail vehicle and have been collecting data for the past two years. The collected data underscores two of the main challenges that arise in train-based track monitoring: the speed of the train at a given location varies from pass to pass and the position of the train is not known precisely. In this study, we explore which feature representations of the data best characterize the state of the tracks despite these sources of uncertainty (i.e., in the spatial domain or frequency domain), and we examine how consistently change detection approaches can identify track changes from the data. We show the accuracy of these different representations, or features, and different change detection approaches on two types of track changes, track replacement and tamping (a maintenance procedure to improve track geometry), and two types of data, simulated data and operational data from our test-system. The sensing, signal processing, and data analysis we propose in the study could facilitate safer trains and more cost-efficient maintenance in the future. Moreover, the proposed approach is quite general and could be extended to other parts of the infrastructure, including bridges.
Barlow, Ingrid G; Liu, Lili; Sekulic, Angela
2009-01-01
This study compared outcomes of wheelchair seating and positioning interventions provided by telerehabilitation (n=10) and face-to-face (n=20; 10 in each of two comparison groups, one urban and one rural). Comparison clients were matched to the telerehabilitation clients in age, diagnosis, and type of seating components received. Clients and referring therapists rated their satisfaction and identified if seating intervention goals were met. Clients recorded travel expenses incurred or saved, and all therapists recorded time spent providing service. Wait times and completion times were tracked. Clients seen by telerehabilitation had similar satisfaction ratings and were as likely to have their goals met as clients seen face-to-face; telerehabilitation clients saved travel costs. Rural referring therapists who used telerehabilitation spent more time in preparation and follow-up than the other groups. Clients assessed by telerehabilitation had shorter wait times for assessment than rural face-to-face clients, but their interventions took as long to complete. PMID:25945159
NASA Technical Reports Server (NTRS)
Lovelady, R. W.; Ferguson, R. L.
1975-01-01
Self-powered sonar device may be implanted in body of fish. It transmits signal that can be detected with portable tracking gear or by automatic detection-and-tracking system. Operating life of over 4000 hours may be expected. Device itself may be used almost indefinitely.
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method that combines particle filters with particle swarm optimization. The method mainly uses particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model with a particle swarm optimization method. The particle filter can effectively perform lane detection and tracking in complicated or variable lane environments; however, the result obtained is usually a locally optimal system state rather than the globally optimal one. Thus, the particle swarm optimization method is used to further refine the globally optimal system state among all system states. Since particle swarm optimization is a global optimization algorithm based on iterative computation, it can find the globally optimal lane model by simulating the food-finding behaviour of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads, as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453
Right wing authoritarianism is associated with race bias in face detection
Bret, Amélie; Beffara, Brice; McFadyen, Jessica; Mermillod, Martial
2017-01-01
Racial discrimination can be observed in a wide range of psychological processes, including even the earliest phases of face detection. It remains unclear, however, whether racially biased low-level face processing is influenced by ideologies such as right-wing authoritarianism or social dominance orientation. In the current study, we hypothesized that such socio-political ideologies can substantially predict perceptual racial bias during early perception. To test this hypothesis, 67 participants detected faces within arrays of neutral objects. The faces were either Caucasian (in-group) or North African (out-group) and had either a neutral or an angry expression. Results showed that participants with higher self-reported right-wing authoritarianism were more likely to show slower response times for detecting out-group relative to in-group faces. We interpret our results according to the Dual Process Motivational Model and suggest that socio-political ideologies may foster early racial bias via attentional disengagement. PMID:28692705
Facial detection using deep learning
NASA Astrophysics Data System (ADS)
Sharma, Manik; Anuradha, J.; Manne, H. K.; Kashyap, G. S. C.
2017-11-01
In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their name; now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is about as good as humans can do. This technology is called face detection. Face detection is a popular topic in biometrics, and surveillance cameras in public places are used for video capture as well as security purposes. The main advantages of this approach over others are uniqueness and approval, and both speed and accuracy are needed for identification. Face detection is really a series of several related problems: first, look at a picture and find all the faces in it; second, focus on each face and understand that even if the face is turned in an odd direction or seen in bad lighting, it is still the same person; third, select features that can be used to identify each face uniquely, such as the size of the eyes, the shape of the face, etc.; finally, compare these features to the data we have in order to find the person's name. As humans, our brains are wired to do all of this automatically and instantly; in fact, humans are exceptionally good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them how to do each step of the process separately. The growth of face detection is largely driven by growing applications such as credit card verification, surveillance video images, and authentication for banking and security system access.
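For the first step of the pipeline described above (finding all faces in a picture), a minimal sketch using OpenCV's bundled Haar cascade detector is shown below. The image path is hypothetical, and this classical detector merely stands in for the deep-learning detectors the abstract discusses.

```python
import cv2

# Load the frontal-face Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")                         # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)   # draw a box per detected face
print(f"found {len(faces)} face(s)")
```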
NASA Astrophysics Data System (ADS)
Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing
2008-02-01
Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted in two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing manipulative factors such as image scaling and frame skipping.
Deficient cortical face-sensitive N170 responses and basic visual processing in schizophrenia.
Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y
2016-01-01
Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder. Copyright © 2015 Elsevier B.V. All rights reserved.
Real-time moving objects detection and tracking from airborne infrared camera
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2017-10-01
Detecting and tracking moving objects in real-time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such a potential, versatile solutions are needed, but, in the literature, the majority of them works only under specific conditions about the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS), mounted on the aircraft. To exploit jointly the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame. The detection stage, in which a coarse detection map is computed, using a local statistic both fast to calculate and robust to noise and self-deletion of the targeted objects. The registration stage, in which the position of the detected objects is coherently reported on a common reference frame, by exploiting the INS data. The tracking stage, in which the steady objects are rejected, the moving objects are tracked, and an estimation of their future position is computed, to be used in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences, recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of position and velocity of the objects. In addition, for each frame, the detection and tracking map has been generated by the algorithm, before the acquisition of the subsequent frame, proving its capability to work in real-time.
Real-time automatic fiducial marker tracking in low contrast cine-MV images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang
2013-01-15
Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images, therefore a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match an object shape that changes significantly for different implantations and projection angles. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computation load because they all require exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The standard deviations of the results from the six researchers are 2.3 and 2.6 pixels. The proposed framework takes about 128 ms to detect four markers in the first MV image and about 23 ms to track these markers in each of the subsequent images. Conclusions: The unified framework for tracking of multiple markers presented here can achieve marker detection accuracy similar to manual detection even in low-contrast cine-MV images. It can cope with shape deformations of fiducial markers at different gantry angles. The fast processing speed reduces the image processing portion of the system latency and can therefore improve the performance of real-time motion compensation.
Dose-equivalent neutron dosimeter
Griffith, R.V.; Hankins, D.E.; Tomasino, L.; Gomaa, M.A.M.
1981-01-07
A neutron dosimeter is disclosed which provides a single measurement indicating the amount of potential biological damage resulting from the neutron exposure of the wearer, for a wide range of neutron energies. The dosimeter includes a detecting sheet of track-etch detecting material, such as a carbonate plastic, for detecting higher-energy neutrons, and a radiator layer containing conversion material such as ⁶Li and ¹⁰B lying adjacent to the detecting sheet for converting moderate-energy neutrons to alpha particles that produce tracks in the adjacent detecting sheet.
Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G
2017-08-01
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.
Metallurgical Examination of Failed T-158 Cast Austempered Ductile Iron (CADI) Track Shoes
1994-06-01
Hardness testing, fracture toughness testing and Charpy impact testing were performed. In each case, the largest possible specimens were fabricated… However, due to geometrical restrictions, the tensile, fracture toughness and impact specimens were subsized. Tensile coupons were… at 500°F for 4 hours. [Figure residue: fractograph at 1000x magnification; fracture faces A and C near the bolt holes; specimen locations marked T = tensile, FT = fracture toughness, NC = notched Charpy impact, UN…]
Lidar-based wake tracking for closed-loop wind farm control
NASA Astrophysics Data System (ADS)
Raach, Steffen; Schlipf, David; Cheng, Po Wen
2016-09-01
This work presents two advancements towards closed-loop wake redirecting of a wind turbine. First, a model-based estimation approach is presented which uses a nacelle-based lidar system facing downwind to obtain information about the wake. A reduced order wake model is described which is then used in the estimation to track the wake. The tracking is demonstrated with lidar measurement data from an offshore campaign and with simulated lidar data from a SOWFA simulation. Second, a controller for closed-loop wake steering is presented. It uses the wake tracking information to set the yaw actuator of the wind turbine to redirect the wake to a desired position. Altogether, this paper aims to present the concept of closed-loop wake redirecting and gives a possible solution to it.
Efficient live face detection to counter spoof attack in face recognition systems
NASA Astrophysics Data System (ADS)
Biswas, Bikram Kumar; Alam, Mohammad S.
2015-03-01
Face recognition is a critical tool used in almost all major biometrics based security systems. But recognition, authentication and liveness detection of the face of an actual user is a major challenge because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed which detects liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluation of energies of selective high frequency bands of average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
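The spectral-energy comparison can be illustrated loosely as follows: take the 3D FFT of a short face clip and measure how much energy lies in the higher-frequency bands. The band limit, function name and decision rule below are assumptions for the sketch, not the paper's values, and the fringe-adjusted joint transform correlation step is not shown.

```python
import numpy as np

def high_freq_energy_ratio(face_stack, low_cut=0.25):
    """face_stack: (frames, H, W) grayscale clip; returns the fraction of spectral
    energy above the given normalized radial frequency."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fftn(face_stack))) ** 2
    f, h, w = spectrum.shape
    zz, yy, xx = np.meshgrid(np.linspace(-1, 1, f), np.linspace(-1, 1, h),
                             np.linspace(-1, 1, w), indexing="ij")
    radius = np.sqrt(xx**2 + yy**2 + zz**2)
    return spectrum[radius > low_cut].sum() / spectrum.sum()

# A live (moving, textured) face is expected to yield a different ratio than a
# printed or replayed spoof; the decision threshold would be learned from data.
```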
NASA Astrophysics Data System (ADS)
Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan
2007-11-01
Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical property of natural images, so its detection performance is not satisfying. PCA has been extended into kernel PCA in order to capture the higher-order statistics. However, thus far there have been no researchers who have definitely proposed kernel FKT (KFKT) and researched its detection performance. For accurately detecting potential small targets from infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Results of experiments show that KFKT outperforms FKT and the proposed framework is competent to automatically detect and track infrared point targets.
A real-time tracking system of infrared dim and small target based on FPGA and DSP
NASA Astrophysics Data System (ADS)
Rong, Sheng-hui; Zhou, Hui-xin; Qin, Han-lin; Wang, Bing-jian; Qian, Kun
2014-11-01
A core technology in infrared warning systems is the detection and tracking of dim, small targets against complicated backgrounds; consequently, running the detection algorithm on a hardware platform has high practical value in the military field. In this paper, a real-time detection and tracking system for infrared dim and small targets, with an FPGA (Field Programmable Gate Array) and a DSP (Digital Signal Processor) at its core, is designed, and the corresponding detection and tracking algorithm and signal flow are elaborated. In the first stage, the FPGA obtains the infrared image sequence from the sensor, suppresses background clutter with a mathematical morphology method, and enhances the target intensity with a Laplacian-of-Gaussian operator. In the second stage, the DSP obtains both the original image and the filtered image from the FPGA via the video port, segments the target from the filtered image with an adaptive threshold segmentation method, and rejects false targets with a pipeline filter. Experimental results show that our system achieves a higher detection rate and a lower false alarm rate.
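A software analogue of the FPGA pre-processing stage (background suppression followed by enhancement and an adaptive threshold) can be sketched as follows; the kernel sizes, sigma and threshold rule are illustrative assumptions, and the file name is hypothetical.

```python
import cv2
import numpy as np

frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Morphological white top-hat removes slowly varying background clutter
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
tophat = cv2.morphologyEx(frame, cv2.MORPH_TOPHAT, kernel)

# LoG-style enhancement: Gaussian smoothing followed by a Laplacian response
blurred = cv2.GaussianBlur(tophat, (5, 5), sigmaX=1.0)
log_response = np.abs(cv2.Laplacian(blurred, cv2.CV_32F, ksize=3))

# Adaptive threshold from the response statistics
mean, std = log_response.mean(), log_response.std()
targets = log_response > mean + 4.0 * std
print("candidate target pixels:", int(targets.sum()))
```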
Ong, Lee-Ling S; Xinghua Zhang; Kundukad, Binu; Dauwels, Justin; Doyle, Patrick; Asada, H Harry
2016-08-01
An approach to automatically detecting bacterial division with temporal models is presented. To understand how bacteria migrate and proliferate to form complex multicellular behaviours such as biofilms, it is desirable to track individual bacteria and detect cell division events. Unlike eukaryotic cells, prokaryotic cells such as bacteria lack distinctive features, making bacterial division difficult to detect in a single image frame. Furthermore, bacteria may detach, migrate close to other bacteria and orientate themselves at an angle to the horizontal plane. Our system trains a hidden conditional random field (HCRF) model on tracked and aligned bacterial division sequences; the HCRF model classifies a set of image frames as division or otherwise. The performance of the HCRF model is compared with that of a hidden Markov model (HMM), and the results show that the HCRF classifier outperforms the HMM classifier. From 2D bright-field microscopy data, it is a challenge to separate individual bacteria and associate observations with tracks; automatic detection of sequences containing bacterial division will improve tracking accuracy.
NASA Astrophysics Data System (ADS)
Laurenzis, Martin; Hengy, Sebastien; Hommes, Alexander; Kloeppel, Frank; Shoykhetbrod, Alex; Geibig, Thomas; Johannes, Winfried; Naz, Pierre; Christnacher, Frank
2017-05-01
Small unmanned aerial vehicles (UAVs) flying at low altitude are becoming more and more of a serious threat in civilian and military scenarios. In the recent past, numerous incidents have been reported in which small UAVs flew into security areas, posing serious danger to public safety or privacy. The detection and tracking of small UAVs is a widely discussed topic; in particular, small UAVs flying at low altitude in urban environments or near background structures, and the detection of multiple UAVs at the same time, are challenging. Field trials were carried out to investigate the detection and tracking of multiple UAVs flying at low altitude with state-of-the-art detection technologies. Here, we present results achieved using a heterogeneous sensor network consisting of acoustic antennas, small frequency modulated continuous wave (FMCW) RADAR systems and optical sensors. While acoustics, RADAR and LiDAR were applied to monitor a wide azimuthal area (360°) and to simultaneously track multiple UAVs, optical sensors were used for sequential identification with a very narrow field of view.
Eye Tracking and Head Movement Detection: A State-of-Art Survey
2013-01-01
Eye-gaze detection and tracking have been an active research field in the past years as it adds convenience to a variety of applications. It is considered a significant untraditional method of human computer interaction. Head movement detection has also received researchers' attention and interest as it has been found to be a simple and effective interaction method. Both technologies are considered the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-art survey for eye tracking and head movement detection methods proposed in the literature. Examples of different fields of applications for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies are also investigated. PMID:27170851
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying
2014-07-01
Infrared moving target detection is an important part of infrared technology. We introduce a novel infrared small moving target detection method based on tracking interest points under complicated backgrounds. First, Difference of Gaussians (DoG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the Human Visual System (HVS) is used to track these interest points over several frames, and the correlations between interest points in the first frame and the last frame are obtained. Last, a new clustering method named R-means is proposed to divide these interest points into two groups according to the correlations: target points and background points. In the experiments, the target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed to compare the performance of the proposed method with five other sophisticated methods. The results show that the proposed method discriminates targets from clutter better and has a lower false alarm rate than the existing moving target detection methods.
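To make the first stage concrete, the sketch below computes a Difference-of-Gaussians response and keeps local maxima above a threshold as candidate interest points. The sigma pair, neighbourhood size and threshold are illustrative assumptions, not values from the paper.

```python
# Illustrative Difference-of-Gaussians (DoG) interest point detector,
# corresponding to the first stage described in the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_interest_points(frame, sigma1=1.0, sigma2=2.0, thresh=10.0):
    """Return (row, col) coordinates of local DoG maxima above a threshold."""
    img = frame.astype(float)
    dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
    peaks = (dog == maximum_filter(dog, size=5)) & (dog > thresh)
    return np.argwhere(peaks)
```

The subsequent stages (HVS-inspired tracking and R-means clustering of point correlations) then operate on the coordinates this detector returns.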
Feathered Detectives: Real-Time GPS Tracking of Scavenging Gulls Pinpoints Illegal Waste Dumping.
Navarro, Joan; Grémillet, David; Afán, Isabel; Ramírez, Francisco; Bouten, Willem; Forero, Manuela G
2016-01-01
Urban waste impacts human and environmental health, and waste management has become one of the major challenges of humanity. Concurrently with new directives aimed at managing this human by-product, illegal dumping has become one of the most lucrative activities of organized crime. Beyond economic fraud, illegal waste disposal strongly enhances uncontrolled dissemination of human pathogens, pollutants and invasive species. Here, we demonstrate the potential of novel real-time GPS tracking of scavenging species to detect environmental crime. Specifically, we were able to detect illegal activities at an officially closed dump, which was visited recurrently by 5 of 19 GPS-tracked yellow-legged gulls (Larus michahellis). In comparison with conventional land-based surveys, GPS tracking allows a much wider and cost-efficient spatiotemporal coverage, even of the most hazardous sites, while GPS data accessibility through the internet enables rapid intervention. Our results suggest that multi-species guilds of feathered detectives equipped with GPS and cameras could help fight illegal dumping at continental scales. We encourage further experimental studies to infer waste detection thresholds in gulls and other scavenging species exploiting human waste dumps.
The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection
NASA Astrophysics Data System (ADS)
Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian
2010-01-01
Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights, as well as different objects such as bags and rearward/forward-facing child restraint systems.
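For reference, the Viola-Jones detector named above is available off the shelf in OpenCV; a minimal sketch is shown below. The cascade file is the one shipped with OpenCV, and the histogram equalization step is an assumption added here to cope with non-uniform cabin illumination, not a detail from the paper's setup.

```python
# Minimal Viola-Jones face detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray_frame):
    """gray_frame: 8-bit single-channel image; returns (x, y, w, h) boxes."""
    gray_frame = cv2.equalizeHist(gray_frame)  # crude lighting normalization
    return cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
```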
The relationship between visual search and categorization of own- and other-age faces.
Craig, Belinda M; Lipp, Ottmar V
2018-03-13
Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in Experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target-absent trials but not target-present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.
Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
2009-01-01
Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas children with ASD (n = 16) were equally fast in detecting changes in faces and objects. These results were replicated in Experiment 2 (n = 16 in children with ASD and 22 in typically developing children), which does not require face recognition skill. Results suggest that children with ASD lack an attentional bias toward others' faces, which could contribute to their atypical social orienting.
Improving Lab Sample Management - POS/MCEARD
"Scientists face increasing challenges in managing their laboratory samples, including long-term storage of legacy samples, tracking multiple aliquots of samples for many experiments, and linking metadata to these samples. Other factors complicating sample management include the...
Interior view of main entry on south elevation, showing railroad ...
Interior view of main entry on south elevation, showing railroad tracks; camera facing south. - Mare Island Naval Shipyard, Boiler Shop, Waterfront Avenue, west side between A Street & Third Street, Vallejo, Solano County, CA
Interior view of main entry on south elevation, showing railroad ...
Interior view of main entry on south elevation, showing railroad tracks; camera facing south. - Mare Island Naval Shipyard, Machine Shop, Waterfront Avenue, west side between A Street & Third Street, Vallejo, Solano County, CA
Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik
2017-01-01
This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684
Selka, F; Nicolau, S; Agnus, V; Bessaid, A; Marescaux, J; Soler, L
2015-03-01
In minimally invasive surgery, the tracking of deformable tissue is a critical component of image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color, ...). Recent work in this field has shown success in acquiring tissue motion. However, the performance evaluation of detection and tracking algorithms on such images is still difficult and not standardized, mainly due to the lack of ground truth for real data. Moreover, in order to avoid supplementary techniques to remove outliers, no quantitative work has been undertaken to evaluate the benefit of a pre-processing step based on image filtering, which can improve feature tracking robustness. In this paper, we propose a methodology to validate detection and feature tracking algorithms, using a trick based on forward-backward tracking that provides artificial ground truth data. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework to propose a strategy for identifying the best combinations from a set of detector, tracker and pre-processing algorithms, according to the live intra-operative data. Experiments have been performed on in vivo datasets and show that pre-processing can have a strong influence on tracking performance and that our strategy for finding the best combinations is relevant for a reasonable computation cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
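The forward-backward idea can be sketched in a few lines: track points from one frame to the next and back again, then treat the round-trip discrepancy as a self-generated reference for rejecting unreliable features. The pyramidal Lucas-Kanade tracker below is an illustrative stand-in; the paper evaluates several detector/tracker combinations rather than this specific one.

```python
# Forward-backward consistency check with OpenCV's pyramidal Lucas-Kanade.
import numpy as np
import cv2

def forward_backward_error(img_a, img_b, pts_a):
    """pts_a: Nx1x2 float32 feature locations in img_a (8-bit gray images)."""
    pts_b, st1, _ = cv2.calcOpticalFlowPyrLK(img_a, img_b, pts_a, None)
    pts_a_back, st2, _ = cv2.calcOpticalFlowPyrLK(img_b, img_a, pts_b, None)
    err = np.linalg.norm(pts_a - pts_a_back, axis=2).ravel()  # round-trip distance
    valid = (st1.ravel() == 1) & (st2.ravel() == 1)
    return err, valid
```

Features whose round-trip error exceeds a chosen tolerance would be treated as tracking failures when scoring a detector/tracker combination.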
Thompson, Laura A; Malloy, Daniel M; Cone, John M; Hendrickson, David L
2010-01-01
We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker's face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants perform a secondary light-detection task. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods.
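In its simplest form, the signal detection analysis referred to above reduces to computing a sensitivity index per face region from hit and false-alarm rates. The sketch below is a generic d-prime computation with a standard correction for extreme rates; it is an illustrative assumption rather than the authors' exact analysis.

```python
# Generic d-prime from counts of hits, misses, false alarms and correct rejections.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # clamp rates away from 0 and 1 to avoid infinite z-scores
    hr = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    far = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    return norm.ppf(hr) - norm.ppf(far)
```

Comparing d-prime across the four LED locations then indicates which face regions received the greatest attentional focus.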
NASA Astrophysics Data System (ADS)
Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu
2015-08-01
To address the nonlinear and non-Gaussian characteristics of real infrared scenes, an optimal nonlinear filtering based algorithm for infrared dim target track-before-detect applications is proposed. It uses nonlinear theory to construct the state and observation models and uses a Wiener chaos expansion method with a spectral separation scheme to solve the stochastic differential equations of the constructed models. To improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are processed before the observations arrive; the remaining fast, observation-dependent computations are performed afterwards. Simulation results show that the algorithm possesses excellent detection performance and is well suited to real-time processing.
Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data
NASA Astrophysics Data System (ADS)
Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas
2016-06-01
Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan their surroundings repetitively at a high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, Nearest-point and Max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. Pedestrian detection and tracking then amounts to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, trajectory number, pedestrian shape and motion. A low-energy trajectory explains the point observations well and has a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association over the whole time span. Results demonstrate that the proposed method can automatically recover pedestrian trajectories with accurate positions and few false detections and mismatches.
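To illustrate only the (x, y, t) bookkeeping that such an assignment involves, the sketch below links detections greedily to the nearest trajectory from the previous time step within a gating radius. This is deliberately much simpler than the paper's global energy minimization over point evidence, shape and motion; the gate value is an arbitrary assumption.

```python
# Deliberately simplified greedy space-time linker (not the paper's method).
import math

def link_points(frames, gate=0.6):
    """frames: list of per-time-step lists of (x, y) detections, in metres."""
    tracks = []                                   # each track: list of (x, y, t)
    for t, points in enumerate(frames):
        for (x, y) in points:
            best, best_d = None, gate
            for tr in tracks:
                px, py, pt = tr[-1]
                if pt == t - 1:                   # only extend from the previous frame
                    d = math.hypot(x - px, y - py)
                    if d < best_d:
                        best, best_d = tr, d
            if best is not None:
                best.append((x, y, t))
            else:
                tracks.append([(x, y, t)])        # start a new trajectory
    return tracks
```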
A framework for activity detection in wide-area motion imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D
2009-01-01
Wide-area persistent imaging systems are becoming increasingly cost effective, and large areas of the earth can now be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years significant progress has been made on stabilization, moving object detection and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, tracking performance at this scale is unreliable and the average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e. tracking vehicles from their points of origin to their final destination. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g. multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.
NASA Astrophysics Data System (ADS)
Laurenzis, Martin; Bacher, Emmanuel; Christnacher, Frank
2017-05-01
An increasing number of incidents are reported in which small unmanned aerial vehicles (UAVs) fly at low altitude. UAVs are thus becoming a more and more serious threat in civilian and military scenarios, raising safety and privacy concerns. In this context, the detection and tracking of small UAVs flying at low altitude in urban environments or near background structures is a challenge for state-of-the-art detection technologies. In this paper, we focus on detection, tracking and identification with laser sensing technologies, namely laser gated viewing and scanning LiDAR. The laser reflection cross-section (LRCS) has a direct impact on the detection probability and the range measurement capability. Here, we present methods to determine laser reflection cross-sections by experimental and computational approaches.
Method for tracking the location of mobile agents using stand-off detection technique
Schmitt, Randal L [Tijeras, NM; Bender, Susan Fae Ann [Tijeras, NM; Rodacy, Philip J [Albuquerque, NM; Hargis, Jr., Philip J.; Johnson, Mark S [Albuquerque, NM
2006-12-26
A method for tracking the movement and position of mobile agents using light detection and ranging (LIDAR) as a stand-off optical detection technique. The positions of the agents are tracked by analyzing the time-history of a series of optical measurements made over the field of view of the optical system. This provides a (time+3-D) or (time+2-D) mapping of the location of the mobile agents. Repeated pulses of a laser beam impinge on a mobile agent, such as a bee, and are backscattered from the agent into a LIDAR detection system. Alternatively, the incident laser pulses excite fluorescence or phosphorescence from the agent, which is detected using a LIDAR system. Analysis of the spatial location of signals from the agents produced by repeated pulses generates a multidimensional map of agent location.
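As a toy illustration of how a pulsed return becomes one entry in such a time + 2-D map, the sketch below converts two-way travel time and beam direction into a horizontal position fix. The geometry and constants are generic assumptions, not details from the patent.

```python
# Toy conversion from a LIDAR pulse return to a (time, x, y) record for a
# tracked agent. Real systems also account for sensor position, atmospheric
# effects and pulse shape.
import math

C = 299_792_458.0  # speed of light in m/s

def return_to_fix(t_emit, t_return, azimuth_rad, elevation_rad=0.0):
    rng = C * (t_return - t_emit) / 2.0           # two-way travel time -> range
    x = rng * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = rng * math.cos(elevation_rad) * math.sin(azimuth_rad)
    return (t_emit, x, y)                         # one sample of the agent's track
```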
Target Information Processing: A Joint Decision and Estimation Approach
2012-03-29
ground targets (track-before-detect) using a computer cluster and graphics processing unit. Estimation and filtering theory is one of the most important...
Mochizuki, Yohei; Yoshimatsu, Hiroki; Niina, Ayaka; Teshima, Takahiro; Matsumoto, Hirotaka; Koyama, Hidekazu
2018-01-01
Case summary: A 5-month-old intact female Scottish Fold cat was presented for cardiac evaluation. Careful auscultation detected a slight systolic murmur (Levine I/VI). The findings of electrocardiography, thoracic radiography, non-invasive blood pressure measurements and conventional echocardiographic studies were unremarkable. However, two-dimensional speckle tracking echocardiography revealed abnormalities in myocardial deformations, including decreased early-to-late diastolic strain rate ratios in longitudinal, radial and circumferential directions, and deteriorated segmental systolic longitudinal strain. At the follow-up examinations, the cat exhibited echocardiographic left ventricular hypertrophy and was diagnosed with hypertrophic cardiomyopathy using conventional echocardiography. Relevance and novel information: This is the first report on the use of two-dimensional speckle tracking echocardiography for the early detection of myocardial dysfunction in a cat with hypertrophic cardiomyopathy; the myocardial dysfunction was detected before the development of hypertrophy. The findings from this case suggest that two-dimensional speckle tracking echocardiography can be useful for myocardial assessment when conventional echocardiographic and Doppler findings are ambiguous. PMID:29449957
Contour Tracking with a Spatio-Temporal Intensity Moment.
Demi, Marcello
2016-06-01
Standard edge detection operators such as the Laplacian of Gaussian and the gradient of Gaussian can be used to track contours in image sequences. When using edge operators, a contour determined on one frame of the sequence is simply used as a starting contour to locate the nearest contour on the subsequent frame. However, the strategy of looking for the nearest edge points may not work when tracking contours of non-isolated gray-level discontinuities. In these cases, strategies derived from the optical flow equation, which look for similar gray-level distributions, appear more appropriate, since they can work with a lower frame rate than that needed by strategies based on pure edge detection operators. However, an optical flow strategy tends to propagate localization errors through the sequence, and an additional edge detection procedure is essential to compensate for this drawback. In this paper a spatio-temporal intensity moment is proposed which integrates the two basic functions of edge detection and tracking.
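As a reminder of what the edge-based baselines above compute, the sketch below evaluates a Laplacian-of-Gaussian response and marks sign changes as edge candidates. The sigma value and the simple two-neighbour zero-crossing test are illustrative choices, not parameters taken from the paper.

```python
# Laplacian-of-Gaussian edge candidates via zero crossings of the response.
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(image, sigma=2.0):
    log = gaussian_laplace(image.astype(float), sigma)
    sign = np.sign(log)
    zc = np.zeros(log.shape, dtype=bool)
    # a pixel is an edge candidate if the LoG response changes sign
    # against its right or lower neighbour
    zc[:, :-1] |= (sign[:, :-1] * sign[:, 1:]) < 0
    zc[:-1, :] |= (sign[:-1, :] * sign[1:, :]) < 0
    return zc
```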
Moving object detection and tracking in videos through turbulent medium
NASA Astrophysics Data System (ADS)
Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.
2016-06-01
This paper addresses the problem of identifying and tracking moving objects in a video sequence with a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one, because turbulence causes blurring and spatiotemporal movements of the background images. Our proposed approach involves two major steps. First, a moving object detection algorithm detects real motions by separating out turbulence-induced motions using a two-level thresholding technique. In the second step, a feature-based generalized regression neural network is applied to track the detected objects throughout the frames of the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences, and comparisons with an earlier method confirm that the proposed approach provides more effective tracking of the targets.
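The detection stage can be caricatured as a two-level threshold on frame differences followed by centroid and area extraction per blob, as sketched below. The threshold values and the use of OpenCV connected components are assumptions made for illustration; the paper's actual separation of turbulence-induced motion is more involved.

```python
# Two-level thresholding of frame differences plus centroid/area features.
import cv2
import numpy as np

def detect_moving_blobs(prev_gray, curr_gray, low=10, high=40):
    diff = cv2.absdiff(curr_gray, prev_gray)
    strong = (diff > high).astype(np.uint8)                  # confident real motion
    weak = ((diff > low) & (diff <= high)).astype(np.uint8)  # turbulence-like motion
    n, _, stats, centroids = cv2.connectedComponentsWithStats(strong)
    # skip label 0 (background); return (cx, cy, area) per detected object
    blobs = [(centroids[i][0], centroids[i][1], stats[i, cv2.CC_STAT_AREA])
             for i in range(1, n)]
    return blobs, weak
```

The (cx, cy, area) tuples correspond to the centroid and area features that the tracking network in the second step would consume.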
Scene perception in posterior cortical atrophy: categorization, description and fixation patterns.
Shakespeare, Timothy J; Yong, Keir X X; Frost, Chris; Kim, Lois G; Warrington, Elizabeth K; Crutch, Sebastian J
2013-01-01
Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.
López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław
2017-01-01
The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which combines a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), with nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined, these methods allow automatic coding and classification of behaviors, resulting in a more efficient way of analyzing movement than manual coding.
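A minimal cross-recurrence computation over two 1-D movement series (for example, infant head position and mother hand position, as extracted by the tracker) is sketched below. The z-score normalization and radius are illustrative assumptions; full CRQA additionally derives measures such as determinism from diagonal structures in this matrix.

```python
# Cross-recurrence matrix and recurrence rate for two 1-D movement series.
import numpy as np

def cross_recurrence_rate(x, y, radius=0.1):
    """x, y: 1-D numpy arrays; returns the fraction of recurrent (i, j) pairs."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    crp = np.abs(x[:, None] - y[None, :]) < radius   # cross-recurrence matrix
    return crp.mean()
```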
3D terrain reconstruction using Chang’E-3 PCAM images
NASA Astrophysics Data System (ADS)
Chen, Wangli; Zeng, Xingguo; Zhang, Hongbo
2017-10-01
In order to improve understanding of the topography of the Chang’E-3 landing site, 3D terrain models are reconstructed using PCAM images. PCAM (the panoramic camera) is a stereo camera system with a 27 cm baseline on board the Yutu rover. It obtained panoramic images at four detection sites and can achieve a resolution of 1.48 mm/pixel at 10 m, so the PCAM images reveal fine details of the detection region. In the method, SIFT is employed for feature description and matching. In addition to the collinearity equations, the measured baseline of the stereo system is used in bundle adjustment to solve the orientation parameters of all images. Pair-wise depth map computation is then applied for dense surface reconstruction, and finally a DTM of the detection region is generated. The DTM covers an area with a radius of about 20 m, centred at the location of the camera. Owing to the wheel design, each individual wheel of the Yutu rover leaves three tracks on the lunar surface, the width between the first and third track is 15 cm, and these tracks are clear and distinguishable in the images. We therefore chose the second detection site, where the wheel tracks are most clearly recognizable, to evaluate the accuracy of the DTM. We measured the width of the wheel tracks every 1.5 m from the center of the detection region and obtained 13 measurements, avoiding areas where the wheel tracks are ambiguous. Results show that the mean wheel track width is 0.155 m with a standard deviation of 0.007 m. Generally, the closer to the center, the more accurate the measurement of the wheel track width. This is because image deformation increases with distance from the camera location, which degrades DTM quality in distant areas. In our work, images of the four detection sites are adjusted independently, meaning there are no tie points between different sites, so deviations between the locations of the same object measured from DTMs of adjacent detection sites may exist.
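A back-of-the-envelope stereo relation helps explain why DTM quality drops with distance: for depth Z, baseline B and focal length f (in pixels), Z = f·B / d, so a fixed disparity error produces a depth error that grows roughly with Z². The focal length in the sketch below is an assumed value used only for illustration; only the 27 cm baseline comes from the abstract.

```python
# Illustrative stereo depth sensitivity for a 27 cm baseline camera pair.
def depth_from_disparity(disparity_px, focal_px=4000.0, baseline_m=0.27):
    return focal_px * baseline_m / disparity_px

def depth_error_for_one_pixel(depth_m, focal_px=4000.0, baseline_m=0.27):
    """Depth change caused by a one-pixel disparity error at a given depth."""
    d = focal_px * baseline_m / depth_m
    return depth_m - depth_from_disparity(d + 1.0, focal_px, baseline_m)

# e.g. depth_error_for_one_pixel(10.0) is far larger than
# depth_error_for_one_pixel(2.0), consistent with degraded accuracy far
# from the camera location.
```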