Sample records for face detection system

  1. Real-time detection with AdaBoost-SVM combination in various face orientations

    NASA Astrophysics Data System (ADS)

    Fhonna, R. P.; Nasution, M. K. M.; Tulus

    2018-03-01

    Much previous research has used the AdaBoost-SVM algorithm for face detection. However, to our knowledge, no research has yet performed face detection on real-time data with various orientations using the combination of AdaBoost and Support Vector Machine (SVM). The complex and diverse variations of faces, real-time data in various orientations, and a very complex application all slow down the performance of a face detection system, and this is the challenge addressed in this research. The face orientations handled by the detection system are 90°, 45°, 0°, -45°, and -90°. This combination method is expected to be an effective and efficient solution across these face orientations. The results showed that the highest average detection rate is obtained for faces oriented at 0° and the lowest detection rate for faces oriented at 90°.
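
    The record does not include the AdaBoost-SVM combination in code form; the following is a minimal sketch of one way such a hybrid could be assembled with scikit-learn, assuming precomputed feature vectors for face/non-face windows (feature extraction and orientation handling are omitted, and the placeholder data, the `estimator` keyword of scikit-learn 1.2+, and all hyperparameters are assumptions).

```python
# Minimal sketch: boosting weak SVM classifiers over face/non-face feature
# vectors. Assumes X (n_samples x n_features) and y (0 = non-face, 1 = face)
# have already been produced by some feature-extraction step.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))                  # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# A weak (lightly regularized, linear) SVM is boosted by AdaBoost; the SAMME
# variant only needs hard predictions, so probability=True is not required.
clf = AdaBoostClassifier(
    estimator=SVC(kernel="linear", C=0.1),
    n_estimators=25,
    algorithm="SAMME",
    random_state=0,
)
clf.fit(X_tr, y_tr)
print("held-out detection accuracy:", clf.score(X_te, y_te))
```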

  2. Multiview face detection based on position estimation over multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

    In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back to 3-D space for correspondence. However, the inevitable false face detections and rejections usually degrade the system performance. Instead, our system searches for the heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-process to estimate the locations of candidate targets is presented to speed up the searching process over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction on real video sequences even under serious occlusion.

  3. Greater sensitivity of the cortical face processing system to perceptually-equated face detection

    PubMed Central

    Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.

    2015-01-01

    Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the roles that the cortical face processing system, which is identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA) and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually-equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952

  4. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performances of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights, as well as different objects such as bags and rearward/forward facing child restraint systems.

  5. Face pose tracking using the four-point algorithm

    NASA Astrophysics Data System (ADS)

    Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen

    2017-06-01

    In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
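
    The record does not spell out the four-point computation; the sketch below illustrates the same pipeline shape — dlib face detection, landmark extraction, then pose from four point correspondences — using OpenCV's solvePnP as a stand-in for the paper's four-point algorithm. The landmark model filename, the chosen landmark indices, the 3-D model coordinates, and the nominal camera intrinsics are all assumptions.

```python
# Sketch: face pose from four facial landmarks (dlib detection + OpenCV PnP).
# solvePnP here stands in for the paper's four-point algorithm.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

# Rough 3-D model coordinates (mm) of nose tip, chin, left/right eye corners.
MODEL_POINTS = np.array([
    [0.0,    0.0,    0.0],     # nose tip
    [0.0,  -63.6,  -12.5],     # chin
    [-43.3,  32.7,  -26.0],    # left eye outer corner
    [43.3,   32.7,  -26.0],    # right eye outer corner
], dtype=np.float64)
LANDMARK_IDS = [30, 8, 36, 45]  # matching indices in the 68-point dlib model

def estimate_pose(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    image_points = np.array(
        [[shape.part(i).x, shape.part(i).y] for i in LANDMARK_IDS], dtype=np.float64)

    h, w = gray.shape
    focal = w  # crude focal-length guess for an uncalibrated webcam
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix, None)
    return (rvec, tvec) if ok else None
```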

  6. Face liveness detection for face recognition based on cardiac features of skin color image

    NASA Astrophysics Data System (ADS)

    Suh, Kun Ha; Lee, Eui Chul

    2016-07-01

    With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios in face recognition systems include the printing attack, the replay attack, and the 3D mask attack. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on a cardiac signal extracted from the face is presented. The key point of the proposed method is that the cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way of detecting printing attacks or 3D mask attacks.
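
    The record only states that a cardiac signal is extracted from the face; a common way to approximate this is remote photoplethysmography on the green channel of a face region. The sketch below is a simplification of that idea: it checks whether the per-frame green-channel mean has a dominant frequency in the normal heart-rate band. The frame rate, band limits and decision threshold are assumptions, not values from the paper.

```python
# Sketch: crude rPPG-style liveness check. A stack of face-ROI frames is
# reduced to a green-channel time series; a live face should show a dominant
# spectral peak in the heart-rate band (~0.7-4 Hz), a printed face should not.
import numpy as np

def cardiac_liveness_score(face_rois, fps=30.0):
    """face_rois: sequence of HxWx3 RGB face crops from consecutive frames."""
    signal = np.array([roi[..., 1].mean() for roi in face_rois], dtype=np.float64)
    signal -= signal.mean()                       # remove DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    band = (freqs >= 0.7) & (freqs <= 4.0)        # roughly 42-240 bpm
    if spectrum.sum() == 0 or not band.any():
        return 0.0
    return spectrum[band].max() / spectrum.sum()  # relative strength of cardiac peak

def is_live(face_rois, fps=30.0, threshold=0.15):
    # threshold is an illustrative value, not taken from the paper
    return cardiac_liveness_score(face_rois, fps) > threshold
```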

  7. The processing of social stimuli in early infancy: from faces to biological motion perception.

    PubMed

    Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara

    2011-01-01

    There are several lines of evidence which suggest that, from birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two lines of convergent evidence on face detection and biological motion detection will be presented to demonstrate the innate predispositions of the human system to detect social stimuli at birth. As for face detection, experiments will be presented to demonstrate that, by virtue of nonspecific attentional biases, a very coarse template of faces becomes active at birth. As for biological motion detection, studies will be presented to demonstrate that, from birth, the human system is able to detect social stimuli on the basis of properties such as the presence of a semi-rigid motion named biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization will narrow the system for social stimuli as a function of experience. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Impaired face detection may explain some but not all cases of developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Duchaine, Brad

    2016-05-01

    Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.

  9. Efficient live face detection to counter spoof attack in face recognition systems

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram Kumar; Alam, Mohammad S.

    2015-03-01

    Face recognition is a critical tool used in almost all major biometrics-based security systems. But recognition, authentication and liveness detection of the face of an actual user are major challenges, because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed which detects the liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluation of energies of selective high frequency bands of the average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.

  10. A Smart Spoofing Face Detector by Display Features Analysis.

    PubMed

    Lai, ChinLun; Tai, ChiuYuan

    2016-07-21

    In this paper, a smart face liveness detector is proposed to prevent the biometric system from being "deceived" by a video or picture of a valid user that a counterfeiter took with a high-definition handheld device (e.g., an iPad with retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing action comes from a fake face displayed on a high-definition display by verifying the chromaticity regions in the captured face. That is, a live or spoof face can be distinguished precisely by the designed optical image sensor. To sum up, with the proposed method/system, a normal optical image sensor can be upgraded to a powerful version that detects spoofing actions. The experimental results show that the proposed detection system can achieve a very high detection rate compared to existing methods and is thus practical to implement directly in authentication systems.

  11. Automated night/day standoff detection, tracking, and identification of personnel for installation protection

    NASA Astrophysics Data System (ADS)

    Lemoff, Brian E.; Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; McCormick, William; Ice, Robert

    2013-06-01

    The capability to positively and covertly identify people at a safe distance, 24 hours per day, could provide a valuable advantage in protecting installations, both domestically and in an asymmetric warfare environment. This capability would enable installation security officers to identify known bad actors from a safe distance, even if they are approaching under cover of darkness. We will describe an active-SWIR imaging system being developed to automatically detect, track, and identify people at long range using computer face recognition. The system illuminates the target with an eye-safe and invisible SWIR laser beam, to provide consistent high-resolution imagery night and day. SWIR facial imagery produced by the system is matched against a watch-list of mug shots using computer face recognition algorithms. The current system relies on an operator to point the camera and to review and interpret the face recognition results. Automation software is being developed that will allow the system to be cued to a location by an external system, automatically detect a person, track the person as they move, zoom in on the face, select good facial images, and process the face recognition results, producing alarms and sharing data with other systems when people are detected and identified. Progress on the automation of this system will be presented along with experimental night-time face recognition results at a distance.

  12. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking and in the automatic collection of biometric data, such as a person's face clip, for recognition purposes.

  13. A Method of Face Detection with Bayesian Probability

    NASA Astrophysics Data System (ADS)

    Sarker, Goutam

    2010-10-01

    The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an 'Appearance Based Method', which relies on learning facial and non-facial features from image examples. This in turn is based on a statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and thereby the number of false positive and false negative detections is substantially low.
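
    The record describes a Bayesian conditional classification rule learned from facial and non-facial examples; as a hedged illustration of that appearance-based idea, the sketch below trains a Gaussian naive-Bayes model on flattened grayscale patches and scores a candidate window with the posterior probability of the face class. The patch size and data loading are assumptions, and a naive-Bayes density is only one possible choice of class-conditional model.

```python
# Sketch: appearance-based face/non-face classification with a Bayesian model.
# Windows are flattened 19x19 grayscale patches; P(face | window) comes from
# the learned class-conditional densities via Bayes' rule.
import numpy as np
from sklearn.naive_bayes import GaussianNB

PATCH = 19  # assumed window size (pixels per side)

def train_bayes_detector(face_patches, nonface_patches):
    X = np.vstack([p.reshape(-1) for p in list(face_patches) + list(nonface_patches)])
    y = np.array([1] * len(face_patches) + [0] * len(nonface_patches))
    model = GaussianNB()
    model.fit(X.astype(np.float64) / 255.0, y)
    return model

def face_probability(model, window):
    """window: PATCH x PATCH grayscale array; returns P(face | window)."""
    x = window.reshape(1, -1).astype(np.float64) / 255.0
    return model.predict_proba(x)[0, 1]
```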

  14. Automated facial attendance logger for students

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Kshitish, S.; Kishore, M. R.

    2017-11-01

    For the past two decades, face recognition has served as an essential tool in various spheres of activity. The complete face recognition process is composed of 3 stages: face detection, feature extraction and recognition. In this paper, we make an effort to put forth a new application of face detection and recognition in education. The proposed system scans the classroom, detects the faces of the students in class, matches each detected face with the templates that are available in the database and updates the attendance of the respective students.

  15. A Fuzzy Approach for Facial Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Gîlcă, Gheorghe; Bîzdoacă, Nicu-George

    2015-09-01

    This article deals with an emotion recognition system based on fuzzy sets. Human faces are detected in images with the Viola-Jones algorithm, and for tracking them in video sequences we used the Camshift algorithm. The detected human faces are transferred to the decisional fuzzy system, which is based on fuzzified measurements of facial variables: the eyebrows, eyelids and mouth. The system can easily determine the emotional state of a person.

  16. A multi-view face recognition system based on cascade face detector and improved Dlib

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we apply the proposed method to recognizing face images taken from different viewing directions, including the horizontal, overhead and looking-up views, and investigate a suitable monitoring scheme. This method works well for multi-view face recognition; it has been simulated and tested, showing satisfactory experimental results.
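
    For the detection stage described above (Haar-like features with an AdaBoost-trained cascade), training a cascade from scratch is beyond a short example, but applying a pre-trained OpenCV cascade shows the runtime side of the same mechanism. This is a sketch rather than the authors' implementation; the multi-view system would use several view-specific cascades instead of the single frontal model, and the scale/neighbor parameters and test image name are illustrative.

```python
# Sketch: running a pre-trained Haar cascade (AdaBoost-trained) face detector
# with OpenCV. A multi-view system would run several view-specific cascades.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)
    # scaleFactor and minNeighbors are illustrative tuning values
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                    minSize=(40, 40))

if __name__ == "__main__":
    img = cv2.imread("sample.jpg")          # assumed test image
    for (x, y, w, h) in detect_faces(img):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", img)
```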

  17. Face Liveness Detection Using Defocus

    PubMed Central

    Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun

    2015-01-01

    In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend from these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been recently developed. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through the feature-level fusion approach. For reliable performance verification, we develop two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594

  18. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting a low-frequency, low-memory and low-power embedded system is presented. The resulting implementation is time deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and its power consumption is ~95 mW on a Virtex5 XC5VLX50T. The overall acceleration gain of the implementation is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
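
    The MB-LBP kernel operation that the hardware IP accelerates is simple enough to show in software form. The following NumPy sketch computes a single multi-block LBP code for one 3x3 arrangement of blocks; block size and position are parameters, and the neighbor/bit ordering is one common convention rather than the paper's exact wiring.

```python
# Sketch: one MB-LBP feature. The 3x3 neighborhood of equally sized blocks is
# reduced to block means; each neighbor mean is thresholded against the center
# mean, producing an 8-bit code (hardware evaluators do exactly this per feature).
import numpy as np

def mb_lbp_code(gray, x, y, block_w, block_h):
    """gray: 2-D grayscale array; (x, y): top-left corner of the 3x3 block grid."""
    means = np.empty((3, 3))
    for by in range(3):
        for bx in range(3):
            block = gray[y + by * block_h: y + (by + 1) * block_h,
                         x + bx * block_w: x + (bx + 1) * block_w]
            means[by, bx] = block.mean()

    center = means[1, 1]
    # Neighbors in clockwise order starting at the top-left block (one convention).
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (by, bx) in enumerate(order):
        if means[by, bx] >= center:
            code |= 1 << bit
    return code
```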

  19. Live face detection based on the analysis of Fourier spectra

    NASA Astrophysics Data System (ADS)

    Li, Jiangwei; Wang, Yunhong; Tan, Tieniu; Jain, Anil K.

    2004-08-01

    Biometrics is a rapidly developing technology used to identify a person based on his or her physiological or behavioral characteristics. To ensure the correctness of authentication, the biometric system must be able to detect and reject the use of a copy of a biometric instead of the live biometric. This function is usually termed "liveness detection". This paper describes a new method for live face detection. Using structure and movement information of the live face, an effective live face detection algorithm is presented. Compared to existing approaches, which concentrate on the measurement of 3D depth information, this method is based on the analysis of the Fourier spectra of a single face image or of face image sequences. Experimental results show that the proposed method has an encouraging performance.
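
    As a rough illustration of the spectral cue described above (no depth measurement, only the 2-D Fourier spectrum of a face image), the sketch below computes the fraction of spectral energy above a radial cutoff; a recaptured photograph tends to lose high-frequency content through the print/recapture chain. The cutoff and threshold are assumptions, not values from the paper.

```python
# Sketch: Fourier-based liveness cue. A recaptured photograph typically has a
# weaker high-frequency tail than a live face imaged directly by the camera.
import numpy as np

def high_frequency_ratio(gray, cutoff=0.25):
    """gray: 2-D grayscale array; cutoff: radius as a fraction of Nyquist."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    total = spectrum.sum()
    return spectrum[r > cutoff].sum() / total if total > 0 else 0.0

def looks_live(gray, threshold=0.02):
    # threshold is illustrative; it would be tuned on live vs. photo samples
    return high_frequency_ratio(gray) > threshold
```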

  20. Enhancing the performance of cooperative face detector by NFGS

    NASA Astrophysics Data System (ADS)

    Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba

    2015-07-01

    Computerized human face detection is an important task of deformable pattern recognition in today's world. Especially in cooperative authentication scenarios like ATM fraud detection, attendance recording, video tracking and video surveillance, the accuracy, memory utilization and speed of the face detection engine have been active areas of research for the last decade. Haar-based face detection and SIFT- or EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted from gray textures. When the input is a high-resolution online video with a fairly large viewing area, a Haar detector needs to search for faces everywhere (say 352×250 pixels) and all the time (e.g., at 30 FPS capture). In the current paper we propose to address both of the aforementioned issues by a neuro-visually inspired method of figure-ground segregation (NFGS) [5], which produces a two-dimensional binary array from a gray face image. The NFGS identifies the reference video frame at a low sampling rate and updates it on significant changes of environment, such as illumination. The proposed algorithm triggers the face detector only when a new entity enters the viewing area. To address detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) fed by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (False Rejection) and FA (False Acceptance) rates of the face detection system.

  1. Interactive display system having a scaled virtual target zone

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2006-06-13

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.

  2. Scrambling for anonymous visual communications

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ebrahimi, Touradj

    2005-08-01

    In this paper, we present a system for anonymous visual communications. The target application is an anonymous video chat. The system identifies faces in the video sequence by means of face detection or skin detection. The corresponding regions are subsequently scrambled. We investigate several approaches for scrambling, either in the image domain or in the transform domain. Experimental results show the effectiveness of the proposed system.

  3. Face verification system for Android mobile devices using histogram based features

    NASA Astrophysics Data System (ADS)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

    This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by a built-in camera on the Android device, and then face detection is implemented using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated as a binary Vector Quantization (VQ) histogram of DCT coefficients in low frequency domains, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.

  4. Interactive display system having a matrix optical detector

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2007-01-23

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. An image beam is projected across the inlet face laterally and transversely for display on the outlet face. An optical detector including a matrix of detector elements is optically aligned with the inlet face for detecting a corresponding lateral and transverse position of an inbound light spot on the outlet face.

  5. Real-time camera-based face detection using a modified LAMSTAR neural network system

    NASA Astrophysics Data System (ADS)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640x480 at 30Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted at auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation-links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.

  6. Face liveness detection using shearlet-based feature descriptors

    NASA Astrophysics Data System (ADS)

    Feng, Litong; Po, Lai-Man; Li, Yuming; Yuan, Fang

    2016-07-01

    Face recognition is a widely used biometric technology due to its convenience but it is vulnerable to spoofing attacks made by nonreal faces such as photographs or videos of valid users. The antispoof problem must be well resolved before widely applying face recognition in our daily life. Face liveness detection is a core technology to make sure that the input face is a live person. However, this is still very challenging using conventional liveness detection approaches of texture analysis and motion detection. The aim of this paper is to propose a feature descriptor and an efficient framework that can be used to effectively deal with the face liveness detection problem. In this framework, new feature descriptors are defined using a multiscale directional transform (shearlet transform). Then, stacked autoencoders and a softmax classifier are concatenated to detect face liveness. We evaluated this approach using the CASIA Face antispoofing database and replay-attack database. The experimental results show that our approach performs better than the state-of-the-art techniques following the provided protocols of these databases, and it is possible to significantly enhance the security of the face recognition biometric system. In addition, the experimental results also demonstrate that this framework can be easily extended to classify different spoofing attacks.

  7. The safety helmet detection technology and its application to the surveillance system.

    PubMed

    Wen, Che-Yen

    2004-07-01

    The Automatic Teller Machine (ATM) plays an important role in the modern economy. It provides a fast and convenient way to process transactions between banks and their customers. Unfortunately, it also provides a convenient way for criminals to get illegal money or use stolen ATM cards to extract money from their victims' accounts. For safety reasons, each ATM has a surveillance system to record customers' face information. However, when criminals use an ATM to withdraw money illegally, they usually hide their faces with something (in Taiwan, criminals usually use safety helmets to block their faces) to avoid the surveillance system recording their face information, which decreases the efficiency of the surveillance system. In this paper, we propose a circle/circular arc detection method based upon the modified Hough transform, and apply it to the detection of safety helmets for the surveillance systems of ATMs. Since the safety helmet location will be within the set of the obtainable circles/circular arcs (if any exist), we use geometric features to verify whether any safety helmet exists in the set. The proposed method can be used to help the surveillance systems record a customer's face information more precisely. If customers wear safety helmets to block their faces, the system can send a message to remind them to take off their helmets. Besides this, the method can be applied to the surveillance systems of banks by providing an early warning safeguard when any "customer" or "intruder" uses a safety helmet to prevent his/her face information from being recorded by the surveillance system. This will make the surveillance system more useful. Real images are used to analyze the performance of the proposed method.
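
    The record's method is a modified Hough transform with geometric verification; as a simplified sketch of the circle-detection part only, OpenCV's standard HoughCircles can produce the candidate circles/arcs that a verification step would then test for helmet-like placement. The parameter values below are illustrative, not the paper's.

```python
# Sketch: candidate circle/circular-arc detection for helmet screening in ATM
# surveillance frames, using OpenCV's standard (not modified) Hough transform.
import cv2
import numpy as np

def helmet_candidates(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)              # suppress noise before voting
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=60,
                               param1=120, param2=40,
                               minRadius=30, maxRadius=120)
    if circles is None:
        return []
    # Each candidate is (cx, cy, r); a later geometric check would verify that
    # the circle sits where a head/helmet is expected in the frame.
    return [tuple(map(int, c)) for c in np.round(circles[0])]
```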

  8. Sunglass detection method for automation of video surveillance system

    NASA Astrophysics Data System (ADS)

    Sikandar, Tasriva; Samsudin, Wan Nur Azhani W.; Hawari Ghazali, Kamarul; Mohd, Izzeldin I.; Fazle Rabbi, Mohammad

    2018-04-01

    Wearing sunglasses to hide the face from surveillance cameras is a common activity in criminal incidents. Therefore, sunglass detection from surveillance video has become a demanding issue in the automation of security systems. In this paper we propose an image processing method to detect sunglasses in surveillance images. Specifically, a unique feature using facial height and width has been employed to identify the covered region of the face. The presence of an area covered by sunglasses is evaluated using the facial height-width ratio. A threshold on the covered-area percentage is used to classify glass-wearing faces. Two different types of glasses have been considered, i.e., eyeglasses and sunglasses. The results of this study demonstrate that the proposed method is able to detect sunglasses in two different illumination conditions, namely room illumination as well as in the presence of sunlight. In addition, due to the multi-level checking in the facial region, this method has 100% accuracy in detecting sunglasses. However, in an exceptional case where fabric surrounding the face has a similar color to the skin, the correct detection rate was found to be 93.33% for eyeglasses.

  9. Automatic Fatigue Detection of Drivers through Yawning Analysis

    NASA Astrophysics Data System (ADS)

    Azim, Tayyaba; Jaffar, M. Arfan; Ramzan, M.; Mirza, Anwar M.

    This paper presents a non-intrusive fatigue detection system based on the video analysis of drivers. The focus of the paper is on how to detect yawning, which is an important cue for determining a driver's fatigue. Initially, the face is located in a video frame through the Viola-Jones face detection method. Then, a mouth window is extracted from the face region, in which lips are searched for through spatial fuzzy c-means (s-FCM) clustering. The degree of mouth openness is extracted on the basis of mouth features, to determine the driver's yawning state. If the yawning state of the driver persists for several consecutive frames, the system concludes that the driver is non-vigilant due to fatigue and is thus warned through an alarm. The system reinitializes when occlusion or misdetection occurs. Experiments were carried out using real data, recorded in day and night lighting conditions, and with users belonging to different races and genders.

  10. Beauty hinders attention switch in change detection: the role of facial attractiveness and distinctiveness.

    PubMed

    Chen, Wenfeng; Liu, Chang Hong; Nakabayashi, Kazuyo

    2012-01-01

    Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of the two locations. However, it is not known whether this spontaneous appraisal for facial beauty also modulates attention in change detection among multiple locations, where a slower, and more controlled search process is simultaneously affected by the magnitude of a change and the facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal for facial beauty affects the detection of identity change among multiple faces. Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it is difficult to detect a change if the new face is similar to the old. The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus that has well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.

  11. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings make extensive use of emotions to convey messages and to interpret them. Emotion detection and face recognition can provide an interface between individuals and technologies. The most successful applications of recognition analysis are in the recognition of faces. Many different techniques have been used to recognize facial expressions and to handle emotion detection under varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking face points and distances. It can automatically identify a viewer's face movements and facial expressions in an image, capturing different aspects of emotion and facial expression.

  12. Image-Based 3D Face Modeling System

    NASA Astrophysics Data System (ADS)

    Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir

    2005-12-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2-3 minutes.

  13. Face recognition system for set-top box-based intelligent TV.

    PubMed

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-11-18

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of a STB is quite low, the smart TV functionalities that can be implemented in a STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low-resource set-top boxes and low-cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions in a viewer's face are detected in an image captured by a camera connected to the STB via low-processing background subtraction and face color filtering; second, the detected candidate regions of the face are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
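
    The first stage described above (low-cost candidate detection on the STB via background subtraction and face-color filtering) can be approximated on a PC with OpenCV's stock background subtractor and a YCrCb skin mask, as in the sketch below. The skin-color bounds, subtractor parameters and minimum area are commonly used illustrative values, not the paper's.

```python
# Sketch: STB-side candidate-region stage: moving foreground from background
# subtraction, intersected with a YCrCb skin-color mask. Surviving regions
# would be cropped and sent to the server for accurate face detection.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)     # illustrative YCrCb bounds
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)

def candidate_face_regions(frame_bgr, min_area=400):
    fg = subtractor.apply(frame_bgr)
    skin = cv2.inRange(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb), SKIN_LO, SKIN_HI)
    mask = cv2.bitwise_and(fg, skin)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```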

  14. A smart technique for attendance system to recognize faces through parallelism

    NASA Astrophysics Data System (ADS)

    Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.

    2017-11-01

    A major part of recognising a person is the face, and with the help of image processing techniques we can exploit the physical features of a person. In the old approach used in schools and colleges, the professor calls each student's name and the attendance for the students is then marked. In this paper we want to deviate from that old approach and adopt a new one using image processing techniques. We present a system for automatically recording the presence of students in a classroom. First a classroom image is taken, and the image is then kept in a data record. To the images stored in the database we apply a system algorithm that includes steps such as histogram classification, noise removal, face detection and face recognition. Using these steps we detect the faces and then compare them with the database. The attendance is marked automatically if the system recognizes the faces.

  15. Lip boundary detection techniques using color and depth information

    NASA Astrophysics Data System (ADS)

    Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek

    2002-01-01

    This paper presents our approach to using a stereo camera to obtain 3-D image data to be used to improve existing lip boundary detection techniques. We show that depth information as provided by our approach can be used to significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information can be used to localize the face. Then we can determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary depending on lighting conditions, background, and the subject's race. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information along with color information can provide more accurate lip boundary detection results as compared to color-only techniques.

  16. Unconstrained face detection and recognition based on RGB-D camera for the visually impaired

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian

    2017-02-01

    It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction procedures are usually frontal, and the acquisition procedures require subjects to get close to the camera so that a frontal face and adequate illumination are guaranteed. Meanwhile, labels of faces are defined manually rather than automatically; most of the time, labels belonging to different classes need to be input one by one. These constraints hinder practical assisting applications for VIP. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it does not require the frontal pose or uniform illumination required by previous algorithms. The contributions of this work lie in three aspects. First, a real-time frontal-face synthesizing enhancement is implemented, and the frontal faces help to increase the recognition rate, which is proved with experimental results. Second, an RGB-D camera plays a significant role in our system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, we propose to use neural networks to train a face recognition system, and Principal Component Analysis (PCA) is applied to pre-refine the input data. This system is expected to provide convenient help for VIP to get familiar with others, and to enable them to recognize people once the system is sufficiently trained.
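
    The final stage above (PCA pre-refinement feeding a neural network) can be expressed compactly as a pipeline; the sketch below uses scikit-learn's PCA and MLPClassifier as a generic stand-in for the authors' network, with the component count and layer size as assumptions.

```python
# Sketch: PCA-refined inputs feeding a neural-network face classifier, where
# each class label is a person identity gathered automatically by the tracker.
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_recognizer(n_components=80):
    return make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components, whiten=True),
        MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),
    )

# Usage: X is an (n_faces x n_pixels) matrix of flattened, frontal-synthesized
# face crops; y holds the automatically assigned identity labels.
# recognizer = build_recognizer()
# recognizer.fit(X, y)
# predicted_identity = recognizer.predict(new_face.reshape(1, -1))
```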

  17. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and became one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features, and comparable results to studies using whole face information, only slightly lower by ~2.5% compared to the best whole-face system while using only ~1/3 of the facial region.

  18. Multivoxel patterns in face-sensitive temporal regions reveal an encoding schema based on detecting life in a face.

    PubMed

    Looser, Christine E; Guntupalli, Jyothi S; Wheatley, Thalia

    2013-10-01

    More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. Here, we show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog) while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.

  19. The biometric-based module of smart grid system

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Ermoshkina, A.

    2015-10-01

    Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective Neural Network is developed. To form the selective Neural Network, the biometric-based module uses a method that includes three main stages: preliminary processing of the image, face localization and face recognition. Experiments on the Yale face database show that (i) the selective Neural Network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance, as compared to some existing subspace-based methods.

  20. Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees

    NASA Astrophysics Data System (ADS)

    Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.

    2017-05-01

    A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces in positions other than frontal through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results showed a high detection rate for CEDT on images of standard size. The algorithm increases the area under the ROC curve by 13% compared to a standard Viola-Jones face detection algorithm. The final realization of the given algorithm consists of 5 different cascades for frontal/non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded system and mobile device industries because it can reduce the cost of hardware and make battery life longer.
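
    The record describes a cascade of decision-tree ensembles in which a window can be rejected early by any stage; a minimal sketch of that control flow, using scikit-learn tree ensembles as the per-stage classifiers and illustrative stage sizes and thresholds (not the paper's training procedure), is given below.

```python
# Sketch: cascade evaluation with early rejection. Each stage is an ensemble of
# decision trees; a window survives only if every stage's score clears its
# threshold, so most non-face windows exit after the first cheap stages.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

class TreeCascade:
    def __init__(self, stage_sizes=(10, 30, 100), thresholds=(0.3, 0.4, 0.5)):
        self.stages = [GradientBoostingClassifier(n_estimators=n, max_depth=2)
                       for n in stage_sizes]
        self.thresholds = thresholds

    def fit(self, X, y):
        for stage in self.stages:   # simplified: real cascades retrain each stage
            stage.fit(X, y)         # on the survivors of the previous stages
        return self

    def predict_window(self, x):
        x = np.asarray(x).reshape(1, -1)
        for stage, thr in zip(self.stages, self.thresholds):
            if stage.predict_proba(x)[0, 1] < thr:
                return 0            # rejected early: not a face
        return 1                    # passed every stage: face
```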

  1. Reducing impaired-driving recidivism using advanced vehicle-based alcohol detection systems : a report to Congress

    DOT National Transportation Integrated Search

    2007-12-01

    Vehicle-based alcohol detection systems use technologies designed to detect the presence of alcohol in a driver. Technology suitable for use in all vehicles that will detect an impaired driver faces many challenges including public acceptability, pas...

  2. Face detection on distorted images using perceptual quality-aware features

    NASA Astrophysics Data System (ADS)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.

  3. Support vector machine for automatic pain recognition

    NASA Astrophysics Data System (ADS)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion, and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects the face in the stored video frame using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural-network-based and eigenimage-based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.

  4. Gender classification system in uncontrolled environments

    NASA Astrophysics Data System (ADS)

    Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei

    2011-01-01

    Most face analysis systems available today operate mainly on restricted databases of images in terms of size, age and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. Actually, in non-guided real-time supervision, the face pictures taken may often be partially covered and show more or less head rotation. In this paper, a special system intended for real-time surveillance with an un-calibrated camera and non-guided photography is described. It mainly consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as texture normalization. Best-angle faces are determined by PCA reconstruction, which amounts to an implicit face alignment and results in a large increase in the accuracy of gender classification. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to fully include facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHOG (pyramid histogram of gradients) is proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods in the context of gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images and the CAS (Chinese Academy of Sciences) face database. Some encouraging results are obtained.

  5. A multi-camera system for real-time pose estimation

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color, and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
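
    The geometric yaw estimate above (a spherical head, eyes-mouth triangle projected onto the image plane) reduces, in its simplest form, to relating the horizontal offset of the eye midpoint to the head's rotation. The sketch below is a simplified version of that idea under a spherical-head assumption; it is not the paper's exact derivation, and the half-face-width radius approximation is an assumption.

```python
# Sketch: simplified yaw estimate from detected eye positions under a
# spherical-head model. The eye midpoint shifts horizontally relative to the
# head centre as the head rotates about the vertical axis.
import math

def estimate_yaw_deg(left_eye, right_eye, face_box):
    """left_eye/right_eye: (x, y) pixel positions; face_box: (x, y, w, h)."""
    fx, fy, fw, fh = face_box
    head_cx = fx + fw / 2.0
    head_r = fw / 2.0                        # sphere radius approximated by half width
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2.0
    offset = (eye_mid_x - head_cx) / head_r  # roughly in [-1, 1] from frontal to profile
    offset = max(-1.0, min(1.0, offset))
    return math.degrees(math.asin(offset))

# Usage with illustrative values: eyes shifted left of centre -> negative yaw.
# print(estimate_yaw_deg((130, 90), (170, 90), (100, 60, 120, 140)))
```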

  6. Hole Feature on Conical Face Recognition for Turning Part Model

    NASA Astrophysics Data System (ADS)

    Zubair, A. F.; Abu Mansor, M. S.

    2018-03-01

    Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM, and pre-processing of the CAD data in the CAPP system is essential. For CNC turning parts, the conical faces of a part model must be recognised in addition to cylindrical and planar faces. Because the sine and cosine structure of the cone radius differs between models, face identification in the automatic feature recognition of the part model needs special attention. This paper focuses on hole features on conical faces that can be detected by the ACIS CAD solid modeller via the SAT file. Detection algorithms for face topology were generated and compared. The study shows different face setups for similar conical part models with different hole-type features. Three types of holes were compared, and the differences between merged and unmerged faces were studied.

  7. Applying face identification to detecting hijacking of airplane

    NASA Astrophysics Data System (ADS)

    Luo, Xuanwen; Cheng, Qiang

    2004-09-01

    The hijacking of airplanes by terrorists and the crashes into the World Trade Center were a disaster for civilization, and preventing hijackings is critical to homeland security. Reporting a hijacking in time, limiting the hijackers' ability to operate the plane, and landing the plane at the nearest airport could be an effective way to avoid such a tragedy. Image processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, the face images of the pilots are entered into a face identification system installed in the airplane. A camera in front of the pilot's seat keeps capturing the pilot's face during the flight and comparing it with the pre-entered pilot face images. If a different face is detected, a warning signal is sent to the ground automatically. At the same time, the automatic cruise system is engaged or the plane is controlled from the ground, so the hijackers have no control over the plane, which is landed at the nearest or most appropriate airport under the control of the ground or the cruise system. This technique could also be used in the automobile industry as an image key to deter car theft.

  8. Non-intrusive head movement analysis of videotaped seizures of epileptic origin.

    PubMed

    Mandal, Bappaditya; Eng, How-Lung; Lu, Haiping; Chan, Derrick W S; Ng, Yen-Ling

    2012-01-01

    In this work we propose a non-intrusive video analytic system for analyzing the movement of a patient's body parts in an Epilepsy Monitoring Unit. The system utilizes skin color modeling, head/face pose template matching and face detection to analyze and quantify head movements. Epileptic patients' heads are analyzed holistically to distinguish seizure movements from normal random movements. The patient is not required to wear any special clothing, markers or sensors, so the system is totally non-intrusive. The user initializes the person-specific skin color and selects a few face/head poses in the initial frames. The system then tracks the head/face and extracts spatio-temporal features. Support vector machines are then applied to these features to classify seizure-like movements versus normal random movements. Experiments are performed on numerous long-duration video sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.

  9. Facial detection using deep learning

    NASA Astrophysics Data System (ADS)

    Sharma, Manik; Anuradha, J.; Manne, H. K.; Kashyap, G. S. C.

    2017-11-01

    In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their name. Now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is pretty much as good as humans can do. This technology is called face detection. Face detection is a popular topic in biometrics, and we have surveillance cameras in public places for video capture as well as security purposes. The main advantages of this approach over others are uniqueness and acceptance, and both speed and accuracy are needed for identification. But face detection is really a series of several related problems: first, look at a picture and find all the faces in it; second, focus on each face and understand that even if a face is turned in an unusual direction or in bad lighting, it is still the same person; third, select features that can be used to identify each face uniquely, such as the size of the eyes, the face, etc.; finally, compare these features to the data we have to find the person's name. As a human, your brain is wired to do all of this automatically and instantly; in fact, humans are exceptionally good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them how to do each step of this process separately. The growth of face detection is largely driven by growing applications such as credit card verification, surveillance video images, and authentication for banking and security system access.

  10. Face recognition for criminal identification: An implementation of principal component analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.

    2017-10-01

    In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this type of identification is constrained because most criminals nowadays are careful not to leave their thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. The CCTV footage can be used to identify suspects at a scene. However, because little software has been developed to automatically match a face in the footage with recorded photos of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. The system is able to detect and recognize faces automatically, which will help law enforcement detect or recognize suspects when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
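    A minimal eigenface-style sketch of the described PCA matching (an illustration under assumed data shapes, not the authors' implementation) is given below: gallery and probe images are projected onto the top principal components and matched by nearest neighbour in that subspace.

      import numpy as np
      from sklearn.decomposition import PCA

      # gallery: flattened grayscale record photos, one row per registered person (placeholder data)
      gallery = np.random.rand(100, 64 * 64)
      probe = np.random.rand(64 * 64)            # face cropped from CCTV footage (placeholder)

      pca = PCA(n_components=50).fit(gallery)    # learn the principal components ("eigenfaces")
      gallery_proj = pca.transform(gallery)
      probe_proj = pca.transform(probe.reshape(1, -1))

      distances = np.linalg.norm(gallery_proj - probe_proj, axis=1)
      best_match = int(np.argmin(distances))     # index of the closest registered face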

  11. Detecting Visually Observable Disease Symptoms from Faces.

    PubMed

    Wang, Kuan; Luo, Jiebo

    2016-12-01

    Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution for detecting visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories in order to narrow down the possible medical causes of what is detected. Our method contrasts with most existing approaches, which are limited by the availability of labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.

  12. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
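    The hybrid-feature idea can be sketched as follows in Python (the CNN backbone, LBP radii and classifier settings below are assumptions, not the authors' exact configuration): a deep feature vector is concatenated with multi-level LBP histograms and the result is classified as real or attack by an SVM.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import SVC

      def mlbp_histogram(gray_face, radii=(1, 2, 3)):
          """Multi-level LBP: concatenate uniform-LBP histograms computed at several radii."""
          feats = []
          for r in radii:
              p = 8 * r
              lbp = local_binary_pattern(gray_face, P=p, R=r, method="uniform")
              hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
              feats.append(hist)
          return np.concatenate(feats)

      def hybrid_feature(gray_face, cnn_extractor):
          deep = cnn_extractor(gray_face)          # assumed callable returning a 1-D deep feature vector
          return np.concatenate([deep, mlbp_histogram(gray_face)])

      # training (placeholders): X = np.stack([hybrid_feature(f, cnn_extractor) for f in faces])
      # SVC(kernel="rbf").fit(X, y)                # y: 1 = real face, 0 = presentation attack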

  13. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  14. A causal relationship between face-patch activity and face-detection behavior.

    PubMed

    Sadagopan, Srivatsun; Zarco, Wilbert; Freiwald, Winrich A

    2017-04-04

    The primate brain contains distinct areas densely populated by face-selective neurons. One of these, face-patch ML, contains neurons selective for contrast relationships between face parts. Such contrast-relationships can serve as powerful heuristics for face detection. However, it is unknown whether neurons with such selectivity actually support face-detection behavior. Here, we devised a naturalistic face-detection task and combined it with fMRI-guided pharmacological inactivation of ML to test whether ML is of critical importance for real-world face detection. We found that inactivation of ML impairs face detection. The effect was anatomically specific, as inactivation of areas outside ML did not affect face detection, and it was categorically specific, as inactivation of ML impaired face detection while sparing body and object detection. These results establish that ML function is crucial for detection of faces in natural scenes, performing a critical first step on which other face processing operations can build.

  15. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
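    The feature and matching stages might look like the following Python sketch (filter parameters, grid size and orientation count are assumptions, not the module's actual settings): Gabor responses are sampled on a regular grid over the normalized face and the resulting vector is labelled by the nearest neighbour under cosine similarity.

      import numpy as np
      import cv2

      def gabor_grid_features(face, grid=8, ksize=21):
          """face: normalized grayscale face image; returns a grid-sampled Gabor feature vector."""
          feats = []
          for theta in np.arange(0, np.pi, np.pi / 4):              # 4 orientations (assumed)
              kernel = cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5)
              response = cv2.filter2D(face.astype(np.float32), cv2.CV_32F, kernel)
              h, w = response.shape
              ys = np.linspace(0, h - 1, grid, dtype=int)
              xs = np.linspace(0, w - 1, grid, dtype=int)
              feats.append(response[np.ix_(ys, xs)].ravel())        # sample on a regular grid
          return np.concatenate(feats)

      def nearest_expression(query_vec, model_vecs, labels):
          """Nearest-neighbour classification with cosine similarity (six emotions + neutral)."""
          sims = [np.dot(query_vec, m) / (np.linalg.norm(query_vec) * np.linalg.norm(m))
                  for m in model_vecs]
          return labels[int(np.argmax(sims))]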

  16. Observed touch on a non-human face is not remapped onto the human observer's own face.

    PubMed

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.

  17. Observed Touch on a Non-Human Face Is Not Remapped onto the Human Observer's Own Face

    PubMed Central

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer. PMID:24250781

  18. Face Pareidolia in the Rhesus Monkey.

    PubMed

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  19. Adaptive Integration and Optimization of Automated and Neural Processing Systems - Establishing Neural and Behavioral Benchmarks of Optimized Performance

    DTIC Science & Technology

    2012-07-01

    …detection only condition followed either face detection only or dual task, thus ensuring that participants were practiced in face detection before…

  20. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the recent large increase in video surveillance data, collected in an effort to maintain high security in public places, more robust systems are needed to analyze this data and make tasks like face recognition realistic in challenging environments. In this paper we explore a watch-list scenario in which an appearance-based model classifies query faces from low-resolution videos as either watch-list or non-watch-list faces, where the watch-list contains the people we are interested in recognizing. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images collected from Yahoo News over a period of time in previous work in the field. We do this matching efficiently to arrive at a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms for anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.

  1. Standoff imaging of a masked human face using a 670 GHz high resolution radar

    NASA Astrophysics Data System (ADS)

    Kjellgren, Jan; Svedin, Jan; Cooper, Ken B.

    2011-11-01

    This paper presents an exploratory attempt to use high-resolution radar measurements for face identification in forensic applications. An imaging radar system developed by JPL was used to measure a human face at 670 GHz. Frontal views of the face were measured both with and without a ski mask at a range of 25 m. The realized spatial resolution was roughly 1 cm in all three dimensions. The surfaces of the ski mask and the face were detected by using the two dominating reflections from amplitude data. Various methods for visualization of these surfaces are presented. The possibility to use radar data to determine certain face distance measures between well-defined face landmarks, typically used for anthropometric statistics, was explored. The measures used here were face length, frontal breadth and interpupillary distance. In many cases the radar system seems to provide sufficient information to exclude an innocent subject from suspicion. For an accurate identification it is believed that a system must provide significantly more information.

  2. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by a special-purpose edge detection and then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.

  3. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.

  4. The wide window of face detection.

    PubMed

    Hershler, Orit; Golan, Tal; Bentin, Shlomo; Hochstein, Shaul

    2010-08-20

    Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.

  5. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    NASA Astrophysics Data System (ADS)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

    A novel system for automatic privacy protection in digital media based on spectral domain watermarking and JPEG compression is described in the present paper. In a first step, private areas are detected; a detection method is presented for this purpose. The implemented method uses Haar cascades to detect faces, integral images are used to speed up the calculations and the detection, and multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.
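    For the detection stage only, a short OpenCV sketch is given below (the cascade file, parameters and example input are assumptions): faces are found with a Haar cascade, which uses integral images internally, and overlapping raw detections of the same face are merged by the minNeighbors grouping.

      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      frame = cv2.imread("surveillance_frame.jpg")          # hypothetical input frame
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

      # minNeighbors > 0 groups multiple raw hits on one face into a single box
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

      for (x, y, w, h) in faces:
          # these rectangles are the private areas whose size and position would be watermarked
          print("private region:", x, y, w, h)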

  6. Helical Face Gear Development Under the Enhanced Rotorcraft Drive System Program

    NASA Technical Reports Server (NTRS)

    Heath, Gregory F.; Slaughter, Stephen C.; Fisher, David J.; Lewicki, David G.; Fetty, Jason

    2011-01-01

    U.S. Army goals for the Enhanced Rotorcraft Drive System Program are to achieve a 40 percent increase in horsepower to weight ratio, a 15 dB reduction in drive system generated noise, a 30 percent reduction in drive system operating, support, and acquisition cost, and 75 percent automatic detection of critical mechanical component failures. Boeing's technology transition goals are that the operational endurance level of the helical face gearing and related split-torque designs be validated to a TRL 6, and that analytical and manufacturing tools be validated. Helical face gear technology is being developed in this project to augment, and transition into, a Boeing AH-64 Block III split-torque face gear main transmission stage, to yield increased power density and reduced noise. To date, helical face gear grinding development on Northstar's new face gear grinding machine and pattern-development tests at the NASA Glenn/U.S. Army Research Laboratory have been completed and are described.

  7. The shape of the face template: geometric distortions of faces and their detection in natural scenes.

    PubMed

    Pongakkasira, Kaewmart; Bindemann, Markus

    2015-04-01

    Human face detection might be driven by skin-coloured face-shaped templates. To explore this idea, this study compared the detection of faces for which the natural height-to-width ratios were preserved with distorted faces that were stretched vertically or horizontally. The impact of stretching on detection performance was not obvious when faces were equated to their unstretched counterparts in terms of their height or width dimension (Experiment 1). However, stretching impaired detection when the original and distorted faces were matched for their surface area (Experiment 2), and this was found with both vertically and horizontally stretched faces (Experiment 3). This effect was evident in accuracy, response times, and also observers' eye movements to faces. These findings demonstrate that height-to-width ratios are an important component of the cognitive template for face detection. The results also highlight important differences between face detection and face recognition. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Face detection and eyeglasses detection for thermal face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2012-01-01

    Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition using thermal images. Infrared light cannot pass through glasses, so glasses appear as dark areas in a thermal image. One possible solution is to detect the eyeglasses and exclude the eyeglasses areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed, in which region growing and morphology operations are used to segment the body of a subject; the derivatives of two projections (horizontal and vertical) are then calculated and analyzed to locate a minimal rectangle containing the face area. The search region for a pair of eyeglasses lies within the detected face area. The eyeglasses detection algorithm produces either a binary mask if eyeglasses are present, or an empty set if there are no eyeglasses. The proposed eyeglasses detection algorithm employs block processing, region growing, and prior knowledge (i.e., the low mean and variance within glasses areas, and the shapes and locations of eyeglasses). The results of face detection and eyeglasses detection are quantitatively measured and analyzed using manually defined ground truths (for both face and eyeglasses). Our experimental results show that the proposed face detection and eyeglasses detection algorithms perform very well against the predefined ground truths.
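    A much-simplified sketch of the projection profile step is shown below (the threshold and the assumed face aspect ratio are placeholders; the paper analyzes profile derivatives rather than a fixed threshold): after the body is segmented, row and column sums of the binary mask bound a candidate face rectangle.

      import numpy as np

      def face_bounding_box(body_mask, frac=0.2):
          """body_mask: binary image (1 = body pixel) from region growing / morphology."""
          rows = body_mask.sum(axis=1).astype(float)       # vertical projection profile
          cols = body_mask.sum(axis=0).astype(float)       # horizontal projection profile
          row_idx = np.where(rows > frac * rows.max())[0]
          col_idx = np.where(cols > frac * cols.max())[0]
          top, left, right = row_idx[0], col_idx[0], col_idx[-1]
          bottom = top + int(0.75 * (right - left))        # assumed face height-to-width ratio
          return top, bottom, left, right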

  9. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern word (FPW) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all FPWs being compared (no further transforms). A high identification rate (97.44% with Top-1 match) has been achieved on our preliminary face dataset (of 39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.
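    The identification step by Hamming distance can be illustrated with the short sketch below (the binary face pattern word encoding and the mask handling are assumptions): bits under the eyeglasses mask are excluded and the gallery FPW with the smallest normalized distance is the match.

      import numpy as np

      def masked_hamming(fpw_a, fpw_b, valid_mask=None):
          """fpw_a, fpw_b: 1-D boolean face pattern words; valid_mask: bits outside the eyeglasses region."""
          if valid_mask is None:
              valid_mask = np.ones_like(fpw_a, dtype=bool)
          diff = np.logical_xor(fpw_a, fpw_b) & valid_mask
          return diff.sum() / valid_mask.sum()             # normalized distance in [0, 1]

      # identification: the gallery FPW with the smallest masked Hamming distance is the Top-1 match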

  10. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.

  11. Face detection assisted auto exposure: supporting evidence from a psychophysical study

    NASA Astrophysics Data System (ADS)

    Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani

    2010-01-01

    Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g. auto exposure) and adding new features to cameras (e.g. blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images, used in this study, was captured using Canon Digital Single Lens Reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain optimal exposure along with the upper and lower bounds of exposure for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground-truth for face regions of interest. The remaining images do not have any faces or the faces are too small to be considered detectable. The two face detection algorithms are different in resource requirements and in performance. FD-A uses less memory and gate counts compared to FD-B, but FD-B detects more faces and has less false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure. However, the presence of false positives would negatively impact the added benefit.
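    One plausible form of face-assisted metering (the target level and gain limits below are assumptions, not the algorithm evaluated in the study) is to drive exposure toward a mid-tone level computed over the detected face regions instead of the whole scene, as in this short sketch:

      import numpy as np

      def ae_gain(gray, face_boxes, target=118.0, max_gain=4.0):
          """gray: 8-bit luminance image; face_boxes: list of (x, y, w, h) from the face detector."""
          if face_boxes:
              means = [gray[y:y + h, x:x + w].mean() for (x, y, w, h) in face_boxes]
              current = float(np.mean(means))              # meter on the faces
          else:
              current = float(gray.mean())                 # fall back to global metering
          return float(np.clip(target / max(current, 1e-3), 1.0 / max_gain, max_gain))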

  12. A robust human face detection algorithm

    NASA Astrophysics Data System (ADS)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications, such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histograms, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
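    The first two stages of such a pipeline could be sketched as follows (the color thresholds and geometric limits are assumptions): a skin mask is built from a color rule, cleaned with morphological opening and closing, and candidate blobs with face-like geometry are kept for the later mouth/eye verification.

      import cv2

      def face_candidates(bgr):
          ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
          skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))        # rough skin range (assumed)
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
          skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)           # remove speckle
          skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)          # fill small holes

          contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          boxes = []
          for c in contours:
              x, y, w, h = cv2.boundingRect(c)
              if w * h > 400 and 0.6 < w / float(h) < 1.4:                # geometric plausibility
                  boxes.append((x, y, w, h))                              # verify eyes/mouth next
          return boxes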

  13. Familiarity facilitates feature-based face processing.

    PubMed

    Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida

    2017-01-01

    Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.

  14. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  15. Pick on someone your own size: the detection of threatening facial expressions posed by both child and adult models.

    PubMed

    LoBue, Vanessa; Matthews, Kaleigh; Harvey, Teresa; Thrasher, Cat

    2014-02-01

    For decades, researchers have documented a bias for the rapid detection of angry faces in adult, child, and even infant participants. However, despite the age of the participant, the facial stimuli used in all of these experiments were schematic drawings or photographs of adult faces. The current research is the first to examine the detection of both child and adult emotional facial expressions. In our study, 3- to 5-year-old children and adults detected angry, sad, and happy faces among neutral distracters. The depicted faces were of adults or of other children. As in previous work, children detected angry faces more quickly than happy and neutral faces overall, and they tended to detect the faces of other children more quickly than the faces of adults. Adults also detected angry faces more quickly than happy and sad faces even when the faces depicted child models. The results are discussed in terms of theoretical implications for the development of a bias for threat in detection. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Interactive display system having a digital micromirror imaging device

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard; Kaull, Lisa; Brewster, Calvin

    2006-04-11

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector cooperates with a digital imaging device, e.g. a digital micromirror imaging device, for projecting an image through the panel for display on the outlet face. The imaging device includes an array of mirrors tiltable between opposite display and divert positions. The display positions reflect an image light beam from the projector through the panel for display on the outlet face. The divert positions divert the image light beam away from the panel, and are additionally used for reflecting a probe light beam through the panel toward the outlet face. Covering a spot on the panel, e.g. with a finger, reflects the probe light beam back through the panel toward the inlet face for detection thereat and providing interactive capability.

  17. Energy conservation using face detection

    NASA Astrophysics Data System (ADS)

    Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.

    2011-10-01

    Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows; the present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based and color-based methods. Basically, the video of the subject is converted into images, which are then selected manually for processing. However, several factors, such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions and compression artifacts, make face detection difficult. This paper reports an algorithm for the conservation of energy using face detection for various devices. The present paper suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
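    One way the described scheme could be realized is sketched below (the dimming factor is an assumption): the whole frame is dimmed and only the detected face region is re-equalized so it stays readable.

      import cv2
      import numpy as np

      def energy_saving_frame(gray, face_box, dim_factor=0.5):
          """gray: 8-bit grayscale frame; face_box: (x, y, w, h) from a face detector."""
          out = (gray.astype(np.float32) * dim_factor).astype(np.uint8)       # dim the whole frame
          x, y, w, h = face_box
          out[y:y + h, x:x + w] = cv2.equalizeHist(gray[y:y + h, x:x + w])    # keep the face region readable
          return out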

  18. Automatically Log Off Upon Disappearance of Facial Image

    DTIC Science & Technology

    2005-03-01

    log off a PC when the user’s face disappears for an adjustable time interval. Among the fundamental technologies of biometrics, facial recognition is... facial recognition products. In this report, a brief overview of face detection technologies is provided. The particular neural network-based face...ensure that the user logging onto the system is the same person. Among the fundamental technologies of biometrics, facial recognition is the only

  19. Mapping multisensory parietal face and body areas in humans.

    PubMed

    Huang, Ruey-Song; Chen, Ching-fu; Tran, Alyssa T; Holstein, Katie L; Sereno, Martin I

    2012-10-30

    Detection and avoidance of impending obstacles is crucial to preventing head and body injuries in daily life. To safely avoid obstacles, locations of objects approaching the body surface are usually detected via the visual system and then used by the motor system to guide defensive movements. Mediating between visual input and motor output, the posterior parietal cortex plays an important role in integrating multisensory information in peripersonal space. We used functional MRI to map parietal areas that see and feel multisensory stimuli near or on the face and body. Tactile experiments using full-body air-puff stimulation suits revealed somatotopic areas of the face and multiple body parts forming a higher-level homunculus in the superior posterior parietal cortex. Visual experiments using wide-field looming stimuli revealed retinotopic maps that overlap with the parietal face and body areas in the postcentral sulcus at the most anterior border of the dorsal visual pathway. Starting at the parietal face area and moving medially and posteriorly into the lower-body areas, the median of visual polar-angle representations in these somatotopic areas gradually shifts from near the horizontal meridian into the lower visual field. These results suggest the parietal face and body areas fuse multisensory information in peripersonal space to guard an individual from head to toe.

  20. Efficient search for a face by chimpanzees (Pan troglodytes).

    PubMed

    Tomonaga, Masaki; Imura, Tomoko

    2015-07-16

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.

  1. Efficient search for a face by chimpanzees (Pan troglodytes)

    PubMed Central

    Tomonaga, Masaki; Imura, Tomoko

    2015-01-01

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces-but not monkey faces-efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model. PMID:26180944

  2. Adaboost multi-view face detection based on YCgCr skin color model

    NASA Astrophysics Data System (ADS)

    Lan, Qi; Xu, Zhiyong

    2016-09-01

    The traditional Adaboost face detection algorithm uses Haar-like features to train face classifiers, and its detection error rate is low in face regions. However, against complex backgrounds the classifiers easily misdetect background regions whose gray-level distribution resembles that of faces, so the error rate of the traditional Adaboost algorithm is high. As one of the most important features of a face, skin color clusters well in the YCgCr color space, and non-face areas can be quickly excluded with a skin color model. Therefore, combining the advantages of the Adaboost algorithm and skin color detection, this paper proposes an Adaboost face detection method based on a YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method improves significantly in detection accuracy and error rate.
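    The skin-color pre-filter can be sketched as below (the YCgCr conversion follows the common definition that replaces Cb with a green-difference channel, and the skin thresholds are placeholders rather than the paper's values): pixels outside the skin range are excluded before the Adaboost classifier is applied.

      import numpy as np

      def skin_mask_ycgcr(rgb, cg_range=(85, 135), cr_range=(130, 175)):
          """rgb: uint8 image of shape (h, w, 3); returns a boolean skin mask."""
          r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
          cg = 128 + (-81.085 * r + 112.0 * g - 30.915 * b) / 255.0
          cr = 128 + (112.0 * r - 93.786 * g - 18.214 * b) / 255.0
          return ((cg >= cg_range[0]) & (cg <= cg_range[1]) &
                  (cr >= cr_range[0]) & (cr <= cr_range[1]))

      # only detection windows overlapping the skin mask are passed to the Haar/Adaboost cascade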

  3. Efficient human face detection in infancy.

    PubMed

    Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A

    2016-01-01

    Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.

  4. Driver face tracking using semantics-based feature of eyes on single FPGA

    NASA Astrophysics Data System (ADS)

    Yu, Ying-Hao; Chen, Ji-An; Ting, Yi-Siang; Kwok, Ngaiming

    2017-06-01

    Tracking the driver's face is essential for driving safety control. Such systems are usually designed with complicated face recognition algorithms running on powerful computers. The design problem concerns not only the detection rate but also component damage in rigorous environments due to vibration, heat, and humidity. A feasible strategy to counteract this damage is to integrate the entire system into a single chip in order to achieve minimum installation dimensions, weight, power consumption, and exposure to air. Meanwhile, an extraordinary methodology is also indispensable to overcome the dilemma of low computing capability versus real-time performance on a low-end chip. In this paper, a novel driver face tracking system is proposed that employs semantics-based vague image representation (SVIR) for minimal hardware resource usage on an FPGA, while real-time performance is guaranteed at the same time. Our experimental results indicate that the proposed face tracking system is viable and promising for future smart car designs.

  5. Fraudulent ID using face morphs: Experiments on human and automatic recognition

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.

    2017-01-01

    Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people’s ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to ‘trained’ human viewers—i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security. PMID:28328928

  6. Fraudulent ID using face morphs: Experiments on human and automatic recognition.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2017-01-01

    Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people's ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to 'trained' human viewers-i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security.

  7. Appearance-based multimodal human tracking and identification for healthcare in the digital home.

    PubMed

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-08-05

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.

  8. Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home

    PubMed Central

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-01-01

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare. PMID:25098207

  9. Searching for differences in race: is there evidence for preferential detection of other-race faces?

    PubMed

    Lipp, Ottmar V; Terry, Deborah J; Smith, Joanne R; Tellegen, Cassandra L; Kuebbeler, Jennifer; Newey, Mareka

    2009-06-01

    Previous research has suggested that, like animal and social fear-relevant stimuli, other-race faces (African American) are detected preferentially in visual search. Three experiments using Chinese or Indonesian faces as other-race faces yielded the opposite pattern of results: faster detection of same-race faces among other-race faces. This apparently inconsistent pattern of results was resolved by showing that Asian and African American faces are detected preferentially in tasks that have small stimulus sets and employ fixed target searches. Asian and African American other-race faces are found more slowly among Caucasian face backgrounds if larger stimulus sets are used in tasks with a variable mapping of stimulus to background or target. Thus, preferential detection of other-race faces was not found under task conditions in which preferential detection of animal and social fear-relevant stimuli is evident. Although consistent with the view that same-race faces are processed in more detail than other-race faces, the current findings suggest that other-race faces do not draw attention preferentially.

  10. Spalax™ new generation: A sensitive and selective noble gas system for nuclear explosion monitoring.

    PubMed

    Le Petit, G; Cagniant, A; Gross, P; Douysset, G; Topin, S; Fontaine, J P; Taffary, T; Moulin, C

    2015-09-01

    In the context of the verification regime of the Comprehensive Nuclear-Test-Ban Treaty (CTBT), CEA is developing a new generation (NG) of SPALAX™ system for atmospheric radioxenon monitoring. These systems are able to extract more than 6 cm³ of pure xenon from air samples every 12 h and to measure the four relevant xenon radioactive isotopes using a high resolution detection system operating in electron-photon coincidence mode. This paper presents the performance of the SPALAX™ NG prototype in operation at the Bruyères-le-Châtel CEA centre, integrating the most recent CEA developments. It especially focuses on an innovative detection system made up of a gas cell equipped with two face-to-face silicon detectors associated with one or two germanium detectors. Minimum Detectable activity Concentrations (MDCs) of environmental samples were calculated to be approximately 0.1 mBq/m³ for the isotopes ¹³¹ᵐXe, ¹³³ᵐXe and ¹³³Xe, and 0.4 mBq/m³ for ¹³⁵Xe (single germanium configuration). The detection system might be used to simultaneously measure particulate and noble gas samples from the CTBT International Monitoring System (IMS). That possibility could lead to new capacities for particulate measurements by allowing electron-photon coincidence detection of certain fission products. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Results from Evaluation of Three Commercial Off the shelf Face Recognition Systems on Chokepoint Dataset

    DTIC Science & Technology

    2014-09-01

    …curves. Level 2 or subject-based analysis describes the performance of the system using the so-called “Doddington’s Zoo” categorization of individuals, which detects whether an individual belongs to an easier or a harder class of people that the system is able… Marcialis, and F. Roli, “An experimental analysis of the relationship between biometric template update and the Doddington’s zoo: A case study in face…

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polese, Luigi Gentile; Brackney, Larry

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
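
    A minimal sketch of the sensor-integration idea in this record (hypothetical signal names and a simple OR-style fusion rule; the patented module may weight or gate the signals differently): the occupancy signal is asserted when any of the motion, people, or face detection signals indicates presence.

        def integrate_occupancy(motion_detected, people_detected, face_detected):
            """Fuse the three detection signals into a single occupancy signal.

            Each argument is a boolean produced by the corresponding detection
            module for the current image frame. The monitored volume is reported
            as occupied if any module detects presence (a simple OR rule).
            """
            return motion_detected or people_detected or face_detected

        # Example: a face is detected even though no gross motion is seen.
        print(integrate_occupancy(False, False, True))  # -> True (occupied)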

  13. From face processing to face recognition: Comparing three different processing levels.

    PubMed

    Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J

    2017-01-01

    Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure paradigm (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. In favor of the 'superordinate advantage' hypothesis or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing levels as reflected by the face inversion effects. Furthermore, they underline how verifying that a face is from a target person and detecting a face as familiar - both often referred to as "Face Recognition" - in fact differs. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using the vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighbor maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics are derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is accompanied by anthropometrical information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
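
    The vertically and horizontally oriented gradient projections can be sketched as follows (a simplified NumPy illustration; the paper's exact gradient operator, fuzzy similarity measure, and anthropometric constraints are not reproduced): rows and columns where the summed gradient magnitude peaks are candidate locations of facial features such as the eyes or mouth.

        import numpy as np

        def gradient_projections(gray):
            """Return horizontal and vertical projections of gradient magnitude.

            gray: 2-D array of grayscale intensities (face region).
            The row projection peaks near horizontally elongated features
            (eyes, mouth); the column projection helps locate their extent.
            """
            gy, gx = np.gradient(gray.astype(float))
            magnitude = np.hypot(gx, gy)
            row_projection = magnitude.sum(axis=1)   # one value per image row
            col_projection = magnitude.sum(axis=0)   # one value per image column
            return row_projection, col_projection

        # Example with a synthetic face-sized block.
        face = np.random.rand(128, 96)
        rows, cols = gradient_projections(face)
        print(int(rows.argmax()), int(cols.argmax()))  # rows/cols with strongest edges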

  15. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers–who use the system in their daily work–and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems–potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  16. Seeing Objects as Faces Enhances Object Detection.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2015-10-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.

  17. Seeing Objects as Faces Enhances Object Detection

    PubMed Central

    Watanabe, Katsumi

    2015-01-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness. PMID:27648219

  18. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-iSM multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely-used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, significantly improved the detection precision. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.
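
    A hedged sketch of this two-stage idea, using OpenCV's pre-trained Haar cascade followed by a separately trained deep classifier that rejects false positives; the `verify_face` callable below stands in for whatever CNN the reader trains, not the NLM model.

        import cv2

        def detect_faces_two_stage(image_bgr, verify_face):
            """Stage 1: Viola-Jones candidate detection. Stage 2: deep-learning verification.

            verify_face(crop) -> bool is assumed to be a trained classifier that
            returns True when the crop truly contains a face.
            """
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            accepted = []
            for (x, y, w, h) in candidates:
                crop = image_bgr[y:y + h, x:x + w]
                if verify_face(crop):          # drop Viola-Jones false positives
                    accepted.append((x, y, w, h))
            return accepted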

  19. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, focusing on the related theory and key technology of various preprocessing methods in the face detection process and on how different preprocessing methods affect recognition results when the KPCA method is used. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with morphological opening and closing (erosion and dilation) and an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that the kernel extension of the PCA algorithm, as a nonlinear feature extraction method, can under certain conditions make the extracted features represent the original image information better and thus achieve a higher recognition rate. In the image preprocessing stage, we found that different operations yield different results, and therefore different recognition rates in the recognition stage. In addition, the degree of the polynomial kernel used in kernel principal component analysis affects the recognition result.
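
    The YCbCr skin-segmentation step mentioned above can be sketched as a threshold on the chrominance channels, followed by morphological opening and closing. The threshold ranges below are commonly cited literature values, not necessarily those used in the paper.

        import cv2
        import numpy as np

        def skin_mask_ycbcr(image_bgr):
            """Segment candidate skin pixels by thresholding Cb and Cr.

            Typical literature ranges are roughly 77 <= Cb <= 127 and
            133 <= Cr <= 173; the paper's exact thresholds may differ.
            """
            ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV order: Y, Cr, Cb
            mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
            # Morphological opening and closing remove speckle noise and fill holes.
            kernel = np.ones((5, 5), np.uint8)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
            return mask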

  20. High-emulation mask recognition with high-resolution hyperspectral video capture system

    NASA Astrophysics Data System (ADS)

    Feng, Jiao; Fang, Xiaojing; Li, Shoufeng; Wang, Yongjin

    2014-11-01

    We present a method for distinguishing a human face from a high-emulation mask, which is increasingly used by criminals for activities such as stealing card numbers and passwords at ATMs. Traditional facial recognition techniques find it difficult to detect such camouflaged criminals. In this paper, we use a high-resolution hyperspectral video capture system to detect high-emulation masks. An RGB camera is used for traditional facial recognition. A prism and a gray-scale camera are used to capture spectral information of the observed face. Experiments show that a mask made of silica gel has a different spectral reflectance than human skin. As the multispectral image offers additional spectral information about physical characteristics, a high-emulation mask can be easily recognized.

  1. Novel face-detection method under various environments

    NASA Astrophysics Data System (ADS)

    Jing, Min-Quan; Chen, Ling-Hwei

    2009-06-01

    We propose a method to detect a face with different poses under various environments. On the basis of skin color information, skin regions are first extracted from an input image. Next, the shoulder part is cut out by using shape information and the head part is then identified as a face candidate. For a face candidate, a set of geometric features is applied to determine if it is a profile face. If not, then a set of eyelike rectangles extracted from the face candidate and the lighting distribution are used to determine if the face candidate is a nonprofile face. Experimental results show that the proposed method is robust under a wide range of lighting conditions, different poses, and races. The detection rate for the HHI face database is 93.68%. For the Champion face database, the detection rate is 95.15%.

  2. Expectations about person identity modulate the face-sensitive N170.

    PubMed

    Johnston, Patrick; Overell, Anne; Kaufman, Jordy; Robinson, Jonathan; Young, Andrew W

    2016-12-01

    Identifying familiar faces is a fundamentally important aspect of social perception that requires the ability to assign very different (ambient) images of a face to a common identity. The current consensus is that the brain processes face identity at approximately 250-300 msec following stimulus onset, as indexed by the N250 event related potential. However, using two experiments we show compelling evidence that where experimental paradigms induce expectations about person identity, changes in famous face identity are in fact detected at an earlier latency corresponding to the face-sensitive N170. In Experiment 1, using a rapid periodic stimulation paradigm presenting highly variable ambient images, we demonstrate robust effects of low frequency, periodic face-identity changes in N170 amplitude. In Experiment 2, we added infrequent aperiodic identity changes to show that the N170 was larger to both infrequent periodic and infrequent aperiodic identity changes than to high frequency identities. Our use of ambient stimulus images makes it unlikely that these effects are due to adaptation of low-level stimulus features. In line with current ideas about predictive coding, we therefore suggest that when expectations about the identity of a face exist, the visual system is capable of detecting identity mismatches at a latency consistent with the N170. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Typical and atypical neurodevelopment for face specialization: An fMRI study

    PubMed Central

    Joseph, Jane E.; Zhu, Xun; Gundran, Andrew; Davies, Faraday; Clark, Jonathan D.; Ruble, Lisa; Glaser, Paul; Bhatt, Ramesh S.

    2014-01-01

    Individuals with Autism Spectrum Disorder (ASD) and their relatives process faces differently from typically developed (TD) individuals. In an fMRI face-viewing task, TD and undiagnosed sibling (SIB) children (5–18 years) showed face specialization in the right amygdala and ventromedial prefrontal cortex (vmPFC), with left fusiform and right amygdala face specialization increasing with age in TD subjects. SIBs showed extensive antero-medial temporal lobe activation for faces that was not present in any other group, suggesting a potential compensatory mechanism. In ASD, face specialization was minimal but increased with age in the right fusiform and decreased with age in the left amygdala, suggesting atypical development of a frontal-amygdala-fusiform system which is strongly linked to detecting salience and processing facial information. PMID:25479816

  4. Influence of quality of images recorded in far infrared on pattern recognition based on neural networks and Eigenfaces algorithm

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Kobel, Joanna; Podbielska, Halina

    2003-11-01

    This paper discusses the possibility of exploiting thermovision registration and artificial neural networks for facial recognition systems. A biometric system that is able to identify people from thermograms is presented. To identify a person we used the Eigenfaces algorithm. For face detection in the picture, a backpropagation neural network was designed. For this purpose, thermograms of 10 people in various external conditions were studied. The Eigenfaces algorithm calculated an average face, and then a set of characteristic features was produced for each studied person. The neural network has to detect the face in the image before it can actually be identified; we used five hidden layers for that purpose. It was shown that recognition errors depend on the feature extraction: for low-quality pictures the error was as high as 30%, whereas for pictures with good feature extraction, correct identification rates higher than 90% were obtained.
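
    The Eigenfaces step (computing an average face and a set of characteristic features per person) can be sketched with a small PCA; scikit-learn is used here as an illustrative substitute for the paper's own implementation, and the data below are random stand-ins.

        import numpy as np
        from sklearn.decomposition import PCA

        def build_eigenfaces(face_vectors, n_components=10):
            """face_vectors: (n_images, n_pixels) array of flattened thermograms.

            Returns the fitted PCA model; pca.mean_ is the average face and
            pca.components_ are the eigenfaces. New faces are identified by
            projecting them and comparing coefficients with each person's set.
            """
            pca = PCA(n_components=n_components)
            pca.fit(face_vectors)
            return pca

        def project(pca, face_vector):
            """Project one flattened face onto the eigenface space."""
            return pca.transform(face_vector.reshape(1, -1))[0]

        # Example with random stand-in data (10 images of 64x64 pixels).
        faces = np.random.rand(10, 64 * 64)
        model = build_eigenfaces(faces, n_components=5)
        print(project(model, faces[0]).shape)  # -> (5,)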

  5. Webcam mouse using face and eye tracking in various illumination environments.

    PubMed

    Lin, Yuan-Pin; Chao, Yi-Ping; Lin, Chung-Chih; Chen, Jyh-Horng

    2005-01-01

    Nowadays, due to enhancements in computer performance and the popularity of webcam devices, it has become possible to acquire users' gestures for human-computer interaction with a PC via webcam. However, the effects of illumination variation dramatically decrease the stability and accuracy of skin-based face tracking systems, especially on notebook or portable platforms. In this study we present an effective illumination recognition technique, combining a K-Nearest Neighbor (KNN) classifier and an adaptive skin model, to realize a real-time tracking system. We have demonstrated that the accuracy of face detection based on the KNN classifier is higher than 92% in various illumination environments. In real-time implementation, the system successfully tracks the user's face and eye features at 15 fps on standard notebook platforms. Although the KNN classifier is initialized with only five environments at the preliminary stage, the system permits users to define and add their favorite environments to the KNN for computer access. Finally, based on this efficient tracking algorithm, we have developed a "Webcam Mouse" system to control the PC cursor using face and eye tracking. Preliminary studies in "point and click" style PC web games also show promising applications in consumer electronics markets in the future.

  6. The NMDA antagonist ketamine and the 5-HT agonist psilocybin produce dissociable effects on structural encoding of emotional face expressions.

    PubMed

    Schmidt, André; Kometer, Michael; Bachmann, Rosilla; Seifritz, Erich; Vollenweider, Franz

    2013-01-01

    Both glutamate and serotonin (5-HT) play a key role in the pathophysiology of emotional biases. Recent studies indicate that the glutamate N-methyl-D-aspartate (NMDA) receptor antagonist ketamine and the 5-HT receptor agonist psilocybin are implicated in emotion processing. However, as yet, no study has systematically compared their contributions to emotional biases. This study used event-related potentials (ERPs) and signal detection theory to compare the effects of the NMDA (via S-ketamine) and 5-HT (via psilocybin) receptor systems on non-conscious or conscious emotional face processing biases. S-ketamine or psilocybin was administered to two groups of healthy subjects in a double-blind, within-subject, placebo-controlled design. We behaviorally assessed objective thresholds for non-conscious discrimination in all drug conditions. Electrophysiological responses to fearful, happy, and neutral faces were subsequently recorded with the face-specific P100 and N170 ERP. Both S-ketamine and psilocybin impaired the encoding of fearful faces as expressed by a reduced N170 over parieto-occipital brain regions. In contrast, while S-ketamine also impaired the encoding of happy facial expressions, psilocybin had no effect on the N170 in response to happy faces. This study demonstrates that the NMDA and 5-HT receptor systems differentially contribute to the structural encoding of emotional face expressions as expressed by the N170. These findings suggest that the assessment of early visual evoked responses might allow the detection of pharmacologically induced changes in emotional processing biases and thus provides a framework to study the pathophysiology of dysfunctional emotional biases.

  7. Near-infrared face recognition utilizing open CV software

    NASA Astrophysics Data System (ADS)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and software developed by the authors are successfully combined to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, a NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection, and recognition. Focusing our efforts on the near-infrared spectrum allows the low-budget system to operate covertly while still allowing accurate face recognition. In doing so, a valuable capability has been developed which presents potential benefits for future civilian and military security and surveillance operations.

  8. Detection of emotional faces: salient physical features guide effective visual search.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  9. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  10. An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera

    NASA Astrophysics Data System (ADS)

    Kumar, K. S. Chidanand; Bhowmick, Brojeshwar

    A driver drowsiness identification system is proposed that generates an alarm when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is first determined using the Euler number and template matching. Pupils are then located in the face region. In subsequent frames of the video, the pupils are tracked in order to determine whether the eyes are open or closed. If the eyes are closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
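
    The decision rule described here (eyes closed for several consecutive frames implies fatigue) can be captured in a few lines; the threshold below is an arbitrary illustrative value, not the one used by the authors.

        def drowsiness_alarm(eye_closed_per_frame, threshold=15):
            """Return True if the eyes stay closed for `threshold` consecutive frames.

            eye_closed_per_frame: iterable of booleans, one per video frame,
            produced by the pupil detection / tracking stage.
            """
            consecutive = 0
            for closed in eye_closed_per_frame:
                consecutive = consecutive + 1 if closed else 0
                if consecutive >= threshold:
                    return True
            return False

        # Example: 20 closed-eye frames in a row trigger the alarm.
        frames = [False] * 10 + [True] * 20
        print(drowsiness_alarm(frames))  # -> True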

  11. Design of DroDeASys (Drowsy Detection and Alarming System)

    NASA Astrophysics Data System (ADS)

    Juvale, Hrishikesh B.; Mahajan, Anant S.; Bhagwat, Ashwin A.; Badiger, Vishal T.; Bhutkar, Ganesh D.; Dhabe, Priyadarshan S.; Dhore, Manikrao L.

    The paper discusses the Drowsy Detection & Alarming System that has been developed using a non-intrusive approach. The system is developed to detect drivers dozing at the wheel during night-time driving. The system uses a small infra-red night vision camera that points directly towards the driver's face and monitors the driver's eyes in order to detect fatigue. When fatigue is detected, a warning signal is issued to alert the driver. This paper discusses the algorithms that have been used to detect drowsiness. The decision whether the driver is dozing or not is taken depending on whether the eyes are open for a specific number of frames. If the eyes are found to be closed for a certain number of consecutive frames, the driver is alerted with an alarm.

  12. Bridging the gap between real-life data and simulated data by providing a highly realistic fall dataset for evaluating camera-based fall detection algorithms.

    PubMed

    Baldewijns, Greet; Debard, Glen; Mertes, Gert; Vanrumste, Bart; Croonenborghs, Tom

    2016-03-01

    Fall incidents are an important health hazard for older adults. Automatic fall detection systems can reduce the consequences of a fall incident by assuring that timely aid is given. The development of these systems is therefore getting a lot of research attention. Real-life data which can help evaluate the results of this research is however sparse. Moreover, research groups that have this type of data are not at liberty to share it. Most research groups thus use simulated datasets. These simulation datasets, however, often do not incorporate the challenges the fall detection system will face when implemented in real-life. In this Letter, a more realistic simulation dataset is presented to fill this gap between real-life data and currently available datasets. It was recorded while re-enacting real-life falls recorded during previous studies. It incorporates the challenges faced by fall detection algorithms in real life. A fall detection algorithm from Debard et al. was evaluated on this dataset. This evaluation showed that the dataset possesses extra challenges compared with other publicly available datasets. In this Letter, the dataset is discussed as well as the results of this preliminary evaluation of the fall detection algorithm. The dataset can be downloaded from www.kuleuven.be/advise/datasets.

  13. Automated macromolecular crystal detection system and method

    DOEpatents

    Christian, Allen T [Tracy, CA; Segelke, Brent [San Ramon, CA; Rupp, Bernard [Livermore, CA; Toppani, Dominique [Fontainebleau, FR

    2007-06-05

    An automated macromolecular method and system for detecting crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected from the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are subsequently geometrically evaluated with respect to each other to identify any crystal-like qualities such as, for example, parallel lines, facing each other, similarity in length, and relative proximity. And from the evaluation a determination is made as to whether crystals are present in each image.

  14. Effects of an aft facing step on the surface of a laminar flow glider wing

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Saiki, Neal

    1993-01-01

    A motor glider was used to perform a flight test study on the effects of aft facing steps in a laminar boundary layer. This study focuses on two dimensional aft facing steps oriented spanwise to the flow. The size and location of the aft facing steps were varied in order to determine the critical size that will force premature transition. Transition over a step was found to be primarily a function of Reynolds number based on step height. Both of the step height Reynolds numbers for premature and full transition were determined. A hot film anemometry system was used to detect transition.
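
    Since transition was found to scale primarily with the Reynolds number based on step height, that quantity is simply Re_h = U·h/ν. A small worked example follows, with illustrative glider-like values rather than the actual flight-test conditions.

        def step_height_reynolds(velocity_m_s, step_height_m, kinematic_viscosity_m2_s=1.46e-5):
            """Reynolds number based on step height: Re_h = U * h / nu.

            The default kinematic viscosity is approximately that of
            sea-level air; the flight-test values are not reproduced here.
            """
            return velocity_m_s * step_height_m / kinematic_viscosity_m2_s

        # Example: 30 m/s local flow over a 0.5 mm aft-facing step.
        print(round(step_height_reynolds(30.0, 0.0005)))  # -> about 1027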

  15. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer-vision-aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous works on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveal that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize the face from such a subject-specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding, followed by ingenious adaptive methods to leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting the temperature difference between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.

  16. Toward automated face detection in thermal and polarimetric thermal imagery

    NASA Astrophysics Data System (ADS)

    Gordon, Christopher; Acosta, Mark; Short, Nathan; Hu, Shuowen; Chan, Alex L.

    2016-05-01

    Visible-spectrum face detection algorithms perform reliably under controlled lighting conditions. However, variations in illumination and the application of cosmetics can distort the features used by common face detectors, thereby degrading their detection performance. Thermal and polarimetric thermal facial imaging are relatively invariant to illumination and robust to the application of makeup, because they measure emitted radiation instead of reflected light. The objective of this work is to evaluate a government off-the-shelf wavelet-based naïve-Bayes face detection algorithm and a commercial off-the-shelf Viola-Jones cascade face detection algorithm on face imagery acquired in different spectral bands. New classifiers were trained using the Viola-Jones cascade object detection framework with preprocessed facial imagery. Preprocessing using Difference of Gaussians (DoG) filtering reduces the modality gap between facial signatures across the different spectral bands, thus enabling more correlated histogram of oriented gradients (HOG) features to be extracted from the preprocessed thermal and visible face images. Since the availability of training data is much more limited in the thermal spectrum than in the visible spectrum, it is not feasible to train a robust multi-modal face detector using thermal imagery alone. A large training dataset was therefore constituted from DoG-filtered visible and thermal imagery and used to generate a custom-trained Viola-Jones detector. A 40% increase in face detection rate was achieved on a testing dataset, as compared to the performance of a pre-trained baseline face detector. Insights gained in this research are valuable for the development of more robust multi-modal face detectors.
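
    The Difference-of-Gaussians preprocessing mentioned here can be sketched as follows; the sigma values are illustrative defaults, not those used in the study.

        import cv2
        import numpy as np

        def difference_of_gaussians(gray, sigma_small=1.0, sigma_large=2.0):
            """Band-pass filter a face image by subtracting two Gaussian blurs.

            Suppresses high-frequency sensor noise and slow illumination or
            emission gradients, which reduces the appearance gap between
            visible and thermal face imagery before HOG features are extracted.
            """
            img = gray.astype(np.float32)
            blur_small = cv2.GaussianBlur(img, (0, 0), sigma_small)
            blur_large = cv2.GaussianBlur(img, (0, 0), sigma_large)
            dog = blur_small - blur_large
            # Rescale to the 0-255 range expected by the downstream detector.
            return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)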

  17. Detection system for concentration quantization of colloidal-gold test strips based on embedded and image technology

    USDA-ARS?s Scientific Manuscript database

    Facing the increasing food safety issues, Chinese government has been carrying out compulsory tests on food to meet the requirements of domestic and foreign markets. Colloidal-gold test strips using the colorimetric principle are widely used for rapid qualitative detection of harmful residues in fo...

  18. System to Detect Racial-Based Bullying through Gamification.

    PubMed

    Álvarez-Bermejo, José A; Belmonte-Ureña, Luis J; Martos-Martínez, Africa; Barragán-Martín, Ana B; Del Mar Simón-Marquez, María

    2016-01-01

    Prevention and detection of bullying due to racial stigma was studied in school contexts using a system designed following "gamification" principles and integrating less usual elements, such as social interaction, augmented reality and cell phones in educational scenarios. "Grounded Theory" and "User Centered Design" were employed to explore coexistence inside and outside the classroom in terms of preferences and distrust in several areas of action and social frameworks of activity, and to direct the development of a cell phone app for early detection of school bullying scenarios. One hundred and fifty-one interviews were given at five schools selected for their high multiracial percentage and conflict. The most outstanding results were structural, that is the distribution of the classroom group by type of activity and subject being dealt with. Furthermore, in groups over 12 years of age, the relational structures in the classroom in the digital settings in which they participated with their cell phones did not reoccur, because face-to-face and virtual interaction between students with the supervision and involvement of the teacher combined to detect bullying caused by racial discrimination.

  19. System to Detect Racial-Based Bullying through Gamification

    PubMed Central

    Álvarez-Bermejo, José A.; Belmonte-Ureña, Luis J.; Martos-Martínez, Africa; Barragán-Martín, Ana B.; del Mar Simón-Marquez, María

    2016-01-01

    Prevention and detection of bullying due to racial stigma was studied in school contexts using a system designed following “gamification” principles and integrating less usual elements, such as social interaction, augmented reality and cell phones in educational scenarios. “Grounded Theory” and “User Centered Design” were employed to explore coexistence inside and outside the classroom in terms of preferences and distrust in several areas of action and social frameworks of activity, and to direct the development of a cell phone app for early detection of school bullying scenarios. One hundred and fifty-one interviews were given at five schools selected for their high multiracial percentage and conflict. The most outstanding results were structural, that is the distribution of the classroom group by type of activity and subject being dealt with. Furthermore, in groups over 12 years of age, the relational structures in the classroom in the digital settings in which they participated with their cell phones did not reoccur, because face-to-face and virtual interaction between students with the supervision and involvement of the teacher combined to detect bullying caused by racial discrimination. PMID:27933006

  20. Rapid prototyping of SoC-based real-time vision system: application to image preprocessing and face detection

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Alfalou, Ayman

    2017-05-01

    The major goal of this paper is to investigate the Multi-CPU/FPGA SoC (System on Chip) design flow and to transfer the know-how and skills needed to rapidly design embedded real-time vision systems. Our aim is to show how the use of these devices can benefit system-level integration, since they make simultaneous hardware and software development possible. We take facial detection and pretreatments as a case study since they have great potential to be used in several applications such as video surveillance, building access control, and criminal identification. The designed system uses the Xilinx Zedboard platform, which is the central element of the developed vision system. Video acquisition is performed using either a standard webcam connected to the Zedboard via the USB interface or several IP camera devices. Visualization of the video content and intermediate results is possible through an HDMI interface connected to an HD display. The treatments embedded in the system are as follows: (i) pre-processing such as edge detection, implemented in the ARM and in the reconfigurable logic; (ii) software implementation of motion detection and face detection using either Viola-Jones or LBP (Local Binary Pattern); and (iii) an application layer to select the processing application and display results in a web page. One uniquely interesting feature of the proposed system is that two functions have been developed to transmit data from and to the VDMA port. With the proposed optimization, the hardware implementation of the Sobel filter takes 27 ms and 76 ms for 640x480 and 720p resolutions, respectively. Hence, with the FPGA implementation, an acceleration of 5 times is obtained, which allows the processing of 37 fps and 13 fps for 640x480 and 720p resolutions, respectively.
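
    A software reference for the Sobel edge-detection pre-processing discussed above (an OpenCV sketch on the host side; the actual design runs this on the ARM cores and in the FPGA fabric):

        import cv2
        import numpy as np

        def sobel_edges(gray):
            """Compute the Sobel gradient magnitude, the edge map accelerated on the FPGA."""
            gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
            magnitude = cv2.magnitude(gx, gy)
            return np.uint8(np.clip(magnitude, 0, 255))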

  1. The Effect of Early Visual Deprivation on the Development of Face Detection

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Segalowitz, Sidney J.; Lewis, Terri L.; Dywan, Jane; Le Grand, Richard; Maurer, Daphne

    2013-01-01

    The expertise of adults in face perception is facilitated by their ability to rapidly detect that a stimulus is a face. In two experiments, we examined the role of early visual input in the development of face detection by testing patients who had been treated as infants for bilateral congenital cataract. Experiment 1 indicated that, at age 9 to…

  2. Association and dissociation between detection and discrimination of objects of expertise: Evidence from visual search.

    PubMed

    Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf

    2014-02-01

    Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct facilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories.

  3. Facial expression system on video using widrow hoff

    NASA Astrophysics Data System (ADS)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area that links human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method to train and test images with an Adaptive Linear Neuron (ADALINE) approach. The system performance is evaluated by two parameters, detection rate and false positive rate. The system accuracy depends on good technique and on the face positions used in training and testing.
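
    The Widrow-Hoff (LMS) rule used to train ADALINE units updates the weights in proportion to the prediction error. A minimal sketch follows; the learning rate, epoch count, and toy data are illustrative, not taken from the paper.

        import numpy as np

        def widrow_hoff_train(X, targets, learning_rate=0.05, epochs=200):
            """Train a single ADALINE unit with the Widrow-Hoff (LMS) rule.

            X: (n_samples, n_features) input vectors (e.g. flattened face images).
            targets: (n_samples,) desired outputs.
            Weight update: w <- w + eta * (t - w.x) * x, applied per sample.
            """
            weights = np.zeros(X.shape[1])
            for _ in range(epochs):
                for x, t in zip(X, targets):
                    error = t - weights @ x
                    weights += learning_rate * error * x
            return weights

        # Tiny example: learn to output the first feature of each input.
        X = np.random.rand(20, 4)
        w = widrow_hoff_train(X, X[:, 0])
        print(np.round(w, 2))  # approaches [1, 0, 0, 0]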

  4. Occurrence detection and selection procedures in healthcare facilities: a comparison across Canada and Brazil.

    PubMed

    Morita, Plinio P; Burns, Catherine M

    2011-01-01

    Healthcare institutions face high levels of risk on a daily basis. Efforts have been made to address these risks and turn this complex environment into a safer environment for patients, staff, and visitors. However, healthcare institutions need more advanced risk management tools to achieve the safety levels currently seen in other industries. One of these potential tools is occurrence investigation systems. In order to be investigated, occurrences must be detected and selected for investigation, since not all institutions have enough resources to investigate all occurrences. A survey was conducted in healthcare institutions in Canada and Brazil to evaluate currently used risk management tools, the difficulties faced, and the possibilities for improvement. The findings include detectability difficulties, lack of resources, lack of support, and insufficient staff involvement.

  5. Experimental evidence that primate trichromacy is well suited for detecting primate social colour signals.

    PubMed

    Hiramatsu, Chihiro; Melin, Amanda D; Allen, William L; Dubuc, Constance; Higham, James P

    2017-06-14

    Primate trichromatic colour vision has been hypothesized to be well tuned for detecting variation in facial coloration, which could be due to selection on either signal wavelengths or the sensitivities of the photoreceptors themselves. We provide one of the first empirical tests of this idea by asking whether, when compared with other visual systems, the information obtained through primate trichromatic vision confers an improved ability to detect the changes in facial colour that female macaque monkeys exhibit when they are proceptive. We presented pairs of digital images of faces of the same monkey to human observers and asked them to select the proceptive face. We tested images that simulated what would be seen by common catarrhine trichromatic vision, two additional trichromatic conditions and three dichromatic conditions. Performance under conditions of common catarrhine trichromacy, and trichromacy with narrowly separated LM cone pigments (common in female platyrrhines), was better than for evenly spaced trichromacy or for any of the dichromatic conditions. These results suggest that primate trichromatic colour vision confers excellent ability to detect meaningful variation in primate face colour. This is consistent with the hypothesis that social information detection has acted on either primate signal spectral reflectance or photoreceptor spectral tuning, or both. © 2017 The Authors.

  6. Experimental evidence that primate trichromacy is well suited for detecting primate social colour signals

    PubMed Central

    Higham, James P.

    2017-01-01

    Primate trichromatic colour vision has been hypothesized to be well tuned for detecting variation in facial coloration, which could be due to selection on either signal wavelengths or the sensitivities of the photoreceptors themselves. We provide one of the first empirical tests of this idea by asking whether, when compared with other visual systems, the information obtained through primate trichromatic vision confers an improved ability to detect the changes in facial colour that female macaque monkeys exhibit when they are proceptive. We presented pairs of digital images of faces of the same monkey to human observers and asked them to select the proceptive face. We tested images that simulated what would be seen by common catarrhine trichromatic vision, two additional trichromatic conditions and three dichromatic conditions. Performance under conditions of common catarrhine trichromacy, and trichromacy with narrowly separated LM cone pigments (common in female platyrrhines), was better than for evenly spaced trichromacy or for any of the dichromatic conditions. These results suggest that primate trichromatic colour vision confers excellent ability to detect meaningful variation in primate face colour. This is consistent with the hypothesis that social information detection has acted on either primate signal spectral reflectance or photoreceptor spectral tuning, or both. PMID:28615496

  7. Framework for objective evaluation of privacy filters

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Melle, Andrea; Dugelay, Jean-Luc; Ebrahimi, Touradj

    2013-09-01

    Extensive adoption of video surveillance, affecting many aspects of our daily lives, alarms the public about the increasing invasion into personal privacy. To address these concerns, many tools have been proposed for protection of personal privacy in image and video. However, little is understood regarding the effectiveness of such tools and especially their impact on the underlying surveillance tasks, leading to a tradeoff between the preservation of privacy offered by these tools and the intelligibility of activities under video surveillance. In this paper, we investigate this privacy-intelligibility tradeoff by proposing an objective framework for the evaluation of privacy filters. We apply the proposed framework to a use case where the privacy of people is protected by obscuring faces, assuming an automated video surveillance system. We used several popular privacy protection filters, such as blurring, pixelization, and masking, and applied them with varying strengths to people's faces from different public datasets of video surveillance footage. Accuracy of a face detection algorithm was used as a measure of intelligibility (a face should be detected to perform a surveillance task), and accuracy of a face recognition algorithm as a measure of privacy (a specific person should not be identified). Under these conditions, after application of an ideal privacy protection tool, an obfuscated face would be visible as a face but would not be correctly identified by the recognition algorithm. The experiments demonstrate that, in general, an increase in the strength of the privacy filters under consideration leads to an increase in privacy (i.e., reduction in recognition accuracy) and to a decrease in intelligibility (i.e., reduction in detection accuracy). Masking is also shown to be the most favorable filter across all tested datasets.
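
    Of the privacy filters evaluated, pixelization is simple to sketch (the block size below is an illustrative strength parameter; the study varied filter strength over a range):

        import cv2

        def pixelate_region(image_bgr, box, block_size=8):
            """Pixelate a detected face region to obscure identity.

            box: (x, y, w, h) face bounding box from a detector.
            The region is downsampled to coarse blocks and upsampled back,
            which removes the fine detail needed for face recognition while
            often leaving enough structure for face detection.
            """
            x, y, w, h = box
            face = image_bgr[y:y + h, x:x + w]
            small = cv2.resize(face, (max(1, w // block_size), max(1, h // block_size)),
                               interpolation=cv2.INTER_LINEAR)
            image_bgr[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                                     interpolation=cv2.INTER_NEAREST)
            return image_bgr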

  8. Method and apparatus for monitoring the flow of mercury in a system

    DOEpatents

    Grossman, Mark W.

    1987-01-01

    An apparatus and method for monitoring the flow of mercury in a system. The equipment enables the entrainment of the mercury in a carrier gas e.g., an inert gas, which passes as mercury vapor between a pair of optically transparent windows. The attenuation of the emission is indicative of the quantity of mercury (and its isotopes) in the system. A 253.7 nm light is shone through one of the windows and the unabsorbed light is detected through the other window. The absorption of the 253.7 nm light is thereby measured whereby the quantity of mercury passing between the windows can be determined. The apparatus includes an in-line sensor for measuring the quantity of mercury. It includes a conduit together with a pair of apertures disposed in a face to face relationship and arranged on opposite sides of the conduit. A pair of optically transparent windows are disposed upon a pair of viewing tubes. A portion of each of the tubes is disposed inside of the conduit and within each of the apertures. The two windows are disposed in a face to face relationship on the ends of the viewing tubes and the entire assembly is hermetically sealed from the atmosphere whereby when 253.7 nm ultraviolet light is shone through one of the windows and detected through the other, the quantity of mercury which is passing by can be continuously monitored due to absorption which is indicated by attenuation of the amplitude of the observed emission.

  9. Effective connectivities of cortical regions for top-down face processing: A Dynamic Causal Modeling study

    PubMed Central

    Li, Jun; Liu, Jiangang; Liang, Jimin; Zhang, Hongchuan; Zhao, Jizheng; Rieth, Cory A.; Huber, David E.; Li, Wu; Shi, Guangming; Ai, Lin; Tian, Jie; Lee, Kang

    2013-01-01

    To study top-down face processing, the present study used an experimental paradigm in which participants detected non-existent faces in pure noise images. Conventional BOLD signal analysis identified three regions involved in this illusory face detection. These regions included the left orbitofrontal cortex (OFC) in addition to the right fusiform face area (FFA) and right occipital face area (OFA), both of which were previously known to be involved in both top-down and bottom-up processing of faces. We used Dynamic Causal Modeling (DCM) and Bayesian model selection to further analyze the data, revealing both intrinsic and modulatory effective connectivities among these three cortical regions. Specifically, our results support the claim that the orbitofrontal cortex plays a crucial role in the top-down processing of faces by regulating the activities of the occipital face area, and the occipital face area in turn detects the illusory face features in the visual stimuli and then provides this information to the fusiform face area for further analysis. PMID:20423709

  10. A Viola-Jones based hybrid face detection framework

    NASA Astrophysics Data System (ADS)

    Murphy, Thomas M.; Broussard, Randy; Schultz, Robert; Rakvic, Ryan; Ngo, Hau

    2013-12-01

    Improvements in face detection performance would benefit many applications. The OpenCV library implements a standard solution, the Viola-Jones detector, with a statistically boosted rejection cascade of binary classifiers. Empirical evidence has shown that Viola-Jones underdetects in some instances. This research shows that a truncated cascade augmented by a neural network could recover these undetected faces. A hybrid framework is constructed, with a truncated Viola-Jones cascade followed by an artificial neural network that refines the face decision. The truncation stage is selected so that, optimally, it captures all faces and allows the neural network to remove the false alarms. A feedforward backpropagation network with one hidden layer is trained to discriminate faces based upon the thresholding (detection) values of intermediate stages of the full rejection cascade. A clustering algorithm is used as a precursor to the neural network, to group significantly overlapping detections. Evaluated on the CMU/VASC Image Database, comparison with an unmodified OpenCV approach shows: (1) a 37% increase in detection rates if constrained by the requirement of no increase in false alarms, (2) a 48% increase in detection rates if some additional false alarms are tolerated, and (3) an 82% reduction in false alarms with no reduction in detection rates. These results demonstrate improved face detection and could address the need for such improvement in various applications.
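
    OpenCV's public CascadeClassifier API does not expose per-stage truncation, so the following sketch only approximates the idea: a deliberately permissive detection pass stands in for the truncated cascade, and a small, separately trained classifier (a hypothetical scikit-learn-style network) removes the false alarms:

        import cv2
        import numpy as np

        # Assumed inputs: a Haar cascade file shipped with OpenCV and a small
        # feedforward network 'refine_net' trained elsewhere to score candidates.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect_with_refinement(gray, refine_net):
            # Permissive settings stand in for a truncated cascade: more
            # candidates, more false alarms, fewer missed faces.
            candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=1)
            kept = []
            for (x, y, w, h) in candidates:
                patch = cv2.resize(gray[y:y+h, x:x+w], (24, 24)).flatten() / 255.0
                # refine_net.predict_proba is assumed to return P(face) for the patch
                if refine_net.predict_proba([patch])[0][1] > 0.5:
                    kept.append((x, y, w, h))
            return kept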

  11. Experimental comparisons of face-to-face and anonymous real-time team competition in a networked gaming learning environment.

    PubMed

    Yu, Fu-Yun; Han, Chialing; Chan, Tak-Wai

    2008-08-01

    This study investigates the impact of anonymous, computerized, synchronized team competition on students' motivation, satisfaction, and interpersonal relationships. Sixty-eight fourth-graders participated in this study. A synchronous gaming learning system was developed to have dyads compete against each other in answering multiple-choice questions set in accordance with the school curriculum in two conditions (face-to-face and anonymous). The results showed that students who were exposed to the anonymous team competition condition responded significantly more positively than those in the face-to-face condition in terms of motivation and satisfaction at the 0.050 and 0.056 levels respectively. Although further studies regarding the effects of anonymous interaction in a networked gaming learning environment are imperative, the positive effects detected in this preliminary study indicate that anonymity is a viable feature for mitigating the negative effects that competition may inflict on motivation and satisfaction as reported in traditional face-to-face environments.

  12. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because the facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used; however, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference value and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages, and in this way we enhance the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained with different sets of features. The experimental results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images and discussed the results.
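
    A minimal sketch of a Multi-Block LBP code with an added comparison margin, which is one way to read the extension described above (the block geometry and margin value are illustrative and are not taken from the paper):

        import numpy as np

        def mb_lbp_code(image, x, y, bw, bh, margin=0.0):
            """Multi-Block LBP code for a 3x3 grid of bw x bh blocks whose
            top-left corner is (x, y). 'margin' adds a tolerance to the centre
            comparison (one reading of the paper's extension; value illustrative)."""
            # Integral image with a leading zero row/column for fast block sums.
            ii = np.pad(image.astype(np.float64).cumsum(axis=0).cumsum(axis=1),
                        ((1, 0), (1, 0)))

            def block_mean(bx, by):
                s = (ii[by + bh, bx + bw] - ii[by, bx + bw]
                     - ii[by + bh, bx] + ii[by, bx])
                return s / (bw * bh)

            centre = block_mean(x + bw, y + bh)
            # The 8 neighbouring blocks of the 3x3 grid, clockwise from top-left.
            neighbours = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
            code = 0
            for bit, (i, j) in enumerate(neighbours):
                if block_mean(x + i * bw, y + j * bh) >= centre + margin:
                    code |= 1 << bit
            return code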

  13. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  14. Audio-visual imposture

    NASA Astrophysics Data System (ADS)

    Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard

    2006-05-01

    A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are performed on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of the audio and video modalities for audio-visual speaker verification is compared with the face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with the prospect of experimenting on the PDAtabase newly developed within the scope of the SecurePhone project.
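
    A minimal sketch of the face-verification side under stated assumptions: scikit-learn's GaussianMixture stands in for the BECARS classifier, features are low-frequency 2D DCT coefficients of a normalized face patch, and the decision is a client-versus-world log-likelihood ratio (patch size, coefficient count, mixture size and threshold are all illustrative):

        import numpy as np
        from scipy.fftpack import dct
        from sklearn.mixture import GaussianMixture

        def dct_features(face_gray, keep=15):
            """Low-frequency 2D DCT coefficients of a normalised grayscale face patch."""
            c = dct(dct(face_gray.astype(float), axis=0, norm="ortho"),
                    axis=1, norm="ortho")
            return c[:keep, :keep].flatten()

        def train_model(faces, n_components=8):
            # GaussianMixture stands in for the BECARS classifier (assumption).
            feats = np.array([dct_features(f) for f in faces])
            return GaussianMixture(n_components=n_components,
                                   covariance_type="diag").fit(feats)

        def verify(face, client_gmm, world_gmm, threshold=0.0):
            """Accept if the client/world log-likelihood ratio exceeds a threshold."""
            f = dct_features(face).reshape(1, -1)
            return (client_gmm.score(f) - world_gmm.score(f)) > threshold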

  15. A Framework for People Re-Identification in Multi-Camera Surveillance Systems

    ERIC Educational Resources Information Center

    Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud

    2017-01-01

    People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…

  16. Face Detection Technique as Interactive Audio/Video Controller for a Mother-Tongue-Based Instructional Material

    NASA Astrophysics Data System (ADS)

    Guidang, Excel Philip B.; Llanda, Christopher John R.; Palaoag, Thelma D.

    2018-03-01

    Face Detection Technique as a strategy in controlling a multimedia instructional material was implemented in this study. Specifically, it achieved the following objectives: 1) developed a face detection application that controls an embedded mother-tongue-based instructional material for face-recognition configuration using Python; 2) determined the perceptions of the students using the Mutt Susan’s student app review rubric. The study concludes that face detection technique is effective in controlling an electronic instructional material. It can be used to change the method of interaction of the student with an instructional material. 90% of the students perceived the application to be a great app and 10% rated the application to be good.
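
    The paper does not detail the control logic, but a minimal sketch of the idea in Python (OpenCV face detection toggling a hypothetical media player object that exposes play() and pause()) could look like this:

        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def run_controller(player, camera_index=0):
            """'player' is a hypothetical media object exposing play() and pause()."""
            cap = cv2.VideoCapture(camera_index)
            playing = False
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                faces = cascade.detectMultiScale(gray, 1.2, 5)
                if len(faces) > 0 and not playing:
                    player.play()          # learner is facing the screen: resume
                    playing = True
                elif len(faces) == 0 and playing:
                    player.pause()         # no face detected: pause the material
                    playing = False
            cap.release()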

  17. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
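
    The classification-image computation that this approach extends can be sketched as follows: average the noise fields associated with strong responses and subtract the average associated with weak responses (the median split of response amplitudes used here is an illustrative choice, not the paper's exact procedure):

        import numpy as np

        def classification_image(noise_fields, responses):
            """noise_fields: (n_trials, h, w) white-noise stimuli;
            responses: (n_trials,) response amplitudes (behavioural or BOLD)."""
            responses = np.asarray(responses)
            high = responses > np.median(responses)    # illustrative median split
            return noise_fields[high].mean(axis=0) - noise_fields[~high].mean(axis=0)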

  18. A quick eye to anger: An investigation of a differential effect of facial features in detecting angry and happy expressions.

    PubMed

    Lo, L Y; Cheng, M Y

    2017-06-01

    Detection of angry and happy faces is generally found to be easier and faster than that of faces expressing emotions other than anger or happiness. This can be explained by the threatening account and the feature account. Few empirical studies have explored the interaction between these two accounts which are seemingly, but not necessarily, mutually exclusive. The present studies hypothesised that prominent facial features are important in facilitating the detection process of both angry and happy expressions; yet the detection of happy faces was more facilitated by the prominent features than angry faces. Results confirmed the hypotheses and indicated that participants reacted faster to the emotional expressions with prominent features (in Study 1) and the detection of happy faces was more facilitated by the prominent feature than angry faces (in Study 2). The findings are compatible with evolutionary speculation which suggests that the angry expression is an alarming signal of potential threats to survival. Compared to the angry faces, the happy faces need more salient physical features to obtain a similar level of processing efficiency. © 2015 International Union of Psychological Science.

  19. Effects of emotional and non-emotional cues on visual search in neglect patients: evidence for distinct sources of attentional guidance.

    PubMed

    Lucas, Nadia; Vuilleumier, Patrik

    2008-04-01

    In normal observers, visual search is facilitated for targets with salient attributes. We compared how two different types of cue (expression and colour) may influence search for face targets, in healthy subjects (n=27) and right brain-damaged patients with left spatial neglect (n=13). The target faces were defined by their identity (singleton among a crowd of neutral faces) but could either be neutral (like other faces), or have a different emotional expression (fearful or happy), or a different colour (red-tinted). Healthy subjects were the fastest for detecting the colour-cued targets, but also showed a significant facilitation for emotionally cued targets, relative to neutral faces differing from other distracter faces by identity only. Healthy subjects were also faster overall for target faces located on the left, as compared to the right side of the display. In contrast, neglect patients were slower to detect targets on the left (contralesional) relative to the right (ipsilesional) side. However, they showed the same pattern of cueing effects as healthy subjects on both sides of space; while their best performance was also found for faces cued by colour, they showed a significant advantage for faces cued by expression, relative to the neutral condition. These results indicate that despite impaired attention towards the left hemispace, neglect patients may still show an intact influence of both low-level colour cues and emotional expression cues on attention, suggesting that neural mechanisms responsible for these effects are partly separate from fronto-parietal brain systems controlling spatial attention during search.

  20. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV, the unlocking of electronic devices and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of robustness to subject position and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale Face Database B*), that the subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed, which is why compression techniques such as the wavelet transform are of interest. Furthermore, the approach leads to a low-complexity face detection stage compatible with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on the x86-based computer, 9 ms on a SECO board and 26 ms on a Raspberry Pi (model B).
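
    A minimal sketch of the storage idea using PyWavelets: three levels of 2D decomposition, keeping only the approximation band, which shrinks each stored face by a factor of roughly 2^(2K) = 64 for K = 3. The Haar wavelet and the nearest-neighbour matching shown here are assumptions for illustration; the paper's recognition stage is PCA-based.

        import numpy as np
        import pywt

        def face_signature(face_gray, levels=3, wavelet="haar"):
            """Keep only the level-K approximation band (roughly 1/64 of the
            original pixels for K = 3)."""
            coeffs = pywt.wavedec2(face_gray.astype(float), wavelet, level=levels)
            return coeffs[0]          # approximation coefficients

        def match(probe, gallery):
            """Nearest-neighbour matching on approximation coefficients
            (illustrative; gallery images are assumed to be the same size as the probe)."""
            p = face_signature(probe).ravel()
            dists = [np.linalg.norm(p - face_signature(g).ravel()) for g in gallery]
            return int(np.argmin(dists))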

  1. Sensitive periods for the functional specialization of the neural system for human face processing.

    PubMed

    Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide

    2013-10-15

    The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.

  2. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, as well as the red and green attributes in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is consistent with the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is highly robust and fast. It has wide application prospects in human-computer interaction, visual telephony, etc.
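
    A minimal sketch of the first level only, skin-like region segmentation in HSV space with OpenCV; the threshold values are illustrative and are not the ones used in the paper:

        import cv2
        import numpy as np

        def skin_mask(bgr_image):
            """Rough skin-likelihood mask from hue/saturation/value thresholds
            (threshold values are illustrative, not taken from the paper)."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            lower = np.array([0, 40, 60], dtype=np.uint8)     # H, S, V lower bounds
            upper = np.array([25, 180, 255], dtype=np.uint8)  # H, S, V upper bounds
            mask = cv2.inRange(hsv, lower, upper)
            # Clean up small speckles before extracting candidate regions.
            kernel = np.ones((5, 5), np.uint8)
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)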

  3. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    PubMed Central

    Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.

    2012-01-01

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355

  4. The Face in the Crowd Effect Unconfounded: Happy Faces, Not Angry Faces, Are More Efficiently Detected in Single- and Multiple-Target Visual Search Tasks

    ERIC Educational Resources Information Center

    Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca

    2011-01-01

    Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…

  5. Drowsy driver mobile application: Development of a novel scleral-area detection method.

    PubMed

    Mohammad, Faisal; Mahadas, Kausalendra; Hung, George K

    2017-10-01

    A reliable and practical app for mobile devices was developed to detect driver drowsiness. It consisted of two main components: a Haar cascade classifier, provided by a computer vision framework called OpenCV, for face/eye detection; and a dedicated JAVA software code for image processing that was applied over a masked region circumscribing the eye. A binary threshold was performed over the masked region to provide a quantitative measure of the number of white pixels in the sclera, which represented the state of eye opening. A continuously low white-pixel count would indicate drowsiness, thereby triggering an alarm to alert the driver. This system was successfully implemented on: (1) a static face image, (2) two subjects under laboratory conditions, and (3) a subject in a vehicle environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
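
    The paper's implementation uses Java for the image processing; the following Python sketch shows the same white-pixel idea, assuming the eye region has already been located by the Haar cascade (threshold values and frame counts are illustrative):

        import cv2
        import numpy as np

        def white_pixel_count(eye_gray, thresh=170):
            """Count bright (scleral) pixels in the eye region; 'thresh' is illustrative."""
            _, binary = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY)
            return int(np.count_nonzero(binary))

        def is_drowsy(recent_counts, open_eye_count, closed_frames_needed=15):
            """Flag drowsiness when the white-pixel count stays below half of the
            calibrated open-eye count for a run of consecutive frames."""
            recent = recent_counts[-closed_frames_needed:]
            if len(recent) < closed_frames_needed:
                return False
            return all(c < 0.5 * open_eye_count for c in recent)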

  6. An Implementation of Privacy Protection for a Surveillance Camera Using ROI Coding of JPEG2000 with Face Detection

    NASA Astrophysics Data System (ADS)

    Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi

    When a surveillance camera is used, there are cases in which privacy protection should be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method consists of ROI coding of JPEG2000 and a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.

  7. High precision, fast ultrasonic thermometer based on measurement of the speed of sound in air

    NASA Astrophysics Data System (ADS)

    Huang, K. N.; Huang, C. F.; Li, Y. C.; Young, M. S.

    2002-11-01

    This study presents a microcomputer-based ultrasonic system which measures air temperature by detecting variations in the speed of sound in the air. Changes in the speed of sound are detected as phase shift variations of a 40 kHz continuous ultrasonic wave. In a test embodiment, two 40 kHz ultrasonic transducers are set face to face at a constant distance. Phase angle differences between the transmitted and received signals are determined by an FPGA digital phase detector and then analyzed in an 89C51 single-chip microcomputer. Temperature is calculated and then sent to an LCD display and, optionally, to a PC. Accuracy of measurement is within 0.05 °C at an inter-transducer distance of 10 cm. Temperature variations are displayed within 10 ms. The main advantages of the proposed system are high resolution, rapid temperature measurement, noncontact measurement and easy implementation.
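
    The underlying relation can be sketched as follows: the measured phase shift over the fixed transducer spacing gives the travel time and hence the speed of sound, and a standard ideal-gas approximation converts the speed of sound to air temperature. The integer number of whole wavelengths between the transducers is assumed known from calibration, and the constants are textbook values, not taken from the paper:

        import math

        F = 40_000.0        # transducer frequency, Hz
        D = 0.10            # inter-transducer distance, m (10 cm, as in the paper)

        def temperature_from_phase(phase_rad, whole_cycles):
            """phase_rad: measured phase difference in radians; whole_cycles:
            integer number of full wavelengths between the transducers
            (assumed known from calibration)."""
            travel_time = (whole_cycles + phase_rad / (2 * math.pi)) / F
            c = D / travel_time                                  # speed of sound, m/s
            # Ideal-gas approximation: c = 331.3 * sqrt(1 + T/273.15), T in deg C.
            return 273.15 * ((c / 331.3) ** 2 - 1.0)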

  8. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  9. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    PubMed

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  10. Emergency face-mask removal effectiveness: a comparison of traditional and nontraditional football helmet face-mask attachment systems.

    PubMed

    Swartz, Erik E; Belmore, Keith; Decoster, Laura C; Armstrong, Charles W

    2010-01-01

    Football helmet face-mask attachment design changes might affect the effectiveness of face-mask removal. To compare the efficiency of face-mask removal between newly designed and traditional football helmets. Controlled laboratory study. Applied biomechanics laboratory. Twenty-five certified athletic trainers. The independent variable was face-mask attachment system on 5 levels: (1) Revolution IQ with Quick Release (QR), (2) Revolution IQ with Quick Release hardware altered (QRAlt), (3) traditional (Trad), (4) traditional with hardware altered (TradAlt), and (5) ION 4D (ION). Participants removed face masks using a cordless screwdriver with a back-up cutting tool or only the cutting tool for the ION. Investigators altered face-mask hardware to unexpectedly challenge participants during removal for traditional and Revolution IQ helmets. Participants completed each condition twice in random order and were blinded to hardware alteration. Removal success, removal time, helmet motion, and rating of perceived exertion (RPE). Time and 3-dimensional helmet motion were recorded. If the face mask remained attached at 3 minutes, the trial was categorized as unsuccessful. Participants rated each trial for level of difficulty (RPE). We used repeated-measures analyses of variance (α  =  .05) with follow-up comparisons to test for differences. Removal success was 100% (48 of 48) for QR, Trad, and ION; 97.9% (47 of 48) for TradAlt; and 72.9% (35 of 48) for QRAlt. Differences in time for face-mask removal were detected (F(4,20)  =  48.87, P  =  .001), with times ranging from 33.96 ± 14.14 seconds for QR to 99.22 ± 20.53 seconds for QRAlt. Differences were found in range of motion during face-mask removal (F(4,20)  =  16.25, P  =  .001), with range of motion from 10.10° ± 3.07° for QR to 16.91° ± 5.36° for TradAlt. Differences also were detected in RPE during face-mask removal (F(4,20)  =  43.20, P  =  .001), with participants reporting average perceived difficulty ranging from 1.44 ± 1.19 for QR to 3.68 ± 1.70 for TradAlt. The QR and Trad trials resulted in superior results. When trials required cutting loop straps, results deteriorated.

  11. Brain Network Activity During Face Perception: The Impact of Perceptual Familiarity and Individual Differences in Childhood Experience.

    PubMed

    Cloutier, Jasmin; Li, Tianyi; Mišic, Bratislav; Correll, Joshua; Berman, Marc G

    2017-09-01

    An extended distributed network of brain regions supports face perception. Face familiarity influences activity in brain regions involved in this network, but the impact of perceptual familiarity on this network has never been directly assessed with the use of partial least squares analysis. In the present work, we use this multivariate statistical analysis to examine how face-processing systems are differentially recruited by characteristics of the targets (i.e. perceptual familiarity and race) and of the perceivers (i.e. childhood interracial contact). Novel faces were found to preferentially recruit a large distributed face-processing network compared with perceptually familiar faces. Additionally, increased interracial contact during childhood led to decreased recruitment of distributed brain networks previously implicated in face perception, salience detection, and social cognition. Current results provide a novel perspective on the impact of cross-race exposure, suggesting that interracial contact early in life may dramatically shape the neural substrates of face perception generally. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  13. Presenting a model for dynamic facial expression changes in detecting drivers' drowsiness.

    PubMed

    Karchani, Mohsen; Mazloumi, Adel; Saraji, Gebraeil Nasl; Gharagozlou, Faramarz; Nahvi, Ali; Haghighi, Khosro Sadeghniiat; Abadi, Bahador Makki; Foroshani, Abbas Rahimi

    2015-01-01

    Drowsiness while driving is a major cause of accidents. A driver fatigue detection system that is designed to sound an alarm, when appropriate, can prevent many accidents that sometimes lead to the loss of life and property. In this paper, we classify drowsiness detection sensors and their strong and weak points. A compound model is proposed that uses image processing techniques to study the dynamic changes of the face in order to recognize drowsiness during driving.

  14. Real-time detection and discrimination of visual perception using electrocorticographic signals

    NASA Astrophysics Data System (ADS)

    Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.

    2018-06-01

    Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiment II and III, the real-time decoder correctly detected 73.7% responses to face, kanji and black computer stimuli and 74.8% responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance maximized for combined spatial-temporal information. The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected by their ECoG responses in real time within 500 ms with respect to stimulus onset.

  15. Right wing authoritarianism is associated with race bias in face detection

    PubMed Central

    Bret, Amélie; Beffara, Brice; McFadyen, Jessica; Mermillod, Martial

    2017-01-01

    Racial discrimination can be observed in a wide range of psychological processes, including even the earliest phases of face detection. It remains unclear, however, whether racially-biased low-level face processing is influenced by ideologies, such as right wing authoritarianism or social dominance orientation. In the current study, we hypothesized that socio-political ideologies such as these can substantially predict perceptual racial bias during early perception. To test this hypothesis, 67 participants detected faces within arrays of neutral objects. The faces were either Caucasian (in-group) or North African (out-group) and either had a neutral or angry expression. Results showed that participants with higher self-reported right-wing authoritarianism were more likely to show slower response times for detecting out-group vs. in-group faces. We interpreted our results according to the Dual Process Motivational Model and suggest that socio-political ideologies may foster early racial bias via attentional disengagement. PMID:28692705

  16. Deficient cortical face-sensitive N170 responses and basic visual processing in schizophrenia.

    PubMed

    Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y

    2016-01-01

    Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Directional templates for real-time detection of coronal axis rotated faces

    NASA Astrophysics Data System (ADS)

    Perez, Claudio A.; Estevez, Pablo A.; Garate, Patricio

    2004-10-01

    Real-time face and iris detection in video images has gained renewed attention because of multiple possible applications in studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing and multimedia retrieval. In this paper, a study is presented on using directional templates for the detection of faces rotated about the coronal axis. The templates are built by extracting the directional image information from the regions of the eyes, nose and mouth. The face position is determined by computing a line integral of the face directional image along the template; the line integral reaches a maximum when the template coincides with the face position. An improvement in localization selectivity is shown through the increased value of the line integral computed with the directional template. In addition, improvements in the line integral value with respect to face size and face rotation angle were also found. Based on these results, the new templates should improve selectivity and hence provide the means to restrict computation to fewer templates and to restrict the region of search during the face and eye tracking procedure. The proposed method runs in real time, is completely non-invasive and was applied with no background limitation under normal indoor illumination conditions.

  18. Spatial Mechanisms within the Dorsal Visual Pathway Contribute to the Configural Processing of Faces.

    PubMed

    Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G

    2017-08-01

    Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.

  19. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2011-04-19

    An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  20. Method and apparatus for monitoring the flow of mercury in a system

    DOEpatents

    Grossman, M.W.

    1987-12-15

    An apparatus and method for monitoring the flow of mercury in a system are disclosed. The equipment enables the entrainment of the mercury in a carrier gas e.g., an inert gas, which passes as mercury vapor between a pair of optically transparent windows. The attenuation of the emission is indicative of the quantity of mercury (and its isotopes) in the system. A 253.7 nm light is shone through one of the windows and the unabsorbed light is detected through the other window. The absorption of the 253.7 nm light is thereby measured whereby the quantity of mercury passing between the windows can be determined. The apparatus includes an in-line sensor for measuring the quantity of mercury. It includes a conduit together with a pair of apertures disposed in a face to face relationship and arranged on opposite sides of the conduit. A pair of optically transparent windows are disposed upon a pair of viewing tubes. A portion of each of the tubes is disposed inside of the conduit and within each of the apertures. The two windows are disposed in a face to face relationship on the ends of the viewing tubes and the entire assembly is hermetically sealed from the atmosphere whereby when 253.7 nm ultraviolet light is shone through one of the windows and detected through the other, the quantity of mercury which is passing by can be continuously monitored due to absorption which is indicated by attenuation of the amplitude of the observed emission. 4 figs.

  1. Efficient Mining and Detection of Sequential Intrusion Patterns for Network Intrusion Detection Systems

    NASA Astrophysics Data System (ADS)

    Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli

    In recent years, pervasive computing infrastructures have greatly improved the interaction between humans and systems. As we put more reliance on these computing infrastructures, we also face threats of network intrusion and other new forms of undesirable IT-based activities. Hence, network security has become an extremely important issue, which is closely connected with homeland security, business transactions, and people's daily life. Accurate and efficient intrusion detection technologies are required to safeguard network systems and the critical information transmitted in them. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and a traffic data set generated on a private LAN testbed show promising results, with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusion patterns.

  2. The relationship between visual search and categorization of own- and other-age faces.

    PubMed

    Craig, Belinda M; Lipp, Ottmar V

    2018-03-13

    Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target absent trials but not target present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.

  3. Faces do not capture special attention in children with autism spectrum disorder: a change blindness study.

    PubMed

    Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas children with ASD (n = 16) were equally fast in detecting changes in faces and objects. These results were replicated in Experiment 2 (n = 16 in children with ASD and 22 in typically developing children), which does not require face recognition skill. Results suggest that children with ASD lack an attentional bias toward others' faces, which could contribute to their atypical social orienting.

  4. The Face-to-Face Light Detection Paradigm: A New Methodology for Investigating Visuospatial Attention Across Different Face Regions in Live Face-to-Face Communication Settings.

    PubMed

    Thompson, Laura A; Malloy, Daniel M; Cone, John M; Hendrickson, David L

    2010-01-01

    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker's face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants make a secondary task light detection response. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods.

  5. The Face-to-Face Light Detection Paradigm: A New Methodology for Investigating Visuospatial Attention Across Different Face Regions in Live Face-to-Face Communication Settings

    PubMed Central

    Thompson, Laura A.; Malloy, Daniel M.; Cone, John M.; Hendrickson, David L.

    2009-01-01

    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker’s face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants make a secondary task light detection response. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods. PMID:21113354

  6. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  7. Multirotor micro air vehicle autonomous landing system based on image markers recognition

    NASA Astrophysics Data System (ADS)

    Skoczylas, Marcin; Gadomer, Lukasz; Walendziuk, Wojciech

    2017-08-01

    In this paper, the idea of an autonomous drone landing system based on the detection of different markers is presented. Safe autonomous landing is one of the major issues connected with drone missions. The idea of the proposed system is to detect the landing place, marked with an image called a marker, using an image recognition algorithm, and to head toward this place during the landing procedure. Choosing the marker that gives the recognition system the best quality is the main problem addressed in this paper. Seven markers are tested and compared. The achieved results are described and discussed.

  8. Presenting a model for dynamic facial expression changes in detecting drivers’ drowsiness

    PubMed Central

    Karchani, Mohsen; Mazloumi, Adel; Saraji, Gebraeil Nasl; Gharagozlou, Faramarz; Nahvi, Ali; Haghighi, Khosro Sadeghniiat; Abadi, Bahador Makki; Foroshani, Abbas Rahimi

    2015-01-01

    Drowsiness while driving is a major cause of accidents. A driver fatigue detection system that is designed to sound an alarm, when appropriate, can prevent many accidents that sometimes lead to the loss of life and property. In this paper, we classify drowsiness detection sensors and their strong and weak points. A compound model is proposed that uses image processing techniques to study the dynamic changes of the face in order to recognize drowsiness during driving. PMID:26120417

  9. Driver fatigue detection based on eye state.

    PubMed

    Lin, Lizong; Huang, Chao; Ni, Xiaopeng; Wang, Jiawen; Zhang, Hao; Li, Xiao; Qian, Zhiqin

    2015-01-01

    Nowadays, more and more traffic accidents occur because of driver fatigue. In order to reduce and prevent such accidents, a calculation method based on machine vision and using the PERCLOS (percentage of eye closure time) parameter was developed in this study. It determined whether a driver's eyes were in a fatigue state according to the PERCLOS value. The overall workflow included face detection and tracking, detection and location of the human eye, human eye tracking, eye state recognition, and driver fatigue testing. The key aspects of the detection system were the detection and location of the human eyes and the driver fatigue test. The simplified method of measuring the driver's PERCLOS value was to calculate the ratio of frames in which the eyes were closed to the total number of frames in a given period. If the proportion of closed-eye frames exceeded the set threshold, the system would alert the driver. Many experiments showed that, in addition to the simplicity of the detection algorithm, the rapid computing speed, and the high detection and recognition accuracies of the system, the system meets the real-time requirements of a driver fatigue detection system.
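
    A minimal sketch of the PERCLOS computation over a sliding window of per-frame eye states; the window length and alarm threshold are illustrative, and the per-frame eye state is assumed to come from the eye-state recognition step described above:

        from collections import deque

        class PerclosMonitor:
            """PERCLOS = fraction of frames in a recent window in which the eyes are closed."""

            def __init__(self, window_frames=900, alarm_threshold=0.4):
                self.window = deque(maxlen=window_frames)   # e.g. 30 s at 30 fps (illustrative)
                self.alarm_threshold = alarm_threshold      # illustrative value

            def update(self, eyes_closed):
                """eyes_closed: bool for the current frame, from the eye-state classifier."""
                self.window.append(bool(eyes_closed))
                perclos = sum(self.window) / len(self.window)
                return perclos, perclos > self.alarm_threshold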

  10. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can operate without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which achieves more accurate facial localization on a specific database; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on the concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  11. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    NASA Astrophysics Data System (ADS)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform a monocular 3D reconstruction technique that updates a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to one of a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to become familiar with regular faces and actions in order to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.

  12. Preserved search asymmetry in the detection of fearful faces among neutral faces in individuals with Williams syndrome revealed by measurement of both manual responses and eye tracking.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2017-01-01

    Individuals with Williams syndrome (WS) exhibit an atypical social phenotype termed hypersociability. One theory accounting for hypersociability presumes an atypical function of the amygdala, which processes fear-related information. However, evidence is lacking regarding the detection mechanisms of fearful faces in individuals with WS. Here, we introduce a visual search paradigm to elucidate the mechanisms for detecting fearful faces by evaluating search asymmetry, that is, whether reaction times are asymmetrical when the target and distractors are swapped. Eye movements can reflect subtle atypical attentional properties, whereas manual responses are unable to capture atypical attentional profiles toward faces in individuals with WS. Therefore, we measured both eye movements and manual responses of individuals with WS and typically developed children and adults in visual searching for a fearful face among neutral faces or a neutral face among fearful faces. Two task measures, namely reaction time and performance accuracy, were analyzed for each stimulus, as well as gaze behavior and the initial fixation onset latency. Overall, reaction times in the WS group and the mentally age-matched control group were significantly longer than those in the chronologically age-matched group. We observed a search asymmetry effect in all groups: when a neutral target facial expression was presented among fearful faces, reaction times were significantly prolonged in comparison with when a fearful target facial expression was displayed among neutral distractor faces. Furthermore, the first fixation onset latency of eye movements toward a target facial expression showed a tendency similar to that of the manual responses. Although overall responses in detecting fearful faces for individuals with WS are slower than those for control groups, search asymmetry was observed. Therefore, the cognitive mechanisms underlying the detection of fearful faces seem to be typical in individuals with WS. This finding is discussed with reference to the amygdala account explaining hypersociability in individuals with WS.

  13. Fabrication of fiber-optic localized surface plasmon resonance sensor and its application to detect antibody-antigen reaction of interferon-gamma

    NASA Astrophysics Data System (ADS)

    Jeong, Hyeon-Ho; Erdene, Norov; Lee, Seung-Ki; Jeong, Dae-Hong; Park, Jae-Hyoung

    2011-12-01

    A fiber-optic localized surface plasmon resonance (FO LSPR) sensor was fabricated by immobilizing gold nanoparticles (Au NPs) on the end-face of an optical fiber. When Au NPs were formed on the end-face of an optical fiber by chemical reaction, Au NP aggregation occurred and the Au NPs were immobilized in various forms such as monomers, dimers, and trimers. The component ratio of the Au NPs on the end-face of the fabricated FO LSPR sensor varied slightly even when the sensors were fabricated under the same conditions. Taking this phenomenon into account, the FO LSPR sensor was fabricated with high sensitivity by controlling the density of Au NPs. The resonance intensity of the fabricated sensors was also measured for different optical systems, and its effect on sensitivity was analyzed. Finally, for application as a biosensor, the sensor was used to detect the antibody-antigen reaction of interferon-gamma.

  14. What makes a cell face-selective: the importance of contrast

    PubMed Central

    Ohayon, Shay; Freiwald, Winrich A; Tsao, Doris Y

    2012-01-01

    Faces are robustly detected by computer vision algorithms that search for characteristic coarse contrast features. Here, we investigated whether face-selective cells in the primate brain exploit contrast features as well. We recorded from face-selective neurons in macaque inferotemporal cortex, while presenting a face-like collage of regions whose luminances were changed randomly. Modulating contrast combinations between regions induced activity changes ranging from no response to a response greater than that to a real face in 50% of cells. The critical stimulus factor determining response magnitude was contrast polarity, e.g., nose region brighter than left eye. Contrast polarity preferences were consistent across cells, suggesting a common computational strategy across the population, and matched features used by computer vision algorithms for face detection. Furthermore, most cells were tuned both for contrast polarity and for the geometry of facial features, suggesting cells encode information useful both for detection and recognition. PMID:22578507
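
    The contrast-polarity features referred to above are simple to express computationally. The sketch below compares mean luminances between facial regions and records the sign of each difference; the region layout and region pairs are chosen purely for illustration and are not the stimulus regions used in the study.

        import numpy as np

        # Illustrative facial regions given as (row_slice, col_slice) into a grayscale image.
        REGIONS = {
            "left_eye":  (slice(20, 35), slice(15, 35)),
            "right_eye": (slice(20, 35), slice(45, 65)),
            "nose":      (slice(35, 55), slice(30, 50)),
            "forehead":  (slice(5, 20),  slice(20, 60)),
        }

        # Illustrative ordered pairs; polarity is +1 if the first region is brighter.
        PAIRS = [("nose", "left_eye"), ("nose", "right_eye"), ("forehead", "left_eye")]

        def contrast_polarity_features(image):
            """Sign of the mean-luminance difference for each region pair."""
            means = {name: float(np.mean(image[rows, cols]))
                     for name, (rows, cols) in REGIONS.items()}
            return [np.sign(means[a] - means[b]) for a, b in PAIRS]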

  15. Effects of color information on face processing using event-related potentials and gamma oscillations.

    PubMed

    Minami, T; Goto, K; Kitazaki, M; Nakauchi, S

    2011-03-10

    In humans, face configuration, contour and color may affect face perception, which is important for social interactions. This study aimed to determine the effect of color information on face perception by measuring event-related potentials (ERPs) during the presentation of natural- and bluish-colored faces. Our results demonstrated that the amplitude of the N170 event-related potential, which correlates strongly with face processing, was higher in response to a bluish-colored face than to a natural-colored face. However, gamma-band activity was insensitive to the deviation from a natural face color. These results indicated that color information affects the N170 associated with a face detection mechanism, which suggests that face color is important for face detection. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.

  17. Skin Color Segmentation Using Coarse-to-Fine Region on Normalized RGB Chromaticity Diagram for Face Detection

    NASA Astrophysics Data System (ADS)

    Soetedjo, Aryuanto; Yamada, Koichi

    This paper describes a new color segmentation based on a normalized RGB chromaticity diagram for face detection. Face skin is extracted from color images using a coarse skin region with fixed boundaries followed by a fine skin region with variable boundaries. Two newly developed histograms that have prominent peaks of skin color and non-skin colors are employed to adjust the boundaries of the skin region. The proposed approach does not need a skin color model, which would depend on specific camera parameters and is usually limited to particular environmental conditions, and no sample images are required. The experimental results using color face images of various races under varying lighting conditions and complex backgrounds, obtained from four different resources on the Internet, show a high detection rate of 87%. The results of the detection rate and computation time are comparable to the well-known real-time face detection method proposed by Viola-Jones [11], [12].
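
    As an illustration of segmentation on the normalized RGB chromaticity diagram, the sketch below applies a fixed-boundary coarse skin region; the chromaticity bounds are generic values assumed for illustration and are not the paper's histogram-adjusted coarse-to-fine boundaries.

        import numpy as np

        def normalized_rg(image):
            """Map an RGB image (H x W x 3, uint8) to normalized r, g chromaticities."""
            rgb = image.astype(np.float64)
            total = rgb.sum(axis=2) + 1e-6          # avoid division by zero
            r = rgb[..., 0] / total
            g = rgb[..., 1] / total
            return r, g

        def coarse_skin_mask(image, r_range=(0.36, 0.55), g_range=(0.26, 0.36)):
            """Fixed-boundary coarse skin region; the bounds are illustrative only."""
            r, g = normalized_rg(image)
            return ((r >= r_range[0]) & (r <= r_range[1]) &
                    (g >= g_range[0]) & (g <= g_range[1]))

    A fine-skin stage in the spirit of the paper would then tighten these bounds per image using the peaks of the skin and non-skin histograms computed inside and outside the coarse mask.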

  18. A new front-face optical cell for measuring weak fluorescent emissions with time resolution in the picosecond time scale.

    PubMed

    Gryczynski, Z; Bucci, E

    1993-11-01

    Recent developments of ultrafast fluorimeters allow measuring time-resolved fluorescence on the picosecond time scale. This makes it possible to monitor lifetimes and anisotropy decays of highly quenched systems and of systems that contain fluorophores having lifetimes in the subnanosecond range, both of which emit weak signals. The combination of weak signals and very short lifetimes makes the measurements prone to distortions which are negligible in standard fluorescence experiments. To cope with these difficulties, we have designed a new optical cell for front-face optics which offers to the excitation beam a horizontal free liquid surface in the absence of interactions with optical windows. The new cell has been tested with probes of known lifetimes and anisotropies. It proved very useful in detecting tryptophan fluorescence in hemoglobin. If only diluted samples are available, which cannot be used in front-face optics, the regular square geometry can still be utilized by inserting light absorbers into a cuvette of 1 cm path length.

  19. Multi-microphone adaptive array augmented with visual cueing.

    PubMed

    Gibson, Paul L; Hedin, Dan S; Davies-Venn, Evelyn E; Nelson, Peggy; Kramer, Kevin

    2012-01-01

    We present the development of an audiovisual array that enables hearing aid users to converse with multiple speakers in reverberant environments with significant speech babble noise where their hearing aids do not function well. The system concept consists of a smartphone, a smartphone accessory, and a smartphone software application. The smartphone accessory concept is a multi-microphone audiovisual array in a form factor that allows attachment to the back of the smartphone. The accessory will also contain a low-power radio by which it can transmit audio signals to compatible hearing aids. The smartphone software application concept will use the smartphone's built-in camera to acquire images and perform real-time face detection using the built-in face detection support of the smartphone. The audiovisual beamforming algorithm uses the location of talking targets to improve the signal-to-noise ratio and consequently improve the user's speech intelligibility. Since the proposed array system leverages a handheld consumer electronic device, it will be portable and low cost. A PC-based experimental system was developed to demonstrate the feasibility of an audiovisual multi-microphone array, and these results are presented.

  20. Human face detection using motion and color information

    NASA Astrophysics Data System (ADS)

    Kim, Yang-Gyun; Bang, Man-Won; Park, Soon-Young; Choi, Kyoung-Ho; Hwang, Jeong-Hyun

    2008-02-01

    In this paper, we present a hardware implementation of a face detector for surveillance applications. To come up with a computationally cheap and fast algorithm with minimal memory requirements, motion and skin color information are fused. More specifically, a newly appeared object is first extracted by comparing the average Hue and Saturation values of the background image and the current image. Then, the result of skin color filtering of the current image is combined with the newly appeared object. Finally, labeling is performed to locate a true face region. The proposed system is implemented on an Altera Cyclone2 using Quartus II 6.1 and ModelSim 6.1. For the hardware description language (HDL), Verilog-HDL is used.
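
    A software sketch of the motion-plus-skin-color fusion described above is given below, using OpenCV; the per-pixel change measure, the HSV thresholds, and the decision rule are illustrative assumptions rather than the parameters of the hardware implementation.

        import numpy as np
        import cv2  # OpenCV, assumed available for color conversion and labeling

        def detect_face_regions(background_bgr, frame_bgr,
                                motion_thresh=15.0,
                                hue_range=(0, 25), sat_range=(40, 180)):
            """Fuse a change mask (vs. a background image) with a skin-color mask."""
            bg_hsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
            fr_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)

            # Newly appeared object: large hue/saturation change relative to the background.
            diff = np.abs(fr_hsv[..., :2] - bg_hsv[..., :2]).mean(axis=2)
            changed = diff > motion_thresh

            # Skin-color filtering of the current frame (illustrative H/S bounds).
            h, s = fr_hsv[..., 0], fr_hsv[..., 1]
            skin = (h >= hue_range[0]) & (h <= hue_range[1]) & \
                   (s >= sat_range[0]) & (s <= sat_range[1])

            # Combine the two masks and label connected components as face candidates.
            mask = (changed & skin).astype(np.uint8)
            n_labels, labels = cv2.connectedComponents(mask)
            return n_labels - 1, labels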

  1. Face processing pattern under top-down perception: a functional MRI study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Liang, Jimin; Tian, Jie; Liu, Jiangang; Zhao, Jizheng; Zhang, Hui; Shi, Guangming

    2009-02-01

    Although the top-down perceptual process plays an important role in face processing, its neural substrate is still puzzling because the top-down stream is difficult to extract from activation patterns contaminated by bottom-up face perception input. In the present study, a novel paradigm of instructing participants to detect faces from pure noise images is employed, which can efficiently eliminate the interference of bottom-up face perception in top-down face processing. By analyzing the map of functional connectivity with the right FFA, computed with conventional Pearson's correlation, a possible face processing pattern induced by top-down perception can be obtained. Apart from the brain areas of the bilateral fusiform gyrus (FG), left inferior occipital gyrus (IOG) and left superior temporal sulcus (STS), which are consistent with a core system in the distributed cortical network for face perception, activation induced by top-down face processing is also found in regions that include the anterior cingulate gyrus (ACC), right orbitofrontal cortex (OFC), left precuneus, right parahippocampal cortex, left dorsolateral prefrontal cortex (DLPFC), right frontal pole, bilateral premotor cortex, left inferior parietal cortex and bilateral thalamus. The results indicate that decision-making, attention, episodic memory retrieval and contextual associative processing networks cooperate with general face processing regions to process face information under top-down perception.

  2. Three-dimensional face pose detection and tracking using monocular videos: tool and application.

    PubMed

    Dornaika, Fadi; Raducanu, Bogdan

    2009-08-01

    Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods-the initialization and tracking-for enhancing the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.

  3. A Support System for the Electric Appliance Control Using Pose Recognition

    NASA Astrophysics Data System (ADS)

    Kawano, Takuya; Yamamoto, Kazuhiko; Kato, Kunihito; Hongo, Hitoshi

    In this paper, we propose an electric appliance control support system for aged and bedridden people using pose recognition. We propose a pose recognition system that distinguishes among seven poses of the user on the bed. First, the face and arm regions of the user are detected by using skin color. Our system focuses on a recognition region surrounding the face region. Next, the higher-order local autocorrelation features within the region are extracted. Linear discriminant analysis creates the coefficient matrix that can optimally distinguish among training data from the seven poses. Our algorithm can recognize the seven poses even if the subject wears different clothes and slightly shifts or slants on the bed. From the experimental results, our system achieved an accuracy rate of over 99%. We also show that it is possible to construct a user-friendly system in this way.
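
    For illustration, a minimal sketch of the feature-then-discriminant classification stage is shown below using scikit-learn; the simple block-mean features stand in for the higher-order local autocorrelation features used in the paper, whose masks are not reproduced here, and the data preparation is assumed.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def block_mean_features(region, grid=(8, 8)):
            """Illustrative stand-in features: mean intensity over a grid of blocks."""
            h, w = region.shape
            gh, gw = grid
            feats = [region[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw].mean()
                     for i in range(gh) for j in range(gw)]
            return np.array(feats)

        # X_train: one feature vector per labeled image of the recognition region,
        # y_train: pose labels 0..6 for the seven poses (assumed prepared elsewhere).
        def train_pose_classifier(X_train, y_train):
            clf = LinearDiscriminantAnalysis()
            clf.fit(X_train, y_train)
            return clf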

  4. Automatic Processing of Changes in Facial Emotions in Dysphoria: A Magnetoencephalography Study.

    PubMed

    Xu, Qianru; Ruohonen, Elisa M; Ye, Chaoxiong; Li, Xueqiao; Kreegipuu, Kairi; Stefanics, Gabor; Luo, Wenbo; Astikainen, Piia

    2018-01-01

    It is not known to what extent the automatic encoding and change detection of peripherally presented facial emotion is altered in dysphoria. The negative bias in automatic face processing in particular has rarely been studied. We used magnetoencephalography (MEG) to record automatic brain responses to happy and sad faces in dysphoric (Beck's Depression Inventory ≥ 13) and control participants. Stimuli were presented in a passive oddball condition, which allowed potential negative bias in dysphoria at different stages of face processing (M100, M170, and M300) and alterations of change detection (visual mismatch negativity, vMMN) to be investigated. The magnetic counterpart of the vMMN was elicited at all stages of face processing, indexing automatic deviance detection in facial emotions. The M170 amplitude was modulated by emotion, response amplitudes being larger for sad faces than happy faces. Group differences were found for the M300, and they were indexed by two different interaction effects. At the left occipital region of interest, the dysphoric group had larger amplitudes for sad than happy deviant faces, reflecting negative bias in deviance detection, which was not found in the control group. On the other hand, the dysphoric group showed no vMMN to changes in facial emotions, while the vMMN was observed in the control group at the right occipital region of interest. Our results indicate that there is a negative bias in automatic visual deviance detection, but also a general change detection deficit in dysphoria.

  5. Investigating the Causal Role of rOFA in Holistic Detection of Mooney Faces and Objects: An fMRI-guided TMS Study.

    PubMed

    Bona, Silvia; Cattaneo, Zaira; Silvanto, Juha

    2016-01-01

    The right occipital face area (rOFA) is known to be involved in face discrimination based on local featural information. Whether this region is also involved in global, holistic stimulus processing is not known. We used fMRI-guided transcranial magnetic stimulation (TMS) to investigate whether rOFA is causally implicated in stimulus detection based on holistic processing, by the use of Mooney stimuli. Two studies were carried out: In Experiment 1, participants performed a detection task involving Mooney faces and Mooney objects; Mooney stimuli lack distinguishable local features and can be detected solely via holistic processing (i.e. at a global level) with top-down guidance from previously stored representations. Experiment 2 required participants to detect shapes which are recognized via bottom-up integration of local (collinear) Gabor elements and was performed to control for specificity of rOFA's implication in holistic detection. In Experiment 1, TMS over rOFA and rLO impaired detection of all stimulus categories, with no category-specific effect. In Experiment 2, shape detection was impaired when TMS was applied over rLO but not over rOFA. Our results demonstrate that rOFA is causally implicated in the type of top-down holistic detection required by Mooney stimuli and that such role is not face-selective. In contrast, rOFA does not appear to play a causal role in detection of shapes based on bottom-up integration of local components, demonstrating that its involvement in processing non-face stimuli is specific for holistic processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Emergency Face-Mask Removal Effectiveness: A Comparison of Traditional and Nontraditional Football Helmet Face-Mask Attachment Systems

    PubMed Central

    Swartz, Erik E.; Belmore, Keith; Decoster, Laura C.; Armstrong, Charles W.

    2010-01-01

    Context: Football helmet face-mask attachment design changes might affect the effectiveness of face-mask removal. Objective: To compare the efficiency of face-mask removal between newly designed and traditional football helmets. Design: Controlled laboratory study. Setting: Applied biomechanics laboratory. Participants: Twenty-five certified athletic trainers. Intervention(s): The independent variable was face-mask attachment system on 5 levels: (1) Revolution IQ with Quick Release (QR), (2) Revolution IQ with Quick Release hardware altered (QRAlt), (3) traditional (Trad), (4) traditional with hardware altered (TradAlt), and (5) ION 4D (ION). Participants removed face masks using a cordless screwdriver with a back-up cutting tool or only the cutting tool for the ION. Investigators altered face-mask hardware to unexpectedly challenge participants during removal for traditional and Revolution IQ helmets. Participants completed each condition twice in random order and were blinded to hardware alteration. Main Outcome Measure(s): Removal success, removal time, helmet motion, and rating of perceived exertion (RPE). Time and 3-dimensional helmet motion were recorded. If the face mask remained attached at 3 minutes, the trial was categorized as unsuccessful. Participants rated each trial for level of difficulty (RPE). We used repeated-measures analyses of variance (α = .05) with follow-up comparisons to test for differences. Results: Removal success was 100% (48 of 48) for QR, Trad, and ION; 97.9% (47 of 48) for TradAlt; and 72.9% (35 of 48) for QRAlt. Differences in time for face-mask removal were detected (F(4,20) = 48.87, P = .001), with times ranging from 33.96 ± 14.14 seconds for QR to 99.22 ± 20.53 seconds for QRAlt. Differences were found in range of motion during face-mask removal (F(4,20) = 16.25, P = .001), with range of motion from 10.10° ± 3.07° for QR to 16.91° ± 5.36° for TradAlt. Differences also were detected in RPE during face-mask removal (F(4,20) = 43.20, P = .001), with participants reporting average perceived difficulty ranging from 1.44 ± 1.19 for QR to 3.68 ± 1.70 for TradAlt. Conclusions: The QR and Trad trials resulted in superior results. When trials required cutting loop straps, results deteriorated. PMID:21062179

  7. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task-a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  8. Effects of threshold on single-target detection by using modified amplitude-modulated joint transform correlator

    NASA Astrophysics Data System (ADS)

    Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun

    2007-03-01

    Effects of threshold value on the detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.

  9. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    PubMed Central

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113

  10. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    PubMed

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.
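
    The classification stage of the pipeline described above (features from a localized iris region fed to an SVM that separates real from presentation-attack images) can be sketched as follows; the feature extractor and its parameters are illustrative placeholders, not the deep-learning-based and handcrafted features of the study.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def iris_features(iris_region):
            """Illustrative placeholder features from a localized NIR iris region:
            a coarse intensity histogram plus simple gradient statistics."""
            hist, _ = np.histogram(iris_region, bins=32, range=(0, 255), density=True)
            gy, gx = np.gradient(iris_region.astype(np.float64))
            grad_mag = np.hypot(gx, gy)
            return np.concatenate([hist, [grad_mag.mean(), grad_mag.std()]])

        # X: feature vectors for localized iris regions, y: 1 = real, 0 = presentation attack.
        def train_pad_classifier(X, y):
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
            clf.fit(X, y)
            return clf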

  11. What Faces Reveal: A Novel Method to Identify Patients at Risk of Deterioration Using Facial Expressions.

    PubMed

    Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo

    2018-07-01

    To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Prospective observational feasibility study. General ward patients in a London Community Hospital, United Kingdom. Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment risk prediction of current scoring systems.
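
    As a rough illustration of the kind of model described above (logistic regression of intensive care admission on binary face-display indicators plus the National Early Warning Score, summarized by a concordance statistic equivalent to the ROC AUC), a sketch with placeholder data follows; none of the values below are taken from the study's patients.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Columns: face_display_1, face_display_2, face_display_3 (0/1), NEWS score.
        # Placeholder rows; real data would come from the coded videos and clinical records.
        X = np.array([[1, 0, 0, 5],
                      [0, 1, 0, 7],
                      [0, 0, 1, 9],
                      [0, 0, 0, 2],
                      [1, 1, 0, 8],
                      [0, 0, 0, 3]])
        y = np.array([0, 1, 1, 0, 1, 0])   # 1 = admitted to intensive care

        model = LogisticRegression().fit(X, y)
        c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])  # concordance statistic
        print(f"C-index: {c_index:.2f}")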

  12. Dynamic MRI-based computer aided diagnostic systems for early detection of kidney transplant rejection: A survey

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Khalifa, Fahmi; Alansary, Amir; Soliman, Ahmed; Gimel'farb, Georgy; El-Baz, Ayman

    2013-10-01

    Early detection of renal transplant rejection is important to implement appropriate medical and immune therapy in patients with transplanted kidneys. In literature, a large number of computer-aided diagnostic (CAD) systems using different image modalities, such as ultrasound (US), magnetic resonance imaging (MRI), computed tomography (CT), and radionuclide imaging, have been proposed for early detection of kidney diseases. A typical CAD system for kidney diagnosis consists of a set of processing steps including: motion correction, segmentation of the kidney and/or its internal structures (e.g., cortex, medulla), construction of agent kinetic curves, functional parameter estimation, diagnosis, and assessment of the kidney status. In this paper, we survey the current state-of-the-art CAD systems that have been developed for kidney disease diagnosis using dynamic MRI. In addition, the paper addresses several challenges that researchers face in developing efficient, fast and reliable CAD systems for the early detection of kidney diseases.

  13. Brain Activity Related to the Judgment of Face-Likeness: Correlation between EEG and Face-Like Evaluation.

    PubMed

    Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki

    2018-01-01

    Faces represent important information for social communication, because social information, such as face-color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing.

  14. Brain Activity Related to the Judgment of Face-Likeness: Correlation between EEG and Face-Like Evaluation

    PubMed Central

    Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki

    2018-01-01

    Faces represent important information for social communication, because social information, such as face-color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing. PMID:29503612

  15. Pediatricians’ and health visitors’ views towards detection and management of maternal depression in the context of a weak primary health care system: a qualitative study

    PubMed Central

    2014-01-01

    Background The aim of the present study was to investigate, identify and interpret the views of pediatric primary healthcare providers on the recognition and management of maternal depression in the context of a weak primary healthcare system. Methods Twenty-six pediatricians and health visitors were selected by using purposive sampling. Face-to-face in-depth interviews of approximately 45 minutes duration were conducted. The data were analyzed by using the framework analysis approach, which includes five main steps: familiarization, identifying a thematic framework, indexing, charting, and mapping and interpretation. Results Fear of stigmatization came across as a key barrier to the detection and management of maternal depression. Pediatric primary health care providers linked their hesitation to start a conversation about depression with stigma. They highlighted that mothers were not receptive to discussing depression and accepting a referral. It was also revealed that the fragmented primary health care system and the lack of collaboration between health and mental health services have resulted in an unfavorable situation for maternal mental health. Conclusions Although pediatricians and health visitors are aware of maternal depression and the importance of maternal mental health, they fail to implement detection and management practices successfully. Inefficiently decentralized psychiatric services, as well as stigmatization and misconceptions about maternal depression, have impeded the integration of maternal mental health into primary care and prevent pediatric primary health care providers from implementing detection and management practices. PMID:24725738

  16. Dissociation of face-selective cortical responses by attention.

    PubMed

    Furey, Maura L; Tanskanen, Topi; Beauchamp, Michael S; Avikainen, Sari; Uutela, Kimmo; Hari, Riitta; Haxby, James V

    2006-01-24

    We studied attentional modulation of cortical processing of faces and houses with functional MRI and magnetoencephalography (MEG). MEG detected an early, transient face-selective response. Directing attention to houses in "double-exposure" pictures of superimposed faces and houses strongly suppressed the characteristic, face-selective functional MRI response in the fusiform gyrus. By contrast, attention had no effect on the M170, the early, face-selective response detected with MEG. Late (>190 ms) category-related MEG responses elicited by faces and houses, however, were strongly modulated by attention. These results indicate that hemodynamic and electrophysiological measures of face-selective cortical processing complement each other. The hemodynamic signals reflect primarily late responses that can be modulated by feedback connections. By contrast, the early, face-specific M170 that was not modulated by attention likely reflects a rapid, feed-forward phase of face-selective processing.

  17. Attractiveness judgments and discrimination of mommies and grandmas: perceptual tuning for young adult faces.

    PubMed

    Short, Lindsey A; Mondloch, Catherine J; Hackland, Anne T

    2015-01-01

    Adults are more accurate in detecting deviations from normality in young adult faces than in older adult faces despite exhibiting comparable accuracy in discriminating both face ages. This deficit in judging the normality of older faces may be due to reliance on a face space optimized for the dimensions of young adult faces, perhaps because of early and continuous experience with young adult faces. Here we examined the emergence of this young adult face bias by testing 3- and 7-year-old children on a child-friendly version of the task used to test adults. In an attractiveness judgment task, children viewed young and older adult face pairs; each pair consisted of an unaltered face and a distorted face of the same identity. Children pointed to the prettiest face, which served as a measure of their sensitivity to the dimensions on which faces vary relative to a norm. To examine whether biases in the attractiveness task were specific to deficits in referencing a norm or extended to impaired discrimination, we tested children on a simultaneous match-to-sample task with the same stimuli. Both age groups were more accurate in judging the attractiveness of young faces relative to older faces; however, unlike adults, the young adult face bias extended to the match-to-sample task. These results suggest that by 3 years of age, children's perceptual system is more finely tuned for young adult faces than for older adult faces, which may support past findings of superior recognition for young adult faces. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Sensitivity and Specificity of OCT Angiography to Detect Choroidal Neovascularization.

    PubMed

    Faridi, Ambar; Jia, Yali; Gao, Simon S; Huang, David; Bhavsar, Kavita V; Wilson, David J; Sill, Andrew; Flaxel, Christina J; Hwang, Thomas S; Lauer, Andreas K; Bailey, Steven T

    2017-01-01

    To determine the sensitivity and specificity of optical coherence tomography angiography (OCTA) in the detection of choroidal neovascularization (CNV) in age-related macular degeneration (AMD). Prospective case series. Prospective series of seventy-two eyes were studied, which included eyes with treatment-naive CNV due to AMD, non-neovascular AMD, and normal controls. All eyes underwent OCTA with a spectral domain (SD) OCT (Optovue, Inc.). The 3D angiogram was segmented into separate en face views including the inner retinal angiogram, outer retinal angiogram, and choriocapillaris angiogram. Detection of abnormal flow in the outer retina served as candidate CNV with OCTA. Masked graders reviewed structural OCT alone, en face OCTA alone, and en face OCTA combined with cross-sectional OCTA for the presence of CNV. The sensitivity and specificity of CNV detection compared to the gold standard of fluorescein angiography (FA) and OCT was determined for structural SD-OCT alone, en face OCTA alone, and with en face OCTA combined with cross-sectional OCTA. Of 32 eyes with CNV, both graders identified 26 true positives with en face OCTA alone, resulting in a sensitivity of 81.3%. Four of the 6 false negatives had large subretinal hemorrhage (SRH) and sensitivity improved to 94% for both graders if eyes with SRH were excluded. The addition of cross-sectional OCTA along with en face OCTA improved the sensitivity to 100% for both graders. Structural OCT alone also had a sensitivity of 100%. The specificity of en face OCTA alone was 92.5% for grader A and 97.5% for grader B. The specificity of structural OCT alone was 97.5% for grader A and 85% for grader B. Cross-sectional OCTA combined with en face OCTA had a specificity of 97.5% for grader A and 100% for grader B. Sensitivity and specificity for CNV detection with en face OCTA combined with cross-sectional OCTA approaches that of the gold standard of FA with OCT, and it is better than en face OCTA alone. Structural OCT alone has excellent sensitivity for CNV detection. False positives from structural OCT can be mitigated with the addition of flow information with OCTA.
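
    For reference, the sensitivity and specificity figures above follow directly from the true/false positive and negative counts; the short sketch below reproduces the en face OCTA-alone sensitivity from the reported 26 true positives out of 32 CNV eyes.

        def sensitivity(tp, fn):
            return tp / (tp + fn)

        def specificity(tn, fp):
            return tn / (tn + fp)

        # En face OCTA alone: 26 true positives and 6 false negatives among 32 CNV eyes.
        sens = sensitivity(26, 6)
        print(f"Sensitivity: {sens * 100:.2f}%")   # 81.25%, reported as 81.3% in the abstract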

  19. Study on image feature extraction and classification for human colorectal cancer using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Wei; Yang, Shan-Yi; Huang, Wei-Cheng; Chiu, Han-Mo; Lu, Chih-Wei

    2011-06-01

    Most colorectal cancers arise from adenomatous polyps. Adenomatous lesions have a well-documented relationship to colorectal cancer in previous studies. Thus, detecting the morphological changes between polyp and tumor can allow early diagnosis of colorectal cancer and simultaneous removal of lesions. Optical coherence tomography (OCT) has several advantages, including high resolution and non-invasive cross-sectional imaging in vivo. In this study, we investigated the relationship between B-scan OCT image features and the histology of malignant human colorectal tissues, as well as between the en-face OCT images and the endoscopic image pattern. The in-vitro experiments were performed with a swept-source optical coherence tomography (SS-OCT) system; the swept source has a center wavelength of 1310 nm and a 160 nm wavelength scanning range, which produced 6 µm axial resolution. In the study, the en-face images were reconstructed by integrating the axial values of the 3D OCT images. The reconstructed en-face images show the same roundish or gyrus-like pattern as the endoscopy images. The pattern of the en-face images relates to the stage of colon cancer. An endoscopic OCT technique would provide three-dimensional imaging and rapidly reconstructed en-face images, which can increase the speed of colon cancer diagnosis. Our results indicate a great potential for early detection of colorectal adenomas by using OCT imaging.

  20. Fetal MRI: head and neck.

    PubMed

    Mirsky, David M; Shekdar, Karuna V; Bilaniuk, Larissa T

    2012-08-01

    Abnormalities of the fetal head and neck may be seen in isolation or in association with central nervous system abnormalities, chromosomal abnormalities, and syndromes. Magnetic resonance imaging (MRI) plays an important role in detecting associated abnormalities of the brain as well as in evaluating for airway obstruction that may impact prenatal management and delivery planning. This article provides an overview of the common indications for MRI of the fetal head and neck, including abnormalities of the fetal skull and face, masses of the face and neck, and fetal goiter. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform

    NASA Astrophysics Data System (ADS)

    Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    We propose a novel approach for detection of the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost required for facial feature extraction (FFE) and supporting postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image as the symmetry axis using the Merlin-Farber Hough transformation. A new performance improvement scheme for midline detection by the MFHT is also presented. The main concept of the proposed scheme is the suppression of redundant votes in the Hough parameter space by introducing a chain code representation of the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm can improve the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
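
    To illustrate the idea of detecting a symmetry axis by voting, a heavily simplified sketch follows: each pair of edge points at roughly the same height votes for a vertical midline at their horizontal midpoint. This is only a toy stand-in for the chain-coded Merlin-Farber Hough transform, which handles arbitrary axis orientations and suppresses redundant votes.

        import numpy as np

        def vertical_midline(edge_points, width, y_tolerance=1):
            """Vote for a vertical symmetry axis from a list of (x, y) edge points."""
            votes = np.zeros(width, dtype=int)
            pts = sorted(edge_points, key=lambda p: p[1])       # sort by y
            for i, (x1, y1) in enumerate(pts):
                for x2, y2 in pts[i + 1:]:
                    if y2 - y1 > y_tolerance:                   # only near-horizontal pairs vote
                        break
                    mid = int(round((x1 + x2) / 2))
                    if 0 <= mid < width:
                        votes[mid] += 1
            return int(np.argmax(votes))                        # x position of the midline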

  2. Head pose estimation in computer vision: a survey.

    PubMed

    Murphy-Chutorian, Erik; Trivedi, Mohan Manubhai

    2009-04-01

    The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.

  3. Sad Facial Expressions Increase Choice Blindness

    PubMed Central

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2018-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926

  4. Sad Facial Expressions Increase Choice Blindness.

    PubMed

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness-individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  5. Multimodal ophthalmic imaging using swept source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Malone, Joseph D.; El-Haddad, Mohamed T.; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Tao, Yuankai K.

    2016-03-01

    Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) benefit clinical diagnostic imaging in ophthalmology by enabling in vivo noninvasive en face and volumetric visualization of retinal structures, respectively. Spectral encoding methods enable confocal imaging through fiber optics and reduce system complexity. Previous applications in ophthalmic imaging include spectrally encoded confocal scanning laser ophthalmoscopy (SECSLO) and a combined SECSLO-OCT system for image guidance, tracking, and registration. However, spectrally encoded imaging suffers from speckle noise because each spectrally encoded channel is effectively monochromatic. Here, we demonstrate in vivo human retinal imaging using a swept source spectrally encoded scanning laser ophthalmoscope and OCT (SS-SESLO-OCT) at 1060 nm. The SS-SESLO-OCT uses a shared 100 kHz Axsun swept source and shared scanner and imaging optics, and both channels are detected simultaneously on a shared, dual-channel high-speed digitizer. SESLO illumination and detection were performed using the single-mode core and multimode inner cladding of a double-clad fiber coupler, respectively, to preserve lateral resolution while improving collection efficiency and reducing speckle contrast at the expense of confocality. Concurrent en face SESLO and cross-sectional OCT images were acquired with 1376 x 500 pixels at 200 frames per second. Our system design is compact and uses a shared light source, imaging optics, and digitizer, which reduces overall system complexity and ensures inherent co-registration between the SESLO and OCT FOVs. En face SESLO images acquired concurrently with OCT cross-sections enable lateral motion tracking and three-dimensional volume registration with broad applications in multivolume OCT averaging, image mosaicking, and intraoperative instrument tracking.

  6. Operational analysis for the drug detection problem

    NASA Astrophysics Data System (ADS)

    Hoopengardner, Roger L.; Smith, Michael C.

    1994-10-01

    New techniques and sensors to identify the molecular, chemical, or elemental structures unique to drugs are being developed under several national programs. However, the challenge faced by U.S. drug enforcement and Customs officials goes far beyond the simple technical capability to detect an illegal drug. Entry points into the U.S. include ports, border crossings, and airports where cargo ships, vehicles, and aircraft move huge volumes of freight. Current technology and personnel are able to physically inspect only a small fraction of the entering cargo containers. The complexity of how to best utilize new technology to aid the detection process and yet not adversely affect the processing of vehicles and time-sensitive cargo is the challenge faced by these officials. This paper describes an ARPA-sponsored initiative to develop a simple, yet useful, method for examining the operational consequences of utilizing various procedures and technologies in combination to achieve an 'acceptable' level of detection probability. Since Customs entry points into the U.S. vary from huge seaports to a one-lane highway checkpoint between the U.S. and the Canadian or Mexican border, no one system can possibly be right for all points. This approach can examine alternative concepts for using different techniques/systems for different types of entry points. Operational measures reported include the average time to process vehicles and containers, the average and maximum numbers in the system at any time, and the utilization of inspection teams. The method is implemented via a PC-based simulation written in the GPSS-PC language. Inputs to the simulation model are (1) the individual detection probabilities and false positive rates for each detection technology or procedure, (2) the inspection time for each procedure, (3) the system configuration, and (4) the physical distance between inspection stations. The model offers on-line graphics to examine effects as the model runs.
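
    A much-reduced sketch of this kind of operational model is given below: a single inspection station with a given detection probability, false-positive rate, and mean inspection time, applied to a stream of containers. All numbers are illustrative assumptions, and the real model (written in GPSS-PC) covers multiple stations, queueing between them, and alternative configurations.

        import random

        def simulate(n_containers=10_000, drug_rate=0.02,
                     p_detect=0.85, p_false_positive=0.05,
                     mean_inspect_minutes=4.0, seed=1):
            """Monte Carlo sketch of a single inspection station."""
            rng = random.Random(seed)
            detected = missed = false_alarms = 0
            total_time = 0.0
            for _ in range(n_containers):
                total_time += rng.expovariate(1.0 / mean_inspect_minutes)  # inspection time
                has_drugs = rng.random() < drug_rate
                flagged = rng.random() < (p_detect if has_drugs else p_false_positive)
                if has_drugs and flagged:
                    detected += 1
                elif has_drugs:
                    missed += 1
                elif flagged:
                    false_alarms += 1
            return {"detected": detected, "missed": missed,
                    "false_alarms": false_alarms,
                    "avg_minutes_per_container": total_time / n_containers}

        print(simulate())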

  7. Millennial Filipino Student Engagement Analyzer Using Facial Feature Classification

    NASA Astrophysics Data System (ADS)

    Manseras, R.; Eugenio, F.; Palaoag, T.

    2018-03-01

    Millennials are a frequent topic of conversation and a target market of various companies nowadays. In the Philippines, they comprise one third of the total population, and most of them are still in school. Having a good education system is important for this generation to prepare them for better careers, and a good education system means having quality instruction as one of its input component indicators. In a classroom environment, teachers use facial features to gauge the affect state of the class. Emerging technologies like affective computing are among today's trends for improving the quality of instruction delivery; together with computer vision, affective computing can be used to analyze the affect states of students. This paper proposes a system for classifying student engagement using facial features. Identifying affect state, specifically Millennial Filipino student engagement, is one of the main priorities of every educator, and this directed the authors to develop a tool to assess engagement percentage. A multiple-face detection framework using the Face API was employed to detect as many student faces as possible and gauge the current engagement percentage of the whole class. A binary classifier based on a support vector machine (SVM) was adopted in the conceptual framework of this study. To assess the accuracy of this model, the SVM was compared with two of the most widely used binary classifiers. Results show that the SVM outperformed the random forest and naive Bayes algorithms in most of the experiments on the different test datasets.
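
    A minimal comparison of the three classifiers mentioned above can be sketched with scikit-learn; the synthetic feature vectors below are placeholders standing in for facial-feature vectors labeled engaged or not engaged, and the cross-validation setup is an assumption rather than the study's evaluation protocol.

        from sklearn.svm import SVC
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score
        from sklearn.datasets import make_classification

        # Placeholder data standing in for facial-feature vectors labeled engaged (1) / not (0).
        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        classifiers = {
            "SVM": SVC(kernel="rbf", gamma="scale"),
            "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
            "NaiveBayes": GaussianNB(),
        }

        for name, clf in classifiers.items():
            scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
            print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")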

  8. Surface electromyographic mapping of the orbicularis oculi muscle for real-time blink detection.

    PubMed

    Frigerio, Alice; Cavallari, Paolo; Frigeni, Marta; Pedrocchi, Alessandra; Sarasola, Andrea; Ferrante, Simona

    2014-01-01

    Facial paralysis is a life-altering condition that significantly impairs function, appearance, and communication. Facial rehabilitation via closed-loop pacing represents a potential but as yet theoretical approach to reanimation. A first critical step toward closed-loop facial pacing in cases of unilateral paralysis is the detection of healthy movements to use as a trigger to prosthetically elicit automatic artificial movements on the contralateral side of the face. The aim of this study was to test and to maximize the performance of an electromyography (EMG)-based blink detection system for applications in closed-loop facial pacing. Blinking was detected across the periocular region by means of multichannel surface EMG at an academic neuroengineering and medical robotics laboratory among 15 healthy volunteers. Real-time blink detection was accomplished by mapping the surface of the orbicularis oculi muscle on one side of the face with a multichannel surface EMG. The biosignal from each channel was independently processed; custom software registered a blink when amplitude-based or slope-based suprathreshold activity was detected. The experiments were performed when participants were relaxed and during the production of particular orofacial movements. An F1 score metric was used to analyze software performance in detecting blinks. The maximal software performance was achieved when a blink was recorded from the superomedial orbit quadrant. At this recording location, the median F1 scores were 0.89 during spontaneous blinking, 0.82 when chewing gum, 0.80 when raising the eyebrows, and 0.70 when smiling. The overall performance of blink detection was significantly better at the superomedial quadrant (F1 score, 0.75) than at the traditionally used inferolateral quadrant (F1 score, 0.40) (P < .05). Electromyographic recording represents an accurate tool to detect spontaneous blinks as part of closed-loop facial pacing systems. The early detection of blink activity may allow real-time pacing via rapid triggering of contralateral muscles. Moreover, an EMG detection system can be integrated into external devices and implanted neuroprostheses. A potential downside to this approach involves cross talk from adjacent muscles, which can be notably reduced by recording from the superomedial quadrant of the orbicularis oculi muscle and by applying proper signal processing.
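
    A minimal sketch of amplitude- and slope-based suprathreshold blink detection on a single rectified EMG channel; this is not the authors' software, and the sampling rate, threshold multipliers, and refractory period below are illustrative assumptions.

```python
import numpy as np

def detect_blinks(emg, fs=1000, amp_k=5.0, slope_k=8.0, refractory_s=0.2):
    """Flag a blink whenever the rectified EMG amplitude or its sample-to-sample
    slope exceeds k times the baseline standard deviation; the first second of
    the recording is assumed to be rest and is used to estimate the baseline."""
    x = np.abs(emg)
    sd = np.std(emg[:fs])
    amp_hit = x > amp_k * sd
    slope_hit = np.abs(np.diff(x, prepend=x[0])) > slope_k * sd
    candidates = np.flatnonzero(amp_hit | slope_hit)
    blinks, last = [], -np.inf
    for idx in candidates:
        if idx - last > refractory_s * fs:      # new event outside refractory window
            blinks.append(idx)
        last = idx                              # suprathreshold run extends the window
    return np.array(blinks)

fs = 1000
rng = np.random.default_rng(0)
emg = 0.1 * rng.standard_normal(5 * fs)         # 5 s of baseline noise
emg[2500:2550] += 2.0                           # synthetic blink burst at t = 2.5 s
print(detect_blinks(emg, fs) / fs)              # -> approximately [2.5]
```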

  9. Attention focussing and anomaly detection in real-time systems monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, Richard J.; Chien, Steve A.; Fayyad, Usama M.; Porta, Harry J.

    1993-01-01

    In real-time monitoring situations, more information is not necessarily better. When faced with complex emergency situations, operators can experience information overload and a compromising of their ability to react quickly and correctly. We describe an approach to focusing operator attention in real-time systems monitoring based on a set of empirical and model-based measures for determining the relative importance of sensor data.

  10. Manipulation Detection and Preference Alterations in a Choice Blindness Paradigm

    PubMed Central

    Taya, Fumihiko; Gupta, Swati; Farber, Ilya; Mullette-Gillman, O'Dhaniel A.

    2014-01-01

    Objectives It is commonly believed that individuals make choices based upon their preferences and have access to the reasons for their choices. Recent studies in several areas suggest that this is not always the case. In choice blindness paradigms, two-alternative forced-choice in which chosen-options are later replaced by the unselected option, individuals often fail to notice replacement of their chosen option, confabulate explanations for why they chose the unselected option, and even show increased preferences for the unselected-but-replaced options immediately after choice (seconds). Although choice blindness has been replicated across a variety of domains, there are numerous outstanding questions. Firstly, we sought to investigate how individual- or trial-factors modulated detection of the manipulations. Secondly, we examined the nature and temporal duration (minutes vs. days) of the preference alterations induced by these manipulations. Methods Participants performed a computerized choice blindness task, selecting the more attractive face between presented pairs of female faces, and providing a typewritten explanation for their choice on half of the trials. Chosen-face cue manipulations were produced on a subset of trials by presenting the unselected face during the choice explanation as if it had been selected. Following all choice trials, participants rated the attractiveness of each face individually, and rated the similarity of each face pair. After approximately two weeks, participants re-rated the attractiveness of each individual face online. Results Participants detected manipulations on only a small proportion of trials, with detections by fewer than half of participants. Detection rates increased with the number of prior detections, and detection rates subsequent to first detection were modulated by the choice certainty. We show clear short-term modulation of preferences in both manipulated and non-manipulated explanation trials compared to choice-only trials (with opposite directions of effect). Preferences were altered in the direction that subjects were led to believe they selected. PMID:25247886

  11. Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.

    PubMed

    Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno

    2015-05-01

    The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)

    NASA Astrophysics Data System (ADS)

    Iqtait, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Biometrics is a pattern recognition approach used for the automatic recognition of persons based on an individual's characteristics and features. Face recognition with a high recognition rate is still a challenging task and is usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localization of feature points is a complicated and difficult issue in face recognition. Cootes proposed the multi-resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement on ASM, the Active Appearance Model (AAM) algorithm was proposed to extract both the shape and texture of a specified object simultaneously. In this paper we give more details about the two algorithms and report the results of experiments testing their performance on one dataset of faces. We found that ASM is faster and yields more accurate feature point localization than AAM, but AAM achieves a better match to the texture.

  13. Individual differences in anxiety predict neural measures of visual working memory for untrustworthy faces.

    PubMed

    Meconi, Federica; Luria, Roy; Sessa, Paola

    2014-12-01

    When facing strangers, one of the first evaluations people perform is to implicitly assess their trustworthiness. However, the underlying processes supporting trustworthiness appraisal are poorly understood. We hypothesized that visual working memory (VWM) maintains online face representations that are sensitive to physical cues of trustworthiness, and that differences among individuals in representing untrustworthy faces are associated with individual differences in anxiety. Participants performed a change detection task that required encoding and maintaining for a short interval the identity of one face parametrically manipulated to be either trustworthy or untrustworthy. The sustained posterior contralateral negativity (SPCN), an event-related potential (ERP) component time-locked to the onset of the face, was used to index the resolution of face representations in VWM. Results revealed greater SPCN amplitudes for trustworthy faces when compared with untrustworthy faces, indicating that VWM is sensitive to physical cues of trustworthiness, even in the absence of explicit trustworthiness appraisal. In addition, differences in SPCN amplitude between trustworthy and untrustworthy faces correlated with participants' anxiety, indicating that healthy college students with sub-clinical high anxiety levels represented untrustworthy faces in greater detail compared with students with sub-clinical low anxiety levels. This pattern of findings is discussed in terms of the high flexibility of aversive/avoidance and appetitive/approach motivational systems. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  14. The structural and functional correlates of the efficiency in fearful face detection.

    PubMed

    Wang, Yongchao; Guo, Nana; Zhao, Li; Huang, Hui; Yao, Xiaonan; Sang, Na; Hou, Xin; Mao, Yu; Bi, Taiyong; Qiu, Jiang

    2017-06-01

    The human visual system is highly efficient at searching for a fearful face, and some individuals are more sensitive to this threat-related stimulus than others. However, we still know little about the neural correlates of such variability. In the current study, we used a visual search paradigm and asked subjects to search either for a fearful face or for a target face gender. Every subject showed a shallower search function for fearful-face search than for face-gender search, indicating a stable fearful face advantage. We then used voxel-based morphometry (VBM) analysis and correlated this advantage with the gray matter volume (GMV) of several presumably face-related cortical areas. The result revealed that only the left fusiform gyrus showed a significant positive correlation. Next, we defined the left fusiform gyrus as the seed region and calculated its resting-state functional connectivity to the whole brain. Correlations were also calculated between the fearful face advantage and these connectivities. In this analysis, we found positive correlations in the inferior parietal lobe and the ventral medial prefrontal cortex. These results suggest that the anatomical structure of the left fusiform gyrus may determine the search efficiency for fearful faces, and that the frontoparietal attention network is involved in this process through top-down attentional modulation. Copyright © 2017. Published by Elsevier Ltd.

  15. [Quartz-enhanced photoacoustic spectroscopy trace gas detection system based on the Fabry-Perot demodulation].

    PubMed

    Lin, Cheng; Zhu, Yong; Wei, Wei; Zhang, Jie; Tian, Li; Xu, Zu-Wen

    2013-05-01

    An all-optical quartz-enhanced photoacoustic spectroscopy system, based on Fabry-Perot (F-P) demodulation, for trace gas detection in the open environment was proposed. In quartz-enhanced photoacoustic spectroscopy (QEPAS), an optical fiber Fabry-Perot method was used to replace the conventional electronic demodulation method. The photoacoustic signal was obtained by demodulating the variation of the Fabry-Perot cavity formed between the side of the quartz tuning fork and the fiber end face. An experimental system was set up, and an experiment for the detection of water vapour in the open environment was carried out. A normalized noise-equivalent absorption coefficient of 2.80 x 10^-7 cm^-1 W Hz^-1/2 was achieved. The result demonstrates that the sensitivity of the all-optical QEPAS system is about 2.6 times higher than that of the conventional QEPAS system. The all-optical quartz-enhanced photoacoustic spectroscopy system is immune to electromagnetic interference, safe for flammable and explosive gas detection, suitable for high-temperature and high-humidity environments, and realizable for long-distance, multi-point, and networked sensing.

  16. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    PubMed Central

    Wang, Chen; Pun, Thierry; Chanel, Guillaume

    2018-01-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several families of approaches have been proposed, including signal processing and machine learning methods. However, these methods have been evaluated on different datasets, so there is no consensus on their relative performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 to the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public MAHNOB-HCI database. The results reported in this article are limited to the MAHNOB-HCI dataset. They show that the extracted facial skin area contains more BVP information, and that blind source separation and peak detection methods are more robust to head motion when estimating HR. PMID:29765940
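
    A minimal sketch of the three-stage pipeline described above (face video processing, BVP signal extraction, HR computation), using a simple green-channel mean rather than any specific published method; the face ROI per frame is assumed to be given, and all parameter values are illustrative.

```python
import numpy as np
from scipy import signal

def heart_rate_from_roi_means(green_means, fps=30.0, band=(0.7, 3.0)):
    """green_means: 1-D array, mean green value of the face ROI in each frame.
    Returns the estimated heart rate in beats per minute."""
    x = signal.detrend(green_means)                       # remove slow illumination trend
    b, a = signal.butter(3, [band[0] / (fps / 2), band[1] / (fps / 2)],
                         btype="bandpass")                # 0.7-3 Hz ~ 42-180 bpm
    bvp = signal.filtfilt(b, a, x)                        # pseudo-BVP signal
    freqs = np.fft.rfftfreq(bvp.size, d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(bvp * np.hanning(bvp.size)))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[in_band][np.argmax(spectrum[in_band])]

# synthetic 20 s trace with a 1.2 Hz (72 bpm) pulse buried in noise
fps = 30.0
t = np.arange(0, 20, 1 / fps)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(t.size)
print(f"estimated HR: {heart_rate_from_roi_means(trace, fps):.0f} bpm")
```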

  17. Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.

    PubMed

    Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo

    2011-01-01

    In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, and thus motion of the targeted area may cause side effects in normal organs. This paper describes our motion detection system based on three USB cameras. To reduce the effect of illuminance changes in the tracking area, we used infrared lighting and USB cameras sensitive to infrared light. Patient motion was detected by tracking the ears and nose with the three USB cameras, where pattern matching between a predefined template image for each view and the acquired images was performed by an exhaustive search method using general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement error of our system was less than 0.7 mm, amounting to less than half of that of our previous system.
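
    A minimal sketch (not the authors' GPGPU implementation) of the template matching step: exhaustively searching one camera frame for a predefined ear or nose template and reporting the best-match position. OpenCV is used here, and the file names are placeholders.

```python
import cv2

template = cv2.imread("nose_template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# normalized cross-correlation over every candidate position in the frame
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

print(f"best match at pixel {best_xy} with score {best_score:.2f}")
# tracking amounts to comparing best_xy across consecutive frames; with a
# calibrated camera the pixel displacement can be converted to millimetres.
```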

  18. Adaptive skin segmentation via feature-based face detection

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Morris, Tim

    2014-05-01

    Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
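
    A minimal sketch of the adaptive approach described above, assuming OpenCV's bundled Viola-Jones cascade: sample pixels from a detected face, discard the darkest and brightest pixels (eyes, nostrils, specularities) by luma, fit a unimodal Gaussian in normalized rg space, and threshold the resulting per-pixel skin probability. The percentile cut-offs, probability threshold, and file names are illustrative, not the paper's values.

```python
import cv2
import numpy as np

img = cv2.imread("person.jpg")                       # BGR image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=8)

x, y, w, h = faces[0]                                # first detected face
patch = img[y:y + h, x:x + w].reshape(-1, 3).astype(float)

# keep mid-luma pixels only (drop eyes, mouth, nostrils, highlights)
luma = 0.114 * patch[:, 0] + 0.587 * patch[:, 1] + 0.299 * patch[:, 2]
keep = (luma > np.percentile(luma, 20)) & (luma < np.percentile(luma, 95))
skin = patch[keep]

# normalized rg chromaticity of the representative skin pixels
s = skin.sum(axis=1) + 1e-6
rg = np.stack([skin[:, 2] / s, skin[:, 1] / s], axis=1)   # (r, g)
mean, cov = rg.mean(axis=0), np.cov(rg.T)

# evaluate the fitted Gaussian for every pixel in the image
flat = img.reshape(-1, 3).astype(float)
tot = flat.sum(axis=1) + 1e-6
all_rg = np.stack([flat[:, 2] / tot, flat[:, 1] / tot], axis=1)
d = all_rg - mean
mahal = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)
prob = np.exp(-0.5 * mahal)

skin_mask = (prob > 0.5).reshape(img.shape[:2]).astype(np.uint8) * 255
cv2.imwrite("skin_mask.png", skin_mask)
```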

  19. Flexibility in Visual Working Memory: Accurate Change Detection in the Face of Irrelevant Variations in Position

    PubMed Central

    Woodman, Geoffrey F.; Vogel, Edward K.; Luck, Steven J.

    2012-01-01

    Many recent studies of visual working memory have used change-detection tasks in which subjects view sequential displays and are asked to report whether they are identical or if one object has changed. A key question is whether the memory system used to perform this task is sufficiently flexible to detect changes in object identity independent of spatial transformations, but previous research has yielded contradictory results. To address this issue, the present study compared standard change-detection tasks with tasks in which the objects varied in size or position between successive arrays. Performance was nearly identical across the standard and transformed tasks unless the task implicitly encouraged spatial encoding. These results resolve the discrepancies in prior studies and demonstrate that the visual working memory system can detect changes in object identity across spatial transformations. PMID:22287933

  20. Faces Do Not Capture Special Attention in Children with Autism Spectrum Disorder: A Change Blindness Study

    ERIC Educational Resources Information Center

    Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas…

  1. Graph distance for complex networks

    NASA Astrophysics Data System (ADS)

    Shimada, Yutaka; Hirata, Yoshito; Ikeguchi, Tohru; Aihara, Kazuyuki

    2016-10-01

    Networks are widely used as a tool for describing diverse real complex systems and have been successfully applied to many fields. The distance between networks is one of the most fundamental concepts for properly classifying real networks, detecting temporal changes in network structures, and effectively predicting their temporal evolution. However, this distance has rarely been discussed in the theory of complex networks. Here, we propose a graph distance between networks based on a Laplacian matrix that reflects the structural and dynamical properties of networked dynamical systems. Our results indicate that the Laplacian-based graph distance effectively quantifies the structural difference between complex networks. We further show that our approach successfully elucidates the temporal properties underlying temporal networks observed in the context of face-to-face human interactions.
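
    A minimal sketch, not the authors' exact metric: comparing two networks by the Euclidean distance between the sorted eigenvalue spectra of their graph Laplacians, using networkx. This captures the general idea of a Laplacian-based graph distance; the paper's definition may differ in detail.

```python
import numpy as np
import networkx as nx

def laplacian_spectrum_distance(g1, g2):
    """Both graphs are assumed to have the same number of nodes."""
    s1 = np.sort(nx.laplacian_spectrum(g1))
    s2 = np.sort(nx.laplacian_spectrum(g2))
    return float(np.linalg.norm(s1 - s2))

g_ring = nx.cycle_graph(20)
g_small_world = nx.watts_strogatz_graph(20, k=2, p=0.3, seed=1)
g_random = nx.gnp_random_graph(20, p=0.2, seed=1)

print("ring vs small-world:", laplacian_spectrum_distance(g_ring, g_small_world))
print("ring vs random     :", laplacian_spectrum_distance(g_ring, g_random))
```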

  2. VidCat: an image and video analysis service for personal media management

    NASA Astrophysics Data System (ADS)

    Begeja, Lee; Zavesky, Eric; Liu, Zhu; Gibbon, David; Gopalan, Raghuraman; Shahraray, Behzad

    2013-03-01

    Cloud-based storage and consumption of personal photos and videos provides increased accessibility, functionality, and satisfaction for mobile users. One cloud service frontier that is recently growing is that of personal media management. This work presents a system called VidCat that assists users in the tagging, organization, and retrieval of their personal media by faces and visual content similarity, time, and date information. Evaluations for the effectiveness of the copy detection and face recognition algorithms on standard datasets are also discussed. Finally, the system includes a set of application programming interfaces (API's) allowing content to be uploaded, analyzed, and retrieved on any client with simple HTTP-based methods as demonstrated with a prototype developed on the iOS and Android mobile platforms.

  3. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. Using a biometric technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, feature classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for the expression classes happy, sad, neutral, angry, fear, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for facial expression classification. The MELS-SVM model, evaluated on our 185 expression images of 10 persons, achieved a high accuracy of 99.998% using an RBF kernel.
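
    A minimal sketch of the PCA-plus-SVM pipeline using scikit-learn: a standard multiclass RBF SVM stands in for the paper's ensemble least-squares variant (MELS-SVM), and random arrays stand in for the face images; the dimensions and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

EXPRESSIONS = ["happy", "sad", "neutral", "angry", "fear", "disgust"]

rng = np.random.default_rng(0)
X = rng.normal(size=(185, 48 * 48))                 # placeholder flattened face crops
y = rng.integers(0, len(EXPRESSIONS), size=185)     # placeholder expression labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# PCA compresses each face to 40 components before the multiclass SVM
model = make_pipeline(PCA(n_components=40), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
print("prediction for first test face:", EXPRESSIONS[model.predict(X_te[:1])[0]])
```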

  4. Behavioral and facial thermal variations in 3-to 4-month-old infants during the Still-Face Paradigm

    PubMed Central

    Aureli, Tiziana; Grazia, Annalisa; Cardone, Daniela; Merla, Arcangelo

    2015-01-01

    Behavioral and facial thermal responses were recorded in twelve 3- to 4-month-old infants during the Still-Face Paradigm (SFP). As in the usual procedure, infants were observed in a three-step, face-to-face interaction: a normal interaction episode (3 min); the “still-face” episode in which the mother became unresponsive and assumed a neutral expression (1 min); a reunion episode in which the mother resumed the interaction (3 min). A fourth step that consisted of a toy play episode (5 min) was added for our own research interest. We coded the behavioral responses through the Infant and Caregiver Engagement Phases system, and recorded facial skin temperature via thermal infrared (IR) imaging. Comparing the still-face episode to the play episode, the infants' communicative engagement decreased, their engagement with the environment increased, and no differences emerged in self-regulatory and protest behaviors. We also found that facial skin temperature increased. Regarding the behavioral results, infants recognized the interruption of the interactional reciprocity caused by the still-face presentation, without showing upset behaviors. According to the autonomic results, the parasympathetic system was more active than the sympathetic, as usually happens in aroused but not distressed situations. With respect to the debate about the causal factor of the still-face effect, thermal data were consistent with behavioral data in showing this effect to be related to the violation of infants' expectations about the nature of social interactions. Moreover, as these are associated with the infants' subsequent interest in the environment, they indicate that thermal IR imaging is a reliable technique for the detection of physiological variations not only in the emotional system, as indicated by research to date, but also in the attention system. Using this technique for the first time during the SFP allowed us to record autonomic data in a more ecological manner than in previous studies. PMID:26528229

  5. Preliminary evidence that different mechanisms underlie the anger superiority effect in children with and without Autism Spectrum Disorders

    PubMed Central

    Isomura, Tomoko; Ogawa, Shino; Yamada, Satoko; Shibasaki, Masahiro; Masataka, Nobuo

    2014-01-01

    Previous studies have demonstrated that angry faces capture humans' attention more rapidly than emotionally positive faces. This phenomenon is referred to as the anger superiority effect (ASE). Despite atypical emotional processing, adults and children with Autism Spectrum Disorders (ASD) have been reported to show ASE as well as typically developed (TD) individuals. So far, however, few studies have clarified whether or not the mechanisms underlying ASE are the same for both TD and ASD individuals. Here, we tested how TD and ASD children process schematic emotional faces during detection by employing a recognition task in combination with a face-in-the-crowd task. Results of the face-in-the-crowd task revealed the prevalence of ASE both in TD and ASD children. However, the results of the recognition task revealed group differences: In TD children, detection of angry faces required more configural face processing and disrupted the processing of local features. In ASD children, on the other hand, it required more feature-based processing rather than configural processing. Despite the small sample sizes, these findings provide preliminary evidence that children with ASD, in contrast to TD children, show quick detection of angry faces by extracting local features in faces. PMID:24904477

  6. An Orientation Sensor-Based Head Tracking System for Driver Behaviour Monitoring.

    PubMed

    Zhao, Yifan; Görne, Lorenz; Yuen, Iek-Man; Cao, Dongpu; Sullman, Mark; Auger, Daniel; Lv, Chen; Wang, Huaji; Matthias, Rebecca; Skrypchuk, Lee; Mouzakitis, Alexandros

    2017-11-22

    Although at present legislation does not allow drivers in a Level 3 autonomous vehicle to engage in a secondary task, there may come a time when it does. Monitoring the behaviour of drivers engaging in various non-driving activities (NDAs) is crucial to decide how well the driver will be able to take over control of the vehicle. One limitation of the commonly used camera-based, face-based head tracking system is that sufficient features of the face must be visible, which limits the detectable angle of head movement and thereby the measurable NDAs, unless multiple cameras are used. This paper proposes a novel orientation-sensor-based head tracking system that includes twin devices, one of which measures the movement of the vehicle while the other measures the absolute movement of the head. Measurement error in the shaking and nodding axes was less than 0.4°, while error in the rolling axis was less than 2°. Comparison with a camera-based system, through in-house tests and on-road tests, showed that the main advantage of the proposed system is the ability to detect angles larger than 20° in the shaking and nodding axes. Finally, a case study demonstrated that the measurement of the shaking and nodding angles produced by the proposed system can effectively characterise the drivers' behaviour while engaged in the NDAs of chatting to a passenger and playing on a smartphone.
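
    A minimal sketch of the twin-sensor idea: one orientation sensor reads the vehicle body, the other reads the driver's head, and subtracting the two readings gives head movement relative to the cab rather than to the road. The sensor readings below are placeholder Euler-angle tuples, and simple angle differencing is only a rough stand-in for proper rotation composition.

```python
def head_relative_to_vehicle(head_ypr, vehicle_ypr):
    """Both inputs are (yaw, pitch, roll) in degrees from the two devices;
    returns the (shaking, nodding, rolling) angles of the head w.r.t. the cab.
    Note: differencing Euler angles is only a small-angle approximation."""
    wrap = lambda a: (a + 180.0) % 360.0 - 180.0   # keep angles in (-180, 180]
    return tuple(wrap(h - v) for h, v in zip(head_ypr, vehicle_ypr))

# vehicle turning left by 10 deg while the driver looks 35 deg to the right:
print(head_relative_to_vehicle((25.0, -5.0, 1.0), (-10.0, -4.0, 0.5)))
# -> (35.0, -1.0, 0.5): a shaking angle beyond the ~20 deg camera-based limit
```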

  7. Motion Planning in a Society of Intelligent Mobile Agents

    NASA Technical Reports Server (NTRS)

    Esterline, Albert C.; Shafto, Michael (Technical Monitor)

    2002-01-01

    The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal logic for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) in uniformly modeling agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and process-algebraic patterns of communication actions intended to establish the common knowledge. We investigated the conditions for agents endowed with perception to gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge. Finally, in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.

  8. Alternative face models for 3D face registration

    NASA Astrophysics Data System (ADS)

    Salah, Albert Ali; Alyüz, Neşe; Akarun, Lale

    2007-01-01

    3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a couple of well-selected AFMs can trade-off computation time with accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.
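
    A minimal sketch of rigid ICP registration of a probe face scan to an average face model (AFM), using SciPy's KD-tree for correspondences and the closed-form SVD (Kabsch) alignment at each iteration. The point clouds are random placeholders, and the initial misalignment is deliberately small; a real system would initialize from detected landmarks as discussed above.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) rotation R and translation t mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp_to_afm(probe, afm, n_iter=30):
    """Iteratively align a probe face scan to the pre-registered AFM."""
    tree = cKDTree(afm)
    current = probe.copy()
    for _ in range(n_iter):
        _, idx = tree.query(current)            # closest AFM point for each probe point
        R, t = best_rigid_transform(current, afm[idx])
        current = current @ R.T + t
    return current

rng = np.random.default_rng(0)
afm = rng.normal(size=(500, 3))                 # placeholder average face model points
angle = np.deg2rad(10.0)                        # small initial misalignment
R0 = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
probe = afm @ R0.T + np.array([0.3, -0.1, 0.2]) + 0.01 * rng.normal(size=(500, 3))
aligned = icp_to_afm(probe, afm)
print("mean residual after ICP:", np.linalg.norm(aligned - afm, axis=1).mean())
```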

  9. External and internal facial features modulate processing of vertical but not horizontal spatial relations.

    PubMed

    Meinhardt, Günter; Kurbel, David; Meinhardt-Injac, Bozana; Persike, Malte

    2018-03-22

    Some years ago an asymmetry was reported for the inversion effect for horizontal (H) and vertical (V) relational face manipulations (Goffaux & Rossion, 2007). Subsequent research examined whether a specific disruption of long-range relations underlies the H/V inversion asymmetry (Sekunova & Barton, 2008). Here, we tested how detection of changes in interocular distance (H) and eye height (V) depends on cardinal internal features and external feature surround. Results replicated the H/V inversion asymmetry. Moreover, we found very different face cue dependencies for both change types. Performance and inversion effects did not depend on the presence of other face cues for detecting H changes. In contrast, accuracy for detecting V changes strongly depended on internal and external features, showing cumulative improvement when more cues were added. Inversion effects were generally large, and larger with external feature surround. The cue independence in detecting H relational changes indicates specialized local processing tightly tuned to the eyes region, while the strong cue dependency in detecting V relational changes indicates a global mechanism of cue integration across different face regions. These findings suggest that the H/V asymmetry of the inversion effect rests on an H/V anisotropy of face cue dependency, since only the global V mechanism suffers from disruption of cue integration as the major effect of face inversion. Copyright © 2018. Published by Elsevier Ltd.

  10. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    NASA Astrophysics Data System (ADS)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms as well as the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection, can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate data search reduction of up to 95%, which results in the system being able to process up to 50 1024×768 pixels images per second with a significantly reduced number of false positives.

  11. Automated face detection for occurrence and occupancy estimation in chimpanzees.

    PubMed

    Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S

    2017-03-01

    Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances. They have impacted on the methods applicable to field surveys and these methods have gained significant momentum over the last decade. Yet, most researchers inspect footage manually and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, for a false alarm rate of 2.8%, for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step for transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to lack of suitable face views can be easily overcome on the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing opposite directions. This will make it possible to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology for processing camera trap footage requires only 2-4% of the time compared to manual analysis and allows site use by chimpanzees to be estimated relatively reliably. © 2017 Wiley Periodicals, Inc.

  12. Hyper-realistic face masks: a new challenge in person identification.

    PubMed

    Sanders, Jet Gabrielle; Ueda, Yoshiyuki; Minemoto, Kazusa; Noyes, Eilidh; Yoshikawa, Sakiko; Jenkins, Rob

    2017-01-01

    We often identify people using face images. This is true in occupational settings such as passport control as well as in everyday social environments. Mapping between images and identities assumes that facial appearance is stable within certain bounds. For example, a person's apparent age, gender and ethnicity change slowly, if at all. It also assumes that deliberate changes beyond these bounds (i.e., disguises) would be easy to spot. Hyper-realistic face masks overturn these assumptions by allowing the wearer to look like an entirely different person. If unnoticed, these masks break the link between facial appearance and personal identity, with clear implications for applied face recognition. However, to date, no one has assessed the realism of these masks, or specified conditions under which they may be accepted as real faces. Herein, we examined incidental detection of unexpected but attended hyper-realistic masks in both photographic and live presentations. Experiment 1 (UK; n = 60) revealed no evidence for overt detection of hyper-realistic masks among real face photos, and little evidence of covert detection. Experiment 2 (Japan; n = 60) extended these findings to different masks, mask-wearers and participant pools. In Experiment 3 (UK and Japan; n = 407), passers-by failed to notice that a live confederate was wearing a hyper-realistic mask and showed limited evidence of covert detection, even at close viewing distance (5 vs. 20 m). Across all of these studies, viewers accepted hyper-realistic masks as real faces. Specific countermeasures will be required if detection rates are to be improved.

  13. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    PubMed

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
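
    A minimal sketch of the general idea of liveness detection from image quality measures: compute a handful of quality features from a single image and feed them to a simple classifier. The paper uses 25 features; the four below (blur-difference MSE/PSNR, gradient energy, histogram entropy), the logistic-regression classifier, and the synthetic "real vs. spoof" data are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LogisticRegression

def quality_features(img):
    """img: 2-D grayscale array with values in [0, 255]."""
    blurred = ndimage.gaussian_filter(img, sigma=1.5)
    mse = float(np.mean((img - blurred) ** 2))            # full-reference vs. blurred copy
    psnr = 10 * np.log10(255.0 ** 2 / (mse + 1e-9))
    gx, gy = np.gradient(img)
    grad_energy = float(np.mean(gx ** 2 + gy ** 2))        # sharpness proxy
    hist, _ = np.histogram(img, bins=256, range=(0, 255), density=True)
    entropy = float(-np.sum(hist[hist > 0] * np.log2(hist[hist > 0])))
    return [mse, psnr, grad_energy, entropy]

rng = np.random.default_rng(0)
real = [quality_features(rng.integers(0, 256, (64, 64)).astype(float)) for _ in range(50)]
fake = [quality_features(ndimage.gaussian_filter(
        rng.integers(0, 256, (64, 64)).astype(float), 2.0)) for _ in range(50)]

X = np.array(real + fake)
y = np.array([1] * 50 + [0] * 50)          # 1 = legitimate, 0 = spoof/fake
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```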

  14. Visual search for faces by race: a cross-race study.

    PubMed

    Sun, Gang; Song, Luping; Bentin, Shlomo; Yang, Yanjie; Zhao, Lun

    2013-08-30

    Using a single averaged face for each race, previous studies indicated that detection of an other-race face among a background of own-race faces was faster than vice versa (Levin, 1996, 2000). However, employing a variable mapping of face pictures, one recent report found preferential detection of own-race over other-race faces (Lipp et al., 2009). Using a well-controlled design and a heterogeneous set of real face images, in the present study we explored the visual search for own- and other-race faces in Chinese and Caucasian participants. Across both groups, the search for a face of one race among other-race faces was serial and self-terminating. In Chinese participants, the search was consistently faster for other-race than own-race faces, irrespective of upright or inverted presentation; however, this search asymmetry was not evident in Caucasian participants. These characteristics suggest that the race of a face is not a basic visual feature and that, in Chinese participants, the faster search for other-race than own-race faces also reflects perceptual factors. Possible mechanisms underlying other-race search effects are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross-camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain high probability of detection and low probability of false alarm for the full body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting an RGB to YIQ transformation and then applying a Subtractive I/Q image Fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-parts (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back into individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-parts positions and angles related to the full-body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced by our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance the face resolution for improved human face recognition performance.
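
    A minimal sketch of the skin sub-patch idea described above: convert RGB to YIQ with the standard NTSC matrix, fuse the chrominance planes by subtracting Q from I (skin tones tend to have high I and low Q), then threshold and clean up with morphological operations. The threshold value and file names are illustrative, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage
import cv2

img = cv2.imread("frame.jpg")                        # BGR, uint8
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(float) / 255.0

# standard NTSC RGB -> YIQ transform
yiq_matrix = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])
yiq = rgb @ yiq_matrix.T
I, Q = yiq[..., 1], yiq[..., 2]

fused = I - Q                                        # subtractive I/Q fusion
mask = fused > 0.1                                   # illustrative skin threshold
mask = ndimage.binary_opening(mask, iterations=2)    # remove speckle
mask = ndimage.binary_closing(mask, iterations=2)    # fill small holes

cv2.imwrite("skin_parts_mask.png", (mask * 255).astype(np.uint8))
```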

  16. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and the presence of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, was warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  17. Tools for Protecting the Privacy of Specific Individuals in Video

    NASA Astrophysics Data System (ADS)

    Chen, Datong; Chang, Yi; Yan, Rong; Yang, Jie

    2007-12-01

    This paper presents a system for protecting the privacy of specific individuals in video recordings. We address the following two problems: automatic people identification with limited labeled data, and human body obscuring with preserved structure and motion information. In order to address the first problem, we propose a new discriminative learning algorithm to improve people identification accuracy using limited training data labeled from the original video and imperfect pairwise constraints labeled from face obscured video data. We employ a robust face detection and tracking algorithm to obscure human faces in the video. Our experiments in a nursing home environment show that the system can obtain a high accuracy of people identification using limited labeled data and noisy pairwise constraints. The study result indicates that human subjects can perform reasonably well in labeling pairwise constraints with the face masked data. For the second problem, we propose a novel method of body obscuring, which removes the appearance information of the people while preserving rich structure and motion information. The proposed approach provides a way to minimize the risk of exposing the identities of the protected people while maximizing the use of the captured data for activity/behavior analysis.

  18. Optical coherence tomography for the structural changes detection in aging skin

    NASA Astrophysics Data System (ADS)

    Cheng, Chih-Ming; Chang, Yu-Fen; Chiang, Hung-Chih; Chang, Chir-Weei

    2018-01-01

    The optical coherence tomography (OCT) technique is an extremely powerful tool for detecting numerous ophthalmological disorders, such as retinal disorders, and can be applied in other fields; many OCT systems have accordingly been developed. For assessment of skin texture, a cross-sectional (B-scan) spectral-domain OCT system is better suited than an en-face one. However, this kind of commercial OCT system is not available. We designed a brand-new probe for a commercial OCT system to evaluate skin texture without permanently modifying the original instrument, which can be restored in 5 minutes. This modification retains the advantages of the commercial instrument: it remains reliable, stable, and safe. Furthermore, the structural changes in aging skin are easily observed by means of our probe, including larger pores, thinning of the dermis, collagen volume loss, vessel atrophy, and flattening of the dermal-epidermal junction. This OCT technique can be used in cosmetic medicine, for example to assess skin texture and to follow up the effects of skin care products.

  19. An ERP study of famous face incongruity detection in middle age.

    PubMed

    Chaby, L; Jemel, B; George, N; Renault, B; Fiori, N

    2001-04-01

    Age-related changes in famous face incongruity detection were examined in middle-aged (mean = 50.6) and young (mean = 24.8) subjects. Behavioral and ERP responses were recorded while subjects, after a presentation of a "prime face" (a famous person with the eyes masked), had to decide whether the following "test face" was completed with its authentic eyes (congruent) or with other eyes (incongruent). The principal effects of advancing age were (1) behavioral difficulties in discriminating between incongruent and congruent faces; (2) a reduced N400 effect due to N400 enhancement for both congruent and incongruent faces; (3) a latency increase of both N400 and P600 components. ERPs to primes (face encoding) were not affected by aging. These results are interpreted in terms of early signs of aging. Copyright 2001 Academic Press.

  20. Optical Security System Based on the Biometrics Using Holographic Storage Technique with a Simple Data Format

    NASA Astrophysics Data System (ADS)

    Jun, An Won

    2006-01-01

    We implement a first practical holographic security system using electrical biometrics that combines optical encryption and digital holographic memory technologies. Optical information for identification includes a picture of the face, a name, and a fingerprint, which have been spatially multiplexed by a random phase mask used as a decryption key. For decryption in our biometric security system, a bit-error-detection method is used that compares the digital bits of the live fingerprint with those of the fingerprint information extracted from the hologram.
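
    A minimal sketch of the bit-error-detection step: compare the bit pattern captured from the live fingerprint with the pattern recovered from the hologram and accept the user when the bit-error rate falls below a tolerance. The tolerance and the example bit strings are illustrative.

```python
import numpy as np

def bit_error_rate(live_bits, stored_bits):
    live = np.asarray(live_bits, dtype=np.uint8)
    stored = np.asarray(stored_bits, dtype=np.uint8)
    return float(np.count_nonzero(live != stored)) / live.size

rng = np.random.default_rng(0)
stored = rng.integers(0, 2, 256)                      # bits recovered from the hologram
live_ok = stored.copy()
live_ok[rng.choice(256, 5, replace=False)] ^= 1       # genuine capture with 5 noisy bits
live_bad = rng.integers(0, 2, 256)                    # impostor fingerprint

for name, live in [("genuine", live_ok), ("impostor", live_bad)]:
    ber = bit_error_rate(live, stored)
    print(f"{name}: BER = {ber:.3f} ->", "accept" if ber < 0.1 else "reject")
```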

  1. An experimental and computational investigation of electrical resistivity imaging for prediction ahead of tunnel boring machines

    NASA Astrophysics Data System (ADS)

    Schaeffer, Kevin P.

    Tunnel boring machines (TBMs) are routinely used for the excavation of tunnels across a range of ground conditions, from hard rock to soft ground. In complex ground conditions and in urban environments, the TBM is susceptible to damage due to uncertainty about what lies ahead of the tunnel face. The research presented here explores the application of electrical resistivity theory in the TBM tunneling environment to detect changing conditions ahead of the machine. Electrical resistivity offers a real-time and continuous imaging solution to increase the resolution of information along the tunnel alignment and may even unveil previously unknown geologic or man-made features ahead of the TBM. The studies presented herein break down the tunneling environment and the electrical system to understand how their fundamental parameters can be isolated and tested, identifying how they influence the ability to predict changes ahead of the tunnel face. A proof-of-concept, scaled experimental model was constructed in order to assess its ability to predict a metal pipe (or rod) ahead of the face as the TBM excavates through saturated sand. The model shows that a prediction of up to three tunnel diameters could be achieved, but the unique presence of the pipe (or rod) could not be concluded with certainty. Full-scale finite element models were developed in order to evaluate the various influences on the ability to detect changing conditions ahead of the face. Results show that TBM/tunnel geometry, TBM type, and electrode geometry can drastically influence prediction ahead of the face by tens of meters. In certain conditions (i.e., small TBM diameter, low cover depth, large material contrasts), changes can be detected over 100 meters in front of the TBM. Various electrode arrays were considered; the results show that, in order to better detect more localized features (e.g., a boulder, lens, or pipe), using individual cutting tools as electrodes is highly advantageous, as it increases spatial resolution and current density close to the cutterhead.

  2. Low-complexity object detection with deep convolutional neural network for embedded systems

    NASA Astrophysics Data System (ADS)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well known that building an embedded system for CNN-based object detection is more challenging than for problems like image classification, due to the computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by YOLO, one of the fastest frameworks. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional; thus, it is able to take input images of any size. We pick face detection as a use case and evaluate the proposed model on the FDDB and Widerface datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to the state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as the floating-point model, and it achieves 20× faster inference than the floating-point model. Thus, the proposed method is promising for embedded implementations.
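
    A minimal sketch (not the paper's exact architecture) of a fully convolutional, YOLO-style detector in tf.keras: every layer is convolutional, so the network accepts images of any size and emits a grid of predictions, one 5-vector (objectness plus box coordinates) per output cell. Filter counts and depths are illustrative.

```python
import tensorflow as tf

def build_fcn_detector(num_outputs_per_cell=5):
    inputs = tf.keras.Input(shape=(None, None, 3))        # any input resolution
    x = inputs
    for filters in (16, 32, 64, 128):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling2D(2)(x)            # downsample by 2 each stage
    # 1x1 convolution acts as the per-cell regression/objectness head
    outputs = tf.keras.layers.Conv2D(num_outputs_per_cell, 1, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

model = build_fcn_detector()
for h, w in [(256, 256), (384, 512)]:
    pred = model(tf.zeros((1, h, w, 3)))
    print(f"input {h}x{w} -> prediction grid {pred.shape[1]}x{pred.shape[2]}")
# four pooling stages give one prediction cell per 16x16 pixel region.
```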

  3. Unsupervised real-time speaker identification for daily movies

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kuo, C.-C. Jay

    2002-07-01

    The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which indicates a promising future for the proposed audiovisual unsupervised speaker identification system.

  4. A video-based real-time adaptive vehicle-counting system for urban roads.

    PubMed

    Liu, Fei; Zeng, Zhiyuan; Jiang, Rong

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
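
    A minimal sketch of virtual-line vehicle counting with an adaptive background model: OpenCV's MOG2 background subtractor stands in for the paper's own background-update algorithm, and a blob centroid crossing a horizontal detection line increments the count. The video file name, line position, and area threshold are illustrative, and no track association is done, so double counts are possible.

```python
import cv2

cap = cv2.VideoCapture("road.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
count, line_y, prev_centroids = 0, 300, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                        # adaptive background update
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < 800:                      # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        centroids.append((x + w // 2, y + h // 2))
    # count a vehicle when a nearby blob centroid moves across the detection line
    for cx, cy in centroids:
        for px, py in prev_centroids:
            if abs(cx - px) < 40 and py < line_y <= cy:
                count += 1
    prev_centroids = centroids

print("vehicles counted:", count)
```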

  5. A video-based real-time adaptive vehicle-counting system for urban roads

    PubMed Central

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. PMID:29135984

  6. Distant touch hydrodynamic imaging with an artificial lateral line.

    PubMed

    Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang

    2006-12-12

    Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.

  7. Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.

    PubMed

    Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R

    2014-01-01

    Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.

  8. Tracking and Counting Motion for Monitoring Food Intake Based-On Depth Sensor and UDOO Board: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Kassim, Muhammad Fuad bin; Norzali Haji Mohd, Mohd

    2017-08-01

    Technology is all about helping people, which has created a new opportunity for them to take serious action in managing their health care. Obesity continues to be a serious public health concern in Malaysia and continues to rise; nearly half of Malaysian people are overweight. Most dietary approaches do not track and detect the right calorie intake for weight loss, and currently used tools such as food diaries require users to manually record and track food calories, making them difficult for daily use. We will develop a new tool that counts food intake bites by monitoring the hand gestures and face/jaw motion of caloric intake. Bite counting has been shown to support successful weight loss simply by monitoring the bites taken during eating. The device used is the Kinect for Xbox One, whose depth camera detects the motion of a person's hand and face during food intake. Previous studies show that most bite-counting devices are of the worn type; the recent trend is toward non-wearable devices because worn devices are inconvenient and have a high false-alarm ratio. The proposed system gets data from the Kinect, which monitors the hand and face gestures of the user while eating. The gesture data are then sent to the microcontroller board to recognize and start counting the bites taken by the user. The system recognizes the patterns of bites taken by the user by following the algorithm for the basic eating type, either by hand or with chopsticks. This system can help people who are trying to follow a proper way to reduce overweight or eating disorders by monitoring their meal intake and controlling their eating rate.

  9. Night vision: requirements and possible roadmap for FIR and NIR systems

    NASA Astrophysics Data System (ADS)

    Källhammer, Jan-Erik

    2006-04-01

    A night vision system must increase visibility in situations where only low beam headlights can be used today. As pedestrians and animals have the highest risk increase in night-time traffic due to darkness, the ability to detect those objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared systems have been shown to be superior to near infrared systems in terms of pedestrian detection distance. Near infrared images were rated to have significantly higher visual clutter compared with far infrared images, and visual clutter has been shown to correlate with a reduction in the detection distance of pedestrians. Far infrared images are perceived as being more unusual and therefore more difficult to interpret, although this image appearance is likely related to the lower visual clutter. However, the main issue when comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low beam conditions, especially of pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main issue will be whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection. The first night vision introductions did not generate the sales volumes initially expected. A renewed interest in night vision systems is, however, to be expected after the release of night vision systems by BMW, Mercedes and Honda, the latter with automatic pedestrian detection.

  10. Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes

    ERIC Educational Resources Information Center

    Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike

    2010-01-01

    Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…

  11. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    PubMed

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly is theoretically possible from facial photographs, which can lessen the prevalence and increase the cure probability. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle and then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks, which were the potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal facing views to improve performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half were diagnosed as acromegaly by the growth hormone suppression test. The best result of our proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
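
    The pre-processing step described above (an OpenCV face bounding box, then cropping and resizing to uniform pixel dimensions) can be sketched as follows. The Haar cascade file ships with OpenCV, but the input file name and the 128x128 target size are assumptions; the study's landmark extraction and classifier training are not shown.

        import cv2

        cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        img = cv2.imread("photo.jpg")                    # placeholder clinical photograph
        if img is None:
            raise SystemExit("photo.jpg not found; supply a real photograph")

        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        crops = []
        for (x, y, w, h) in faces:
            # Crop the detected bounding box and resize to a uniform classifier input.
            crops.append(cv2.resize(img[y:y + h, x:x + w], (128, 128)))
        print(f"detected {len(crops)} face(s)")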

  12. Anti Theft Mechanism Through Face recognition Using FPGA

    NASA Astrophysics Data System (ADS)

    Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya

    2012-11-01

    The use of a vehicle is a must for everyone. At the same time, protection from theft is also very important. Prevention of vehicle theft can be done remotely by an authorized person. The location of the car can be found by using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, and a comparison is done with the preloaded faces for authorization. The vehicle will start only when the authorized person's face is identified. In the event of a theft attempt or an unauthorized person's attempt to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. The authorized person can then alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed using VHDL over a SPARTAN 3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in systems wherever face recognition or detection is needed, such as airports, international borders, banking applications, etc.
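
    The record implements PCA in MATLAB and does not give its matching rule, so the following is only an eigenface-style sketch, written in Python for consistency with the other examples here: enrolled faces are projected onto the top principal components and a probe is matched to the nearest enrolled face in that subspace. The random arrays stand in for the preloaded authorized faces.

        import numpy as np

        def pca_fit(faces, n_components=8):
            # SVD of the mean-centered data gives the principal components (eigenfaces).
            mean = faces.mean(axis=0)
            _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
            return mean, vt[:n_components]

        def project(face, mean, components):
            return components @ (face - mean)

        rng = np.random.default_rng(0)
        enrolled = rng.random((10, 64 * 64))             # placeholder for preloaded authorized faces
        probe = enrolled[3] + 0.01 * rng.random(64 * 64) # noisy copy of enrolled face 3

        mean, comps = pca_fit(enrolled)
        weights = np.array([project(f, mean, comps) for f in enrolled])
        dist = np.linalg.norm(weights - project(probe, mean, comps), axis=1)
        print("best match:", int(dist.argmin()), "distance:", float(dist.min()))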

  13. Current development of UAV sense and avoid system

    NASA Astrophysics Data System (ADS)

    Zhahir, A.; Razali, A.; Mohd Ajir, M. R.

    2016-10-01

    As unmanned aerial vehicles (UAVs) are now gaining high interest from the civil and commercial market, the automatic sense and avoid (SAA) system is currently one of the essential features in the research spotlight of UAVs. Several sensor types employed in current SAA research, and sensor fusion technology that offers a great opportunity to improve detection and tracking systems, are presented here. The purpose of this paper is to provide an overview of SAA system development in general, as well as the current challenges facing UAV researchers and designers.

  14. Neural Correlates of Face and Object Perception in an Awake Chimpanzee (Pan Troglodytes) Examined by Scalp-Surface Event-Related Potentials

    PubMed Central

    Fukushima, Hirokata; Hirata, Satoshi; Ueno, Ari; Matsuda, Goh; Fuwa, Kohki; Sugama, Keiko; Kusunoki, Kiyo; Hirai, Masahiro; Hiraki, Kazuo; Tomonaga, Masaki; Hasegawa, Toshikazu

    2010-01-01

    Background The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. Methodology/Principal Findings In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment. Conclusions/Significance Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species. PMID:20967284

  15. Individual differences in bodily freezing predict emotional biases in decision making

    PubMed Central

    Ly, Verena; Huys, Quentin J. M.; Stins, John F.; Roelofs, Karin; Cools, Roshan

    2014-01-01

    Instrumental decision making has long been argued to be vulnerable to emotional responses. Literature on multiple decision making systems suggests that this emotional biasing might reflect effects of a system that regulates innately specified, evolutionarily preprogrammed responses. To test this hypothesis directly, we investigated whether effects of emotional faces on instrumental action can be predicted by effects of emotional faces on bodily freezing, an innately specified response to aversive relative to appetitive cues. We tested 43 women using a novel emotional decision making task combined with posturography, which involves a force platform to detect small oscillations of the body to accurately quantify postural control in upright stance. On the platform, participants learned whole body approach-avoidance actions based on monetary feedback, while being primed by emotional faces (angry/happy). Our data evidence an emotional biasing of instrumental action. Thus, angry relative to happy faces slowed instrumental approach relative to avoidance responses. Critically, individual differences in this emotional biasing effect were predicted by individual differences in bodily freezing. This result suggests that emotional biasing of instrumental action involves interaction with a system that controls innately specified responses. Furthermore, our findings help bridge (animal and human) decision making and emotion research to advance our mechanistic understanding of decision making anomalies in daily encounters as well as in a wide range of psychopathology. PMID:25071491

  16. Face mask sampling for the detection of Mycobacterium tuberculosis in expelled aerosols.

    PubMed

    Williams, Caroline M L; Cheah, Eddy S G; Malkin, Joanne; Patel, Hemu; Otu, Jacob; Mlaga, Kodjovi; Sutherland, Jayne S; Antonio, Martin; Perera, Nelun; Woltmann, Gerrit; Haldar, Pranabashis; Garton, Natalie J; Barer, Michael R

    2014-01-01

    Although tuberculosis is transmitted by the airborne route, direct information on the natural output of bacilli into air by source cases is very limited. We sought to address this through sampling of expelled aerosols in face masks that were subsequently analyzed for mycobacterial contamination. In series 1, 17 smear microscopy positive patients wore standard surgical face masks once or twice for periods between 10 minutes and 5 hours; mycobacterial contamination was detected using a bacteriophage assay. In series 2, 19 patients with suspected tuberculosis were studied in Leicester UK and 10 patients with at least one positive smear were studied in The Gambia. These subjects wore one FFP30 mask modified to contain a gelatin filter for one hour; this was subsequently analyzed by the Xpert MTB/RIF system. In series 1, the bacteriophage assay detected live mycobacteria in 11/17 patients with wearing times between 10 and 120 minutes. Variation was seen in mask positivity and the level of contamination detected in multiple samples from the same patient. Two patients had non-tuberculous mycobacterial infections. In series 2, 13/20 patients with pulmonary tuberculosis produced positive masks and 0/9 patients with extrapulmonary or non-tuberculous diagnoses were mask positive. Overall, 65% of patients with confirmed pulmonary mycobacterial infection gave positive masks and this included 3/6 patients who received diagnostic bronchoalveolar lavages. Mask sampling provides a simple means of assessing mycobacterial output in non-sputum expectorant. The approach shows potential for application to the study of airborne transmission and to diagnosis.

  17. Face Mask Sampling for the Detection of Mycobacterium tuberculosis in Expelled Aerosols

    PubMed Central

    Malkin, Joanne; Patel, Hemu; Otu, Jacob; Mlaga, Kodjovi; Sutherland, Jayne S.; Antonio, Martin; Perera, Nelun; Woltmann, Gerrit; Haldar, Pranabashis; Garton, Natalie J.; Barer, Michael R.

    2014-01-01

    Background Although tuberculosis is transmitted by the airborne route, direct information on the natural output of bacilli into air by source cases is very limited. We sought to address this through sampling of expelled aerosols in face masks that were subsequently analyzed for mycobacterial contamination. Methods In series 1, 17 smear microscopy positive patients wore standard surgical face masks once or twice for periods between 10 minutes and 5 hours; mycobacterial contamination was detected using a bacteriophage assay. In series 2, 19 patients with suspected tuberculosis were studied in Leicester UK and 10 patients with at least one positive smear were studied in The Gambia. These subjects wore one FFP30 mask modified to contain a gelatin filter for one hour; this was subsequently analyzed by the Xpert MTB/RIF system. Results In series 1, the bacteriophage assay detected live mycobacteria in 11/17 patients with wearing times between 10 and 120 minutes. Variation was seen in mask positivity and the level of contamination detected in multiple samples from the same patient. Two patients had non-tuberculous mycobacterial infections. In series 2, 13/20 patients with pulmonary tuberculosis produced positive masks and 0/9 patients with extrapulmonary or non-tuberculous diagnoses were mask positive. Overall, 65% of patients with confirmed pulmonary mycobacterial infection gave positive masks and this included 3/6 patients who received diagnostic bronchoalveolar lavages. Conclusion Mask sampling provides a simple means of assessing mycobacterial output in non-sputum expectorant. The approach shows potential for application to the study of airborne transmission and to diagnosis. PMID:25122163

  18. A face in a (temporal) crowd.

    PubMed

    Hacker, Catrina M; Meschke, Emily X; Biederman, Irving

    2018-03-20

    Familiar objects, specified by name, can be identified with high accuracy when embedded in a rapidly presented sequence of images at rates exceeding 10 images/s. Not only can target objects be detected at such brief presentation rates, they can also be detected under high uncertainty, where their classification is defined negatively, e.g., "Not a Tool." The identification of a familiar speaker's voice declines precipitously when uncertainty is increased from one to a mere handful of possible speakers. Is the limitation imposed by uncertainty, i.e., the number of possible individuals, a general characteristic of processes for person individuation such that the identifiability of a familiar face would undergo a similar decline with uncertainty? Specifically, could the presence of an unnamed celebrity, thus any celebrity, be detected when presented in a rapid sequence of unfamiliar faces? If so, could the celebrity be identified? Despite the markedly greater physical similarity of faces compared to objects that are, say, not tools, the presence of a celebrity could be detected with moderately high accuracy (∼75%) at rates exceeding 7 faces/s. False alarms were exceedingly rare as almost all the errors were misses. Detection accuracy by moderate congenital prosopagnosics was lower than controls, but still well above chance. Given the detection of the presence of a celebrity, all subjects were almost always able to identify that celebrity, providing no role for a covert familiarity signal outside of awareness. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Fuzzy logic and optical correlation-based face recognition method for patient monitoring application in home video surveillance

    NASA Astrophysics Data System (ADS)

    Elbouz, Marwa; Alfalou, Ayman; Brosseau, Christian

    2011-06-01

    Home automation is being implemented in more and more domiciles of the elderly and disabled in order to maintain their independence and safety. For that purpose, we propose and validate a surveillance video system, which detects various posture-based events. One of the novel points of this system is the use of adapted VanderLugt correlator (VLC) and joint transform correlator (JTC) techniques to make decisions on the identity of a patient and his three-dimensional (3-D) position in order to overcome the problem of a crowded environment. We propose a fuzzy logic technique to reach decisions on the subject's behavior. Our system is focused on the goals of accuracy, convenience, and cost, and in addition does not require any devices attached to the subject. The system permits one to study and model subject responses to behavioral change intervention, because several levels of alarm can be incorporated according to the different situations considered. Our algorithm performs a fast 3-D recovery of the subject's head position by locating the eyes within the face image and involves model-based prediction and optical correlation techniques to guide the tracking procedure. The object detection is based on the (hue, saturation, value) color space. The system also involves an adapted fuzzy logic control algorithm to make a decision based on the information given to the system. Furthermore, the principles described here are applicable to a very wide range of situations and robust enough to be implementable in ongoing experiments.

  20. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, using the helical CT scanner for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in the computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes "encryption of file" and "success in login" effective; as a result, patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation, and our telemedicine network system can increase diagnostic speed, diagnostic accuracy and the security of medical information.

  1. An Orientation Sensor-Based Head Tracking System for Driver Behaviour Monitoring

    PubMed Central

    Görne, Lorenz; Yuen, Iek-Man; Cao, Dongpu; Sullman, Mark; Auger, Daniel; Lv, Chen; Wang, Huaji; Matthias, Rebecca; Skrypchuk, Lee; Mouzakitis, Alexandros

    2017-01-01

    Although at present legislation does not allow drivers in a Level 3 autonomous vehicle to engage in a secondary task, there may become a time when it does. Monitoring the behaviour of drivers engaging in various non-driving activities (NDAs) is crucial to decide how well the driver will be able to take over control of the vehicle. One limitation of the commonly used face-based head tracking system, using cameras, is that sufficient features of the face must be visible, which limits the detectable angle of head movement and thereby measurable NDAs, unless multiple cameras are used. This paper proposes a novel orientation sensor based head tracking system that includes twin devices, one of which measures the movement of the vehicle while the other measures the absolute movement of the head. Measurement error in the shaking and nodding axes were less than 0.4°, while error in the rolling axis was less than 2°. Comparison with a camera-based system, through in-house tests and on-road tests, showed that the main advantage of the proposed system is the ability to detect angles larger than 20° in the shaking and nodding axes. Finally, a case study demonstrated that the measurement of the shaking and nodding angles, produced from the proposed system, can effectively characterise the drivers’ behaviour while engaged in the NDAs of chatting to a passenger and playing on a smartphone. PMID:29165331

  2. Learning to Detect Vandalism in Social Content Systems: A Study on Wikipedia

    NASA Astrophysics Data System (ADS)

    Javanmardi, Sara; McDonald, David W.; Caruana, Rich; Forouzan, Sholeh; Lopes, Cristina V.

    A challenge facing user generated content systems is vandalism, i.e. edits that damage content quality. The high visibility and easy access of social networks make them popular targets for vandals. Detecting and removing vandalism is critical for these user generated content systems. Because vandalism can take many forms, there are many different kinds of features that are potentially useful for detecting it. The complex nature of vandalism, and the large number of potential features, make vandalism detection difficult and time consuming for human editors. Machine learning techniques hold promise for developing accurate, tunable, and maintainable models that can be incorporated into vandalism detection tools. We describe a method for training classifiers for vandalism detection that yields classifiers that are more accurate on the PAN 2010 corpus than others previously developed. Because of the high turnaround in social network systems, it is important for vandalism detection tools to run in real time. To this end, we use feature selection to find the minimal set of features consistent with high accuracy. In addition, because some features are more costly to compute than others, we use cost-sensitive feature selection to reduce the total computational cost of executing our models. In addition to the features previously used for spam detection, we introduce new features based on user action histories. The user history features contribute significantly to classifier performance. The approach we use is general and can easily be applied to other user generated content systems.
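
    The cost-sensitive feature selection mentioned above can be illustrated with a generic greedy forward search that weighs each feature's cross-validated accuracy gain by an assumed computation cost. This is a sketch of the general idea, not the procedure or features used in the study; the synthetic data, the classifier, and the per-feature costs are placeholders.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=500, n_features=8, n_informative=4, random_state=0)
        costs = np.array([1, 1, 2, 2, 5, 5, 10, 10], dtype=float)  # hypothetical per-feature costs

        def score(feature_idx):
            # Cross-validated accuracy of a simple classifier on the chosen features.
            if not feature_idx:
                return 0.0
            clf = LogisticRegression(max_iter=1000)
            return cross_val_score(clf, X[:, feature_idx], y, cv=3).mean()

        selected, remaining = [], list(range(X.shape[1]))
        while remaining:
            base = score(selected)
            gains = [(score(selected + [f]) - base) / costs[f] for f in remaining]
            best = int(np.argmax(gains))
            if gains[best] <= 0:            # stop when no feature is worth its cost
                break
            selected.append(remaining.pop(best))

        print("selected features:", selected)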

  3. Assessing the performance of a motion tracking system based on optical joint transform correlation

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.

    2015-08-01

    We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with an adapted pre-processing treatment of the input plane and a post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC)-based system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking performance for a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.

  4. Brain Signals of Face Processing as Revealed by Event-Related Potentials

    PubMed Central

    Olivares, Ela I.; Iglesias, Jaime; Saavedra, Cristina; Trujillo-Barreto, Nelson J.; Valdés-Sosa, Mitchell

    2015-01-01

    We analyze the functional significance of different event-related potentials (ERPs) as electrophysiological indices of face perception and face recognition, according to cognitive and neurofunctional models of face processing. Initially, the processing of faces seems to be supported by early extrastriate occipital cortices and revealed by modulations of the occipital P1. This early response is thought to reflect the detection of certain primary structural aspects indicating the presence grosso modo of a face within the visual field. The posterior-temporal N170 is more sensitive to the detection of faces as complex-structured stimuli and, therefore, to the presence of its distinctive organizational characteristics prior to within-category identification. In turn, the relatively late and probably more rostrally generated N250r and N400-like responses might respectively indicate processes of access and retrieval of face-related information, which is stored in long-term memory (LTM). New methods of analysis of electrophysiological and neuroanatomical data, namely, dynamic causal modeling, single-trial and time-frequency analyses, are highly recommended to advance in the knowledge of those brain mechanisms concerning face processing. PMID:26160999

  5. Detecting gear tooth fracture in a high contact ratio face gear mesh

    NASA Technical Reports Server (NTRS)

    Zakrajsek, James J.; Handschuh, Robert F.; Lewicki, David G.; Decker, Harry J.

    1995-01-01

    This paper summarized the results of a study in which three different vibration diagnostic methods were used to detect gear tooth fracture in a high contact ratio face gear mesh. The NASA spiral bevel gear fatigue test rig was used to produce unseeded fault, natural failures of four face gear specimens. During the fatigue tests, which were run to determine load capacity and primary failure mechanisms for face gears, vibration signals were monitored and recorded for gear diagnostic purposes. Gear tooth bending fatigue and surface pitting were the primary failure modes found in the tests. The damage ranged from partial tooth fracture on a single tooth in one test to heavy wear, severe pitting, and complete tooth fracture of several teeth on another test. Three gear fault detection techniques, FM4, NA4*, and NB4, were applied to the experimental data. These methods use the signal average in both the time and frequency domain. Method NA4* was able to conclusively detect the gear tooth fractures in three out of the four fatigue tests, along with gear tooth surface pitting and heavy wear. For multiple tooth fractures, all of the methods gave a clear indication of the damage. It was also found that due to the high contact ratio of the face gear mesh, single tooth fractures did not significantly affect the vibration signal, making this type of failure difficult to detect.
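
    Of the three detection metrics named above, FM4 is the simplest to illustrate: in the gear-diagnostics literature it is commonly computed as the normalized kurtosis of the "difference signal", i.e. the time-synchronous average with the regular gear-mesh components removed. The sketch below shows that computation on synthetic data; it is a generic illustration, not the processing chain used in the NASA study, and NA4* and NB4 are omitted.

        import numpy as np

        def fm4(difference_signal):
            # Normalized fourth moment (kurtosis) of the zero-mean difference signal.
            d = np.asarray(difference_signal, dtype=float)
            d = d - d.mean()
            return d.size * np.sum(d ** 4) / np.sum(d ** 2) ** 2

        # A localized defect adds an impulsive spike, which raises FM4 above the
        # Gaussian baseline of about 3.
        healthy = np.random.default_rng(1).normal(size=1024)
        damaged = healthy.copy()
        damaged[500:505] += 8.0
        print(f"FM4 healthy: {fm4(healthy):.2f}  FM4 damaged: {fm4(damaged):.2f}")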

  6. Effects of Wearing NBC (Nuclear, Biological and Chemical) Protective Clothing in the Heat on Detection of Visual Signals

    DTIC Science & Technology

    1985-02-01

    …agents, as well as nuclear weaponry. In the face of such threats, the United States Army has developed equipment and clothing systems designed to… (Recoverable record metadata: Report No. T7185, U.S. Army Research Institute, Technical Report: Effects of Wearing NBC Protective Clothing in the Heat on Detection of Visual Signals.)

  7. Atypical face shape and genomic structural variants in epilepsy

    PubMed Central

    Chinthapalli, Krishna; Bartolini, Emanuele; Novy, Jan; Suttie, Michael; Marini, Carla; Falchi, Melania; Fox, Zoe; Clayton, Lisa M. S.; Sander, Josemir W.; Guerrini, Renzo; Depondt, Chantal; Hennekam, Raoul; Hammond, Peter

    2012-01-01

    Many pathogenic structural variants of the human genome are known to cause facial dysmorphism. During the past decade, pathogenic structural variants have also been found to be an important class of genetic risk factor for epilepsy. In other fields, face shape has been assessed objectively using 3D stereophotogrammetry and dense surface models. We hypothesized that computer-based analysis of 3D face images would detect subtle facial abnormality in people with epilepsy who carry pathogenic structural variants as determined by chromosome microarray. In 118 children and adults attending three European epilepsy clinics, we used an objective measure called Face Shape Difference to show that those with pathogenic structural variants have a significantly more atypical face shape than those without such variants. This is true when analysing the whole face, or the periorbital region or the perinasal region alone. We then tested the predictive accuracy of our measure in a second group of 63 patients. Using a minimum threshold to detect face shape abnormalities with pathogenic structural variants, we found high sensitivity (4/5, 80% for whole face; 3/5, 60% for periorbital and perinasal regions) and specificity (45/58, 78% for whole face and perinasal regions; 40/58, 69% for periorbital region). We show that the results do not seem to be affected by facial injury, facial expression, intellectual disability, drug history or demographic differences. Finally, we use bioinformatics tools to explore relationships between facial shape and gene expression within the developing forebrain. Stereophotogrammetry and dense surface models are powerful, objective, non-contact methods of detecting relevant face shape abnormalities. We demonstrate that they are useful in identifying atypical face shape in adults or children with structural variants, and they may give insights into the molecular genetics of facial development. PMID:22975390

  8. Facelock: familiarity-based graphical authentication.

    PubMed

    Jenkins, Rob; McLachlan, Jane L; Renaud, Karen

    2014-01-01

    Authentication codes such as passwords and PIN numbers are widely used to control access to resources. One major drawback of these codes is that they are difficult to remember. Account holders are often faced with a choice between forgetting a code, which can be inconvenient, or writing it down, which compromises security. In two studies, we test a new knowledge-based authentication method that does not impose memory load on the user. Psychological research on face recognition has revealed an important distinction between familiar and unfamiliar face perception: When a face is familiar to the observer, it can be identified across a wide range of images. However, when the face is unfamiliar, generalisation across images is poor. This contrast can be used as the basis for a personalised 'facelock', in which authentication succeeds or fails based on image-invariant recognition of faces that are familiar to the account holder. In Study 1, account holders authenticated easily by detecting familiar targets among other faces (97.5% success rate), even after a one-year delay (86.1% success rate). Zero-acquaintance attackers were reduced to guessing (<1% success rate). Even personal attackers who knew the account holder well were rarely able to authenticate (6.6% success rate). In Study 2, we found that shoulder-surfing attacks by strangers could be defeated by presenting different photos of the same target faces in observed and attacked grids (1.9% success rate). Our findings suggest that the contrast between familiar and unfamiliar face recognition may be useful for developers of graphical authentication systems.

  9. A novel CUSUM-based approach for event detection in smart metering

    NASA Astrophysics Data System (ADS)

    Zhu, Zhicheng; Zhang, Shuai; Wei, Zhiqiang; Yin, Bo; Huang, Xianqing

    2018-03-01

    Non-intrusive load monitoring (NILM) plays a significant role in raising consumer awareness of household electricity use to reduce overall energy consumption in society. With regard to monitoring low-power loads, many researchers have introduced CUSUM into NILM systems, since the traditional event detection method is not as effective as expected. Because the original CUSUM faces limitations when the shift is small and below the threshold, we improve the test statistic so that the permissible deviation gradually rises as the data size increases. This paper proposes a novel event detection method and corresponding criterion that could be used in NILM systems to recognize transient states and to help the labelling task. Its performance has been tested in a real scenario where eight different appliances are connected to the main line of electric power.
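
    For readers unfamiliar with the base algorithm, the sketch below shows a plain two-sided CUSUM detector applied to a synthetic power trace with two step changes (an appliance switching on, then off). It illustrates the standard statistic only; the paper's modified statistic, in which the permissible deviation grows with the data size, is not reproduced here, and the drift and threshold values are placeholders.

        import numpy as np

        def cusum_events(signal, drift=5.0, threshold=50.0):
            pos, neg, events = 0.0, 0.0, []
            baseline = signal[0]
            for i, x in enumerate(signal):
                pos = max(0.0, pos + (x - baseline) - drift)   # upward shift statistic
                neg = max(0.0, neg - (x - baseline) - drift)   # downward shift statistic
                if pos > threshold or neg > threshold:
                    events.append(i)                           # change point detected
                    baseline, pos, neg = x, 0.0, 0.0           # restart around the new level
            return events

        power = np.concatenate([np.full(100, 60.0), np.full(100, 120.0), np.full(100, 65.0)])
        power += np.random.default_rng(0).normal(0, 2, power.size)
        print("detected events at samples:", cusum_events(power))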

  10. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2013-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual…

  11. Intersensory Redundancy Hinders Face Discrimination in Preschool Children: Evidence for Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel

    2014-01-01

    Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…

  12. Arrester Resistive Current Measuring System Based on Heterogeneous Network

    NASA Astrophysics Data System (ADS)

    Zhang, Yun Hua; Li, Zai Lin; Yuan, Feng; Hou Pan, Feng; Guo, Zhan Nan; Han, Yue

    2018-03-01

    Metal Oxide Arresters (MOAs) suffer from aging and poor insulation due to long-term impulse voltage and environmental impact, and the value and variation tendency of the resistive current can reflect the health condition of an MOA. Common wired MOA detection needs long cables, which are complicated to operate, while wireless measurement methods face the problems of poor data synchronization and instability. Therefore, a novel synchronous measurement system for arrester resistive current based on a heterogeneous network is proposed, which simplifies the calculation process and improves the synchronization, accuracy and stability of the measuring system. This system combines a LoRa wireless network, a high-speed wireless personal area network and process-layer communication, and realizes the detection of the arrester's working condition. Field test data show that the system has the characteristics of high accuracy, strong anti-interference ability and good synchronization, which play an important role in ensuring the stable operation of the power grid.

  13. SENSITIVITY AND SPECIFICITY OF DETECTING POLYPOIDAL CHOROIDAL VASCULOPATHY WITH EN FACE OPTICAL COHERENCE TOMOGRAPHY AND OPTICAL COHERENCE TOMOGRAPHY ANGIOGRAPHY.

    PubMed

    de Carlo, Talisa E; Kokame, Gregg T; Kaneko, Kyle N; Lian, Rebecca; Lai, James C; Wee, Raymond

    2018-03-20

    To determine the sensitivity and specificity of polypoidal choroidal vasculopathy (PCV) diagnosis with structural en face optical coherence tomography (OCT) and OCT angiography (OCTA), we performed a retrospective review of the medical records of eyes diagnosed with PCV by indocyanine green angiography, with review of diagnostic testing with structural en face OCT and OCTA by a trained reader. Structural en face OCT, cross-sectional OCT angiograms alone, and OCTA in its entirety were reviewed blinded to the findings of indocyanine green angiography and to each other to determine whether they could demonstrate the PCV complex. Sensitivity and specificity of PCV diagnosis were determined for each imaging technique using indocyanine green angiography as the ground truth. Sensitivity and specificity of structural en face OCT were 30.0% and 85.7%, of OCT angiograms alone were 26.8% and 96.8%, and of the entire OCTA were 43.9% and 87.1%, respectively. Sensitivity and specificity were improved for OCT angiograms and OCTA when looking at images taken within 1 month of PCV diagnosis. Sensitivity of detecting PCV was low using structural en face OCT and OCTA, but specificity was high. Indocyanine green angiography remains the gold standard for PCV detection.

  14. Long-Term Exposure to American and European Movies and Television Series Facilitates Caucasian Face Perception in Young Chinese Watchers.

    PubMed

    Wang, Yamin; Zhou, Lu

    2016-10-01

    Most young Chinese people now learn about Caucasian individuals via media, especially American and European movies and television series (AEMT). The current study aimed to explore whether long-term exposure to AEMT facilitates Caucasian face perception in young Chinese watchers. Before the experiment, we created Chinese, Caucasian, and generic average faces (generic average face was created from both Chinese and Caucasian faces) and tested participants' ability to identify them. In the experiment, we asked AEMT watchers and Chinese movie and television series (CMT) watchers to complete a facial norm detection task. This task was developed recently to detect norms used in facial perception. The results indicated that AEMT watchers coded Caucasian faces relative to a Caucasian face norm better than they did to a generic face norm, whereas no such difference was found among CMT watchers. All watchers coded Chinese faces by referencing a Chinese norm better than they did relative to a generic norm. The results suggested that long-term exposure to AEMT has the same effect as daily other-race face contact in shaping facial perception. © The Author(s) 2016.

  15. Global Binary Continuity for Color Face Detection With Complex Background

    NASA Astrophysics Data System (ADS)

    Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.

    2017-08-01

    In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, specifically HSV and YCgCr. The color-segmented image is filled uniformly with a single color (binary) and then all unwanted discontinuous lines are removed to get the final image. Experimental results on the Caltech database show that the proposed model is able to accomplish far better segmentation for faces of varying orientations, skin colors and background environments.
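
    A rough sketch of the two-color-space segmentation idea follows. The paper pairs HSV with YCgCr; since OpenCV does not provide a YCgCr conversion, this sketch substitutes the closely related YCrCb space, and the skin-tone thresholds are commonly quoted placeholder ranges rather than the paper's values. The closing/opening step stands in for the removal of discontinuities described above.

        import cv2
        import numpy as np

        img = cv2.imread("scene.jpg")                    # placeholder file name
        if img is None:
            img = np.full((240, 320, 3), (120, 140, 190), np.uint8)  # fallback synthetic frame

        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

        mask_hsv = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
        mask_ycrcb = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
        mask = cv2.bitwise_and(mask_hsv, mask_ycrcb)     # keep pixels skin-like in both spaces

        # Fill segmented regions uniformly and remove small discontinuities.
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        cv2.imwrite("face_candidates.png", mask)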

  16. Neural Mechanism for Mirrored Self-face Recognition.

    PubMed

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-09-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. © The Author 2014. Published by Oxford University Press.

  17. Neural Mechanism for Mirrored Self-face Recognition

    PubMed Central

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-01-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a “virtual mirror” system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. PMID:24770712

  18. Info-gap theory and robust design of surveillance for invasive species: the case study of Barrow Island.

    PubMed

    Davidovitch, Lior; Stoklosa, Richard; Majer, Jonathan; Nietrzeba, Alex; Whittle, Peter; Mengersen, Kerrie; Ben-Haim, Yakov

    2009-06-01

    Surveillance for invasive non-indigenous species (NIS) is an integral part of a quarantine system. Estimating the efficiency of a surveillance strategy relies on many uncertain parameters estimated by experts, such as the efficiency of its components in the face of the specific NIS, the ability of the NIS to inhabit different environments, and so on. Due to the importance of detecting an invasive NIS within a critical period of time, it is crucial that these uncertainties be accounted for in the design of the surveillance system. We formulate a detection model that takes into account, in addition to structured sampling for incursive NIS, incidental detection by untrained workers. We use info-gap theory for satisficing (not minimizing) the probability of detection, while at the same time maximizing the robustness to uncertainty. We demonstrate the trade-off between robustness to uncertainty and an increase in the required probability of detection. An empirical example based on the detection of Pheidole megacephala on Barrow Island demonstrates the use of info-gap analysis to select a surveillance strategy.

  19. Vision-based in-line fabric defect detection using yarn-specific shape features

    NASA Astrophysics Data System (ADS)

    Schneider, Dorian; Aach, Til

    2012-01-01

    We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Where state-of-the-art detection algorithms apply texture analysis methods to operate on low-resolution (~200 ppi) image data, we describe here a process flow to segment single yarns in high-resolution (~1000 ppi) textile images. Four yarn shape features are extracted, allowing precise detection and measurement of defects. The degree of precision reached allows a classification of detected defects according to their nature, providing an innovation in the field of automatic fabric flaw detection. The design has been carried out to meet real-time requirements and to face adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed, followed by an evaluation using a database of real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.

  20. Automated Detection of Actinic Keratoses in Clinical Photographs

    PubMed Central

    Hames, Samuel C.; Sinnya, Sudipta; Tan, Jean-Marie; Morze, Conrad; Sahebian, Azadeh; Soyer, H. Peter; Prow, Tarl W.

    2015-01-01

    Background Clinical diagnosis of actinic keratosis is known to have intra- and inter-observer variability, and there is currently no non-invasive and objective measure to diagnose these lesions. Objective The aim of this pilot study was to determine if automatically detecting and circumscribing actinic keratoses in clinical photographs is feasible. Methods Photographs of the face and dorsal forearms were acquired in 20 volunteers from two groups: the first with at least one actinic keratosis present on the face and each arm, the second with no actinic keratoses. The photographs were automatically analysed using colour space transforms and morphological features to detect erythema. The automated output was compared with a senior consultant dermatologist's assessment of the photographs, including the intra-observer variability. Performance was assessed by the correlation between the total lesions detected by the automated method and by the dermatologist, and by whether the individual lesions detected were in the same location as the dermatologist-identified lesions. Additionally, the ability to limit false positives was assessed by automatic assessment of the photographs from the no actinic keratosis group in comparison to the high actinic keratosis group. Results The correlation between the automatic and dermatologist counts was 0.62 on the face and 0.51 on the arms, compared to the dermatologist's intra-observer variation of 0.83 and 0.93 for the same. Sensitivity of automatic detection was 39.5% on the face and 53.1% on the arms. Positive predictive values were 13.9% on the face and 39.8% on the arms. Significantly more lesions (p<0.0001) were detected in the high actinic keratosis group compared to the no actinic keratosis group. Conclusions The proposed method was inferior to assessment by the dermatologist in terms of sensitivity and positive predictive value. However, this pilot study used only a single simple feature and was still able to achieve a detection sensitivity of 53.1% on the arms. This suggests that image analysis is a feasible avenue of investigation for overcoming variability in clinical assessment. Future studies should focus on more sophisticated features to improve sensitivity for actinic keratoses without erythema and limit false positives associated with the anatomical structures on the face. PMID:25615930
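
    The record above does not name the specific colour-space transform or morphological features used, so the sketch below stands in with a common choice: the CIELAB a* (red-green) channel thresholded for redness, a morphological opening to remove small specks, and connected-component counting to enumerate candidate lesions. The file name and the threshold are assumptions.

        import cv2

        img = cv2.imread("forearm.jpg")                  # placeholder clinical photograph
        if img is None:
            raise SystemExit("forearm.jpg not found; supply a real clinical photograph")

        lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
        a_channel = lab[:, :, 1]                         # higher values = redder skin

        _, redness = cv2.threshold(a_channel, 150, 255, cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        redness = cv2.morphologyEx(redness, cv2.MORPH_OPEN, kernel)

        n_labels, _ = cv2.connectedComponents(redness)
        print("candidate erythematous regions:", n_labels - 1)  # label 0 is background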

  1. Development of three-dimensional patient face model that enables real-time collision detection and cutting operation for a dental simulator.

    PubMed

    Yamaguchi, Satoshi; Yamada, Yuya; Yoshida, Yoshinori; Noborio, Hiroshi; Imazato, Satoshi

    2012-01-01

    The virtual reality (VR) simulator is a useful tool to develop dental hand skills. However, VR simulations that include patient reactions have limited computational time in which to reproduce a face model. Our aim was to develop a patient face model that enables real-time collision detection and cutting operations by using stereolithography (STL) and deterministic finite automaton (DFA) data files. We evaluated the dependence of the computational cost, constructed the patient face model using the optimum condition for combining STL and DFA data files, and assessed the computational costs for the do-nothing, collision, cutting, and combined collision-and-cutting operations. The face model was successfully constructed with low computational costs of 11.3, 18.3, 30.3, and 33.5 ms for do-nothing, collision, cutting, and collision and cutting, respectively. The patient face model could be useful for developing dental hand skills with VR.

  2. Increasing the power for detecting impairment in older adults with the Faces subtest from Wechsler Memory Scale-III: an empirical trial.

    PubMed

    Levy, Boaz

    2006-10-01

    Empirical studies have questioned the validity of the Faces subtest from the WMS-III for detecting impairment in visual memory, particularly among the elderly. A recent examination of the test norms revealed a significant age related floor effect already emerging on Faces I (immediate recall), implying excessive difficulty in the acquisition phase among unimpaired older adults. The current study compared the concurrent validity of the Faces subtest with an alternative measure between 16 Alzheimer's patients and 16 controls. The alternative measure was designed to facilitate acquisition by reducing the sequence of item presentation. Other changes aimed at increasing the retrieval challenge, decreasing error due to guessing and standardizing the administration. Analyses converged to indicate that the alternative measure provided a considerably greater differentiation than the Faces subtest between Alzheimer's patients and controls. Steps for revising the Faces subtest are discussed.

  3. Tracking the truth: the effect of face familiarity on eye fixations during deception.

    PubMed

    Millen, Ailsa E; Hope, Lorraine; Hillstrom, Anne P; Vrij, Aldert

    2017-05-01

    In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.

  4. Eye/head tracking technology to improve HCI with iPad applications.

    PubMed

    Lopez-Basterretxea, Asier; Mendez-Zorrilla, Amaia; Garcia-Zapirain, Begoña

    2015-01-22

    In order to improve human-computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad's front camera and eye/head tracking technology. With this capability operating in the background, the user can control existing or new iPad applications by moving their eyes and/or head. Many techniques are currently used to detect facial features such as the eyes or the face itself. Open-source libraries exist for this purpose, such as OpenCV, which make highly reliable and accurate detection algorithms, such as Haar cascades, available through very high-level programming. All processing is undertaken in real time, so it is important to pay close attention to the limited resources (processing capacity) of devices such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes, with and without glasses). These tests assessed user/device interaction and ascertained whether the system works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises considered. The results showed that the Haar cascade detected faces in 100% of cases, whereas eye and pupil detection was less effective because of interference from light and shade. In addition to ascertaining the effectiveness of the system via these exercises, the demo application also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging, and these systems may continue to be developed, extended, and updated in the future.
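
    The detection step described above can be reproduced, at least in spirit, with the Haar cascade classifiers bundled with OpenCV. The sketch below is a minimal desktop approximation (the camera index, window handling, and cascade parameters are assumptions), not the authors' iPad implementation.

    ```python
    import cv2

    # Load the Haar cascade classifiers shipped with OpenCV (paths come from cv2.data).
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)  # front camera; index 0 is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(80, 80))
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            # Look for eyes only in the upper half of the detected face region.
            roi = gray[y:y + h // 2, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 10):
                cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (255, 0, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```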

  5. Eye/Head Tracking Technology to Improve HCI with iPad Applications

    PubMed Central

    Lopez-Basterretxea, Asier; Mendez-Zorrilla, Amaia; Garcia-Zapirain, Begoña

    2015-01-01

    In order to improve human-computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad's front camera and eye/head tracking technology. With this capability operating in the background, the user can control existing or new iPad applications by moving their eyes and/or head. Many techniques are currently used to detect facial features such as the eyes or the face itself. Open-source libraries exist for this purpose, such as OpenCV, which make highly reliable and accurate detection algorithms, such as Haar cascades, available through very high-level programming. All processing is undertaken in real time, so it is important to pay close attention to the limited resources (processing capacity) of devices such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes, with and without glasses). These tests assessed user/device interaction and ascertained whether the system works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises considered. The results showed that the Haar cascade detected faces in 100% of cases, whereas eye and pupil detection was less effective because of interference from light and shade. In addition to ascertaining the effectiveness of the system via these exercises, the demo application also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging, and these systems may continue to be developed, extended, and updated in the future. PMID:25621603

  6. Enhanced attention amplifies face adaptation.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. High-Performance Visible-Blind UV Phototransistors Based on n-Type Naphthalene Diimide Nanomaterials.

    PubMed

    Song, Inho; Lee, Seung-Chul; Shang, Xiaobo; Ahn, Jaeyong; Jung, Hoon-Joo; Jeong, Chan-Uk; Kim, Sang-Wook; Yoon, Woojin; Yun, Hoseop; Kwon, O-Pil; Oh, Joon Hak

    2018-04-11

    This study investigates the performance of single-crystalline nanomaterials of wide-band gap naphthalene diimide (NDI) derivatives with methylene-bridged aromatic side chains. Such materials are found to be easily used as high-performance, visible-blind near-UV light detectors. NDI single-crystalline nanoribbons are assembled using a simple solution-based process (without solvent-inclusion problems), which is then applied to organic phototransistors (OPTs). Such OPTs exhibit excellent n-channel transistor characteristics, including an average electron mobility of 1.7 cm² V⁻¹ s⁻¹, sensitive UV detection properties with a detection limit of ∼1 μW cm⁻², millisecond-level responses, and detectivity as high as 10¹⁵ Jones, demonstrating the highly sensitive organic visible-blind UV detectors. The high performance of our OPTs originates from the large face-to-face π-π stacking area between the NDI semiconducting cores, which is facilitated by methylene-bridged aromatic side chains. Interestingly, NDI-based nanoribbon OPTs exhibit a distinct visible-blind near-UV detection with an identical detection limit, even under intense visible light illumination (for example, 10⁴ times higher intensity than the UV light intensity). Our findings demonstrate that wide-band gap NDI-based nanomaterials are highly promising for developing high-performance visible-blind UV photodetectors. Such photodetectors could potentially be used for various applications including environmental and health-monitoring systems.

  8. Face shape and face identity processing in behavioral variant fronto-temporal dementia: A specific deficit for familiarity and name recognition of famous faces.

    PubMed

    De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan

    2016-01-01

    Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for which, respectively, only the name or only the familiarity was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration.

  9. Sun sensing guidance system for high altitude aircraft

    NASA Technical Reports Server (NTRS)

    Reed, R. D. (Principal Investigator)

    1982-01-01

    A sun sensing guidance system for high altitude aircraft is described. The system is characterized by a disk shaped body mounted for rotation aboard the aircraft in exposed relation to solar radiation. The system also has a plurality of mutually isolated chambers; each chamber being characterized by an opening having a photosensor disposed therein and arranged in facing relation with the opening for receiving incident solar radiation and responsively providing a voltage output. Photosensors are connected in paired relation through a bridge circuit for providing heading error signals in response to detected imbalances in intensities of solar radiation.
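
    The paired-photosensor arrangement suggests a simple normalized-difference error signal. The sketch below is an illustrative digital analogue of the bridge-circuit output, with made-up voltages; it is not the flight hardware's analog circuit.

    ```python
    def heading_error(v_left: float, v_right: float) -> float:
        """Normalized imbalance between a pair of opposed photosensor voltages.
        Returns a signed value in [-1, 1]; zero means the disk faces the sun squarely."""
        total = v_left + v_right
        if total == 0.0:
            return 0.0  # no illumination, so no usable error signal
        return (v_left - v_right) / total

    # Example: the left-facing chamber sees more sunlight than the right-facing one.
    print(heading_error(2.4, 1.8))  # positive value -> steer to re-balance the pair
    ```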

  10. Optical Tracker For Longwall Coal Shearer

    NASA Technical Reports Server (NTRS)

    Poulsen, Peter D.; Stein, Richard J.; Pease, Robert E.

    1989-01-01

    Photographic record yields information for correction of vehicle path. Tracking system records lateral movements of longwall coal-shearing vehicle. System detects lateral and vertical deviations of path of vehicle moving along coal face, shearing coal as it goes. Rides on rails in mine tunnel, advancing on toothed track in one of rails. As vehicle moves, retroreflective mirror rides up and down on teeth, providing series of pulsed reflections to film recorder. Recorded positions of pulses, having horizontal and vertical orientations, indicate vertical and horizontal deviations, respectively, of vehicle.

  11. Peer review.

    PubMed

    Twaij, H; Oussedik, S; Hoffmeyer, P

    2014-04-01

    The maintenance of quality and integrity in clinical and basic science research depends upon peer review. This process has stood the test of time and has evolved to meet increasing workloads and to develop ways of detecting fraud in the scientific community. However, in the 21st century, the emphasis on evidence-based medicine and good science has placed pressure on the ways in which the peer review system is used by most journals. This paper reviews the peer review system and the problems it faces in the digital age, and proposes possible solutions.

  12. Determination of the Ecological and Geographic Distributions of Armillaria Species in Missouri Ozark Forest Ecosystems

    Treesearch

    Johann N. Bruhn; James J. Wetteroff; Jeanne D. Mihail; Susan Burks

    1997-01-01

    Armillaria root rot contributes to oak decline in the Ozarks. Three Armillaria species were detected in Ecological Landtypes (ELT's) representing south- to west-facing side slopes (ELT 17), north- to east-facing side slopes (ELT 18), and ridge tops (ELT 11). Armillaria mellea was detected in 91 percent...

  13. Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments

    ERIC Educational Resources Information Center

    Barker, Lynne A.; Andrade, Jackie

    2006-01-01

    In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…

  14. Challenges and opportunities in clinical translation of biomedical optical spectroscopy and imaging

    NASA Astrophysics Data System (ADS)

    Wilson, Brian C.; Jermyn, Michael; Leblond, Frederic

    2018-03-01

    Medical devices face many hurdles before they enter routine clinical practice to address unmet clinical needs. This is also the case for biomedical optical spectroscopy and imaging systems that are used here to illustrate the opportunities and challenges involved. Following initial concept, stages in clinical translation include instrument development, preclinical testing, clinical prototyping, clinical trials, prototype-to-product conversion, regulatory approval, commercialization, and finally clinical adoption and dissemination, all in the face of potentially competing technologies. Optical technologies face additional challenges from their being extremely diverse, often targeting entirely different diseases and having orders-of-magnitude differences in resolution and tissue penetration. However, these technologies can potentially address a wide variety of unmet clinical needs since they provide rich intrinsic biochemical and structural information, have high sensitivity and specificity for disease detection and localization, and are practical, safe (minimally invasive, nonionizing), and relatively affordable.

  15. Assessing facial attractiveness: individual decisions and evolutionary constraints

    PubMed Central

    Kocsor, Ferenc; Feldmann, Adam; Bereczkei, Tamas; Kállai, János

    2013-01-01

    Background Several studies showed that facial attractiveness, as a highly salient social cue, influences behavioral responses. It has also been found that attractive faces evoke distinctive neural activation compared to unattractive or neutral faces. Objectives Our aim was to design a face recognition task where individual preferences for facial cues are controlled for, and to create conditions that are more similar to natural circumstances in terms of decision making. Design In an event-related functional magnetic resonance imaging (fMRI) experiment, subjects were shown attractive and unattractive faces, categorized on the basis of their own individual ratings. Results Statistical analysis of all subjects showed elevated brain activation for attractive opposite-sex faces in contrast to less attractive ones in regions that previously have been reported to show enhanced activation with increasing attractiveness level (e.g. the medial and superior occipital gyri, fusiform gyrus, precentral gyrus, and anterior cingular cortex). Besides these, females showed additional brain activation in areas thought to be involved in basic emotions and desires (insula), detection of facial emotions (superior temporal gyrus), and memory retrieval (hippocampus). Conclusions From these data, we speculate that because of the risks involving mate choice faced by women during evolutionary times, selection might have preferred the development of an elaborated neural system in females to assess the attractiveness and social value of male faces. PMID:24693356

  16. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History.

    PubMed

    Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf

    2016-01-01

    This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.

  17. Detecting Negative Obstacles by Use of Radar

    NASA Technical Reports Server (NTRS)

    Mittskus, Anthony; Lux, James

    2006-01-01

    Robotic land vehicles would be equipped with small radar systems to detect negative obstacles, according to a proposal. The term "negative obstacles" denotes holes, ditches, and any other terrain features characterized by abrupt steep downslopes that could be hazardous for vehicles. Video cameras and other optically based obstacle-avoidance sensors now installed on some robotic vehicles cannot detect obstacles under adverse lighting conditions. Even under favorable lighting conditions, they cannot detect negative obstacles. A radar system according to the proposal would be of the frequency-modulation/ continuous-wave (FM/CW) type. It would be installed on a vehicle, facing forward, possibly with a downward slant of the main lobe(s) of the radar beam(s) (see figure). It would utilize one or more wavelength(s) of the order of centimeters. Because such wavelengths are comparable to the characteristic dimensions of terrain features associated with negative hazards, a significant amount of diffraction would occur at such features. In effect, the diffraction would afford a limited ability to see corners and to see around corners. Hence, the system might utilize diffraction to detect corners associated with negative obstacles. At the time of reporting the information for this article, preliminary analyses of diffraction at simple negative obstacles had been performed, but an explicit description of how the system would utilize diffraction was not available.
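
    For context on how an FM/CW radar turns echoes into distances, the sketch below applies the standard linear-sweep relation R = c·f_beat·T/(2·B). The sweep parameters are illustrative assumptions, not values from the proposal.

    ```python
    C = 3.0e8  # speed of light, m/s

    def fmcw_range(beat_hz: float, sweep_bandwidth_hz: float, sweep_time_s: float) -> float:
        """Target range for a linear-sweep FM/CW radar: R = c * f_beat * T / (2 * B)."""
        return C * beat_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

    # Illustrative numbers only: a 1 GHz sweep over 1 ms and a 20 kHz beat tone -> 3 m range.
    print(f"{fmcw_range(beat_hz=20e3, sweep_bandwidth_hz=1.0e9, sweep_time_s=1e-3):.2f} m")
    ```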

  18. Detection of foreign body using fast thermoacoustic tomography with a multielement linear transducer array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie Liming; Xing Da; Yang Diwu

    2007-04-23

    Current imaging modalities face challenges in clinical applications due to limitations in resolution or contrast. Microwave-induced thermoacoustic imaging may provide a complementary modality for medical imaging, particularly for detecting foreign objects due to their different absorption of electromagnetic radiation at specific frequencies. A thermoacoustic tomography system with a multielement linear transducer array was developed and used to detect foreign objects in tissue. Radiography and thermoacoustic images of objects with different electromagnetic properties, including glass, sand, and iron, were compared. The authors' results demonstrate that thermoacoustic imaging has the potential to become a fast method for surgical localization of occult foreign objects.

  19. Ground radar detection of meteoroids in space

    NASA Technical Reports Server (NTRS)

    Kessler, D. J.; Landry, P. M.; Gabbard, J. R.; Moran, J. L. T.

    1980-01-01

    A special test to lower the detection threshold for satellite fragments potentially dangerous to spacecraft was carried out by NORAD for NASA, using modified radar software. The Perimeter Acquisition Radar Attack Characterization System, a large planar-face phased-array radar, operates at a nominal 430 MHz and produces 120 pulses per second, 45 of which were dedicated to search. In a time period of 8.4 hours of observations over three days, over 6000 objects were detected and tracked, of which 37 were determined to have velocities greater than escape velocity. Six of these were larger objects with radar cross sections greater than 0.1 sq m and were probably orbiting satellites. A table gives the flux of both observed groups.

  20. Hemispheric metacontrol and cerebral dominance in healthy individuals investigated by means of chimeric faces.

    PubMed

    Urgesi, Cosimo; Bricolo, Emanuela; Aglioti, Salvatore M

    2005-08-01

    Cerebral dominance and hemispheric metacontrol were investigated by testing the ability of healthy participants to match chimeric, entire, or half faces presented tachistoscopically. The two hemi-faces compounding chimeric or entire stimuli were presented simultaneously or asynchronously at different exposure times. Participants did not consciously detect chimeric faces for simultaneous presentations lasting up to 40 ms. Interestingly, a 20 ms separation between each half-chimera was sufficient to induce detection of conflicts at a conscious level. Although the presence of chimeric faces was not consciously perceived, performance on chimeric faces was poorer than on entire- and half-faces stimuli, thus indicating an implicit processing of perceptual conflicts. Moreover, the precedence of hemispheric stimulation over-ruled the right hemisphere dominance for face processing, insofar as the hemisphere stimulated last appeared to influence the response. This dynamic reversal of cerebral dominance, however, was not caused by a shift in hemispheric specialization, since the level of performance always reflected the right hemisphere specialization for face recognition. Thus, the dissociation between hemispheric dominance and specialization found in the present study hints at the existence of hemispheric metacontrol in healthy individuals.

  1. A novel BCI based on ERP components sensitive to configural processing of human faces

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhao, Qibin; Jing, Jin; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., face inversion). To the best of our knowledge, the configural processing of human faces has not previously been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min⁻¹, obtained with inverted-face stimuli and only a single trial, suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
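
    As a rough illustration of the classification stage described above, the sketch below trains scikit-learn's shrinkage LDA directly on flattened ERP epoch vectors. The data are synthetic stand-ins and the epoch dimensions are assumptions, not the authors' recordings or eight-class setup.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Hypothetical ERP epochs: trials x (channels * time samples), e.g. 8 channels x 75 samples.
    rng = np.random.default_rng(0)
    n_trials, n_features = 400, 8 * 75
    X = rng.normal(size=(n_trials, n_features))   # stand-in for filtered, downsampled epochs
    y = rng.integers(0, 2, size=n_trials)         # 1 = attended (target) face stimulus, 0 = non-target

    # LDA on the raw epoch vectors, as in many ERP-based BCIs; no elaborate feature extraction.
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated target-detection accuracy: {scores.mean():.2f}")
    ```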

  2. A novel BCI based on ERP components sensitive to configural processing of human faces.

    PubMed

    Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., face inversion). To the best of our knowledge, the configural processing of human faces has not previously been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min⁻¹, obtained with inverted-face stimuli and only a single trial, suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.

  3. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  4. Knock detection system to improve petrol engine performance, using microphone sensor

    NASA Astrophysics Data System (ADS)

    Sujono, Agus; Santoso, Budi; Juwana, Wibawa Endra

    2017-01-01

    Increases in the power and efficiency of spark ignition (petrol) engines are always confronted by the problem of knock; indeed, the characteristics of the engine itself are largely determined by the onset of knock. To date, this knocking problem has not been solved completely. Knock is governed principally by engine speed, load (throttle opening), and spark advance (ignition timing). In this research, the engine is mounted on an engine test bed (ETB) equipped with the necessary sensors. Knock detection uses a new method based on pattern recognition: the knock sound is captured with a microphone sensor and an active filter, the normalized envelope function is fitted by regression, and the Euclidean distance to a reference pattern is computed to identify knock. The system is implemented on a microcontroller running a fuzzy logic ignition controller (FLIC), which sets the proper spark advance in accordance with operating conditions. This system can improve engine performance by approximately 15%.
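
    The detection pipeline outlined above (band-limited microphone signal, normalized envelope, Euclidean distance to a reference pattern) can be approximated as below. The band edges, template shape, and signal lengths are illustrative assumptions rather than the calibrated values used on the engine test bed.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def knock_score(audio: np.ndarray, fs: float, template: np.ndarray) -> float:
        """Distance between the normalized amplitude envelope of one cycle's audio and a knock template.
        Smaller distance -> more knock-like. Band edges and template are illustrative assumptions."""
        b, a = butter(4, [5e3 / (fs / 2), 15e3 / (fs / 2)], btype="band")  # hypothetical knock band
        band = filtfilt(b, a, audio)
        envelope = np.abs(hilbert(band))
        envelope = envelope / (envelope.max() + 1e-12)                     # normalize amplitude
        envelope = np.interp(np.linspace(0, 1, len(template)),
                             np.linspace(0, 1, len(envelope)), envelope)   # resample to template length
        return float(np.linalg.norm(envelope - template))

    fs = 48_000.0
    template = np.exp(-np.linspace(0, 6, 64))   # sharp attack, fast decay: a crude knock envelope
    cycle_audio = np.random.randn(2048)         # stand-in for one engine cycle of microphone data
    print(f"knock distance: {knock_score(cycle_audio, fs, template):.3f}")
    ```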

  5. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems still need to be addressed. In this paper, several techniques are summarized and analyzed, all relating to facial expression recognition and pose handling: pose-indexed multi-view face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning the input domain for classification, and robust statistical face frontalization.

  6. Neuromagnetic evidence that the right fusiform face area is essential for human face awareness: An intermittent binocular rivalry study.

    PubMed

    Kume, Yuko; Maekawa, Toshihiko; Urakawa, Tomokazu; Hironaga, Naruhito; Ogata, Katsuya; Shigyo, Maki; Tobimatsu, Shozo

    2016-08-01

    When and where the awareness of faces is consciously initiated is unclear. We used magnetoencephalography to probe the brain responses associated with face awareness under intermittent pseudo-rivalry (PR) and binocular rivalry (BR) conditions. The stimuli comprised three pictures: a human face, a monkey face and a house. In the PR condition, we detected the M130 component, which has been minimally characterized in previous research. We obtained a clear recording of the M170 component in the fusiform face area (FFA), and found that this component had an earlier response time to faces compared with other objects. The M170 occurred predominantly in the right hemisphere in both conditions. In the BR condition, the amplitude of the M130 significantly increased in the right hemisphere irrespective of the physical characteristics of the visual stimuli. Conversely, we did not detect the M170 when the face image was suppressed in the BR condition, although this component was clearly present when awareness for the face was initiated. We also found a significant difference in the latency of the M170 (human

  7. Reported maternal tendencies predict the reward value of infant facial cuteness, but not cuteness detection

    PubMed Central

    Hahn, Amanda C.; DeBruine, Lisa M.; Jones, Benedict C.

    2015-01-01

    The factors that contribute to individual differences in the reward value of cute infant facial characteristics are poorly understood. Here we show that the effect of cuteness on a behavioural measure of the reward value of infant faces is greater among women reporting strong maternal tendencies. By contrast, maternal tendencies did not predict women's subjective ratings of the cuteness of these infant faces. These results show, for the first time, that the reward value of infant facial cuteness is greater among women who report being more interested in interacting with infants, implicating maternal tendencies in individual differences in the reward value of infant cuteness. Moreover, our results indicate that the relationship between maternal tendencies and the reward value of infant facial cuteness is not due to individual differences in women's ability to detect infant cuteness. This latter result suggests that individual differences in the reward value of infant cuteness are not simply a by-product of low-cost, functionless biases in the visual system. PMID:25740842

  8. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    PubMed

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still quite a challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for the face alignment problem. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more general, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.
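
    The core idea of regressing shape increments from local appearance can be sketched with a plain cascaded ridge regressor. The feature extractor, number of stages, and data shapes below are simplifying assumptions and stand in for the paper's coupled overcomplete dictionaries.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    def extract_features(image: np.ndarray, shape: np.ndarray) -> np.ndarray:
        """Toy local appearance features: pixel intensities sampled at the current landmark estimates."""
        h, w = image.shape
        pts = np.clip(shape.reshape(-1, 2).astype(int), [0, 0], [w - 1, h - 1])
        return image[pts[:, 1], pts[:, 0]].astype(float)

    def fit_cascade(images, true_shapes, mean_shape, n_stages=3):
        """Cascaded regression of shape increments (a simplified stand-in for dictionary-based reconstruction)."""
        stages, shapes = [], [mean_shape.copy() for _ in images]
        for _ in range(n_stages):
            X = np.array([extract_features(img, s) for img, s in zip(images, shapes)])
            Y = np.array([t - s for t, s in zip(true_shapes, shapes)])   # shape increments to learn
            reg = Ridge(alpha=1.0).fit(X, Y)
            stages.append(reg)
            shapes = [s + reg.predict(extract_features(img, s)[None])[0]
                      for img, s in zip(images, shapes)]
        return stages

    # Usage: stages = fit_cascade(list_of_grayscale_images, list_of_true_shapes, mean_shape)
    ```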

  9. On the Comparison of Wearable Sensor Data Fusion to a Single Sensor Machine Learning Technique in Fall Detection.

    PubMed

    Tsinganos, Panagiotis; Skodras, Athanassios

    2018-02-14

    In the context of the ageing global population, researchers and scientists have tried to find solutions to many challenges faced by older people. Falls, the leading cause of injury among the elderly, are usually severe enough to require immediate medical attention; thus, their detection is of primary importance. To this end, many fall detection systems that utilize wearable and ambient sensors have been proposed. In this study, we compare three newly proposed data fusion schemes that have been applied in human activity recognition and fall detection. Furthermore, these algorithms are compared to our recent work on fall detection in which only one type of sensor is used. The results show that the fusion algorithms differ in their performance and that a machine learning strategy should be preferred. In conclusion, the methods presented and the comparison of their performance provide useful insights into the problem of fall detection.

  10. Less is more? Detecting lies in veiled witnesses.

    PubMed

    Leach, Amy-May; Ammar, Nawal; England, D Nicole; Remigio, Laura M; Kleinberg, Bennett; Verschuere, Bruno J

    2016-08-01

    Judges in the United States, the United Kingdom, and Canada have ruled that witnesses may not wear the niqab-a type of face veil-when testifying, in part because they believed that it was necessary to see a person's face to detect deception (Muhammad v. Enterprise Rent-A-Car, 2006; R. v. N. S., 2010; The Queen v. D(R), 2013). In two studies, we used conventional research methods and safeguards to empirically examine the assumption that niqabs interfere with lie detection. Female witnesses were randomly assigned to lie or tell the truth while remaining unveiled or while wearing a hijab (i.e., a head veil) or a niqab (i.e., a face veil). In Study 1, laypersons in Canada (N = 232) were more accurate at detecting deception in witnesses who wore niqabs or hijabs than in those who did not wear veils. Concealing portions of witnesses' faces led laypersons to change their decision-making strategies without eliciting negative biases. Lie detection results were partially replicated in Study 2, with laypersons in Canada, the United Kingdom, and the Netherlands (N = 291): observers' performance was better when witnesses wore either niqabs or hijabs than when witnesses did not wear veils. These findings suggest that, contrary to judicial opinion, niqabs do not interfere with-and may, in fact, improve-the ability to detect deception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Performances of Machine Learning Algorithms for Binary Classification of Network Anomaly Detection System

    NASA Astrophysics Data System (ADS)

    Nawir, Mukrimah; Amir, Amiza; Lynn, Ong Bi; Yaakob, Naimah; Badlishah Ahmad, R.

    2018-05-01

    The rapid growth of networked technologies exposes them to various network attacks, because data are frequently exchanged over the Internet and large-scale data need to be handled. Moreover, network anomaly detection using machine learning faces a difficulty with datasets: very few labelled network datasets are publicly available, which has led many researchers to keep using the most common dataset (KDDCup99), even though it is no longer well suited for evaluating machine learning (ML) classification algorithms. Several issues regarding the available labelled network datasets are discussed in this paper. The aim of this paper is to build a network anomaly detection system using machine learning algorithms that is efficient, effective and fast. The findings showed that the AODE algorithm performed well in terms of accuracy and processing time for binary classification on the UNSW-NB15 dataset.
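
    A binary traffic classifier of this kind can be prototyped in a few lines. The sketch below assumes the publicly released UNSW-NB15 training CSV with a binary label column, and uses Gaussian Naive Bayes purely as a stand-in, since AODE implementations are typically found in Weka rather than scikit-learn.

    ```python
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    # Assumed file name and column layout for the UNSW-NB15 training split.
    df = pd.read_csv("UNSW_NB15_training-set.csv")
    X = df.select_dtypes(include="number").drop(columns=["label"], errors="ignore")
    y = df["label"]  # 0 = normal traffic, 1 = attack

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)

    # AODE itself lives in Weka; Gaussian Naive Bayes is used here only as a simple stand-in classifier.
    clf = GaussianNB().fit(X_train, y_train)
    print(f"binary classification accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
    ```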

  12. Detecting "Infant-Directedness" in Face and Voice

    ERIC Educational Resources Information Center

    Kim, Hojin I.; Johnson, Scott P.

    2014-01-01

    Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants…

  13. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    NASA Astrophysics Data System (ADS)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "FaceBook." On the other hand, the "co-occurrence" relation, a relation among faces appearing in pictures, is one that is easily detectable using modern face detection techniques. These two relations, though belonging to different realms (social vs. visual sensory), are strongly correlated: faces that co-occur in photos often belong to individuals who are friends. We use real-world data gathered from "Facebook" as part of the "FaceBots" project, which built the world's first physical face-recognizing and conversing robot able to utilize and publish information on "Facebook." We present here methods as well as results for exploiting this correlation in both directions: algorithms that use knowledge of the social context for faster and better face recognition, and algorithms that estimate the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, a doubling of recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, as well as an empirical investigation leading to predictions of scalability to much bigger data sets are discussed.
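
    One direction of that correlation, using the friendship graph as a prior over face-recognition candidates, can be illustrated with a simple re-ranking rule. The boost factor and the toy names below are assumptions for illustration, not the project's actual probabilistic model.

    ```python
    def rerank_with_social_context(appearance_scores: dict, friends_of: dict,
                                   confirmed_in_photo: set, boost: float = 2.0) -> list:
        """Re-rank face-recognition candidates by boosting people who are friends of someone
        already confirmed in the same photo. 'boost' is an illustrative multiplicative prior."""
        posterior = {}
        for person, score in appearance_scores.items():
            prior = boost if any(person in friends_of.get(c, set()) for c in confirmed_in_photo) else 1.0
            posterior[person] = score * prior
        total = sum(posterior.values())
        return sorted(((p, s / total) for p, s in posterior.items()), key=lambda x: -x[1])

    # Toy example: the appearance model alone slightly prefers "carol", but "bob" is a friend of "alice".
    scores = {"bob": 0.30, "carol": 0.35, "dave": 0.35}
    friends = {"alice": {"bob"}}
    print(rerank_with_social_context(scores, friends, confirmed_in_photo={"alice"}))
    ```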

  14. Facelock: familiarity-based graphical authentication

    PubMed Central

    McLachlan, Jane L.; Renaud, Karen

    2014-01-01

    Authentication codes such as passwords and PIN numbers are widely used to control access to resources. One major drawback of these codes is that they are difficult to remember. Account holders are often faced with a choice between forgetting a code, which can be inconvenient, or writing it down, which compromises security. In two studies, we test a new knowledge-based authentication method that does not impose memory load on the user. Psychological research on face recognition has revealed an important distinction between familiar and unfamiliar face perception: When a face is familiar to the observer, it can be identified across a wide range of images. However, when the face is unfamiliar, generalisation across images is poor. This contrast can be used as the basis for a personalised ‘facelock’, in which authentication succeeds or fails based on image-invariant recognition of faces that are familiar to the account holder. In Study 1, account holders authenticated easily by detecting familiar targets among other faces (97.5% success rate), even after a one-year delay (86.1% success rate). Zero-acquaintance attackers were reduced to guessing (<1% success rate). Even personal attackers who knew the account holder well were rarely able to authenticate (6.6% success rate). In Study 2, we found that shoulder-surfing attacks by strangers could be defeated by presenting different photos of the same target faces in observed and attacked grids (1.9% success rate). Our findings suggest that the contrast between familiar and unfamiliar face recognition may be useful for developers of graphical authentication systems. PMID:25024913
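
    The reported attack rates can be put in perspective with a quick calculation of the zero-acquaintance guessing baseline. The grid layout assumed below (four grids of nine faces each) is hypothetical and chosen only to show why random guessing stays well under 1%.

    ```python
    def guessing_success_rate(n_grids: int, faces_per_grid: int) -> float:
        """Probability that an attacker guessing at random picks the familiar target in every grid."""
        return (1.0 / faces_per_grid) ** n_grids

    # Hypothetical challenge layout: four grids of nine faces each.
    print(f"{guessing_success_rate(4, 9):.4%}")  # about 0.015%, i.e. well under 1%
    ```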

  15. Whole-face procedures for recovering facial images from memory.

    PubMed

    Frowd, Charlie D; Skelton, Faye; Hepton, Gemma; Holden, Laura; Minahil, Simra; Pitchford, Melanie; McIntyre, Alex; Brown, Charity; Hancock, Peter J B

    2013-06-01

    Research has indicated that traditional methods for accessing facial memories usually yield unidentifiable images. Recent research, however, has made important improvements in this area to the witness interview, the method used for constructing the face, and the recognition of finished composites. Here, we investigated whether three of these improvements would produce even more recognisable images when used in conjunction with each other. The techniques are holistic in nature: they involve processes which operate on an entire face. Forty participants first inspected an unfamiliar target face. Nominally 24 h later, they were interviewed using a standard type of cognitive interview (CI) to recall the appearance of the target, or an enhanced 'holistic' interview where the CI was followed by procedures for focussing on the target's character. Participants then constructed a composite using EvoFIT, a recognition-type system that requires repeatedly selecting items from face arrays, with 'breeding', to 'evolve' a composite. They either saw faces in these arrays with blurred external features, or an enhanced method where these faces were presented with masked external features. Then, further participants attempted to name the composites, first by looking at the face front-on, the normal method, and then for a second time by looking at the face side-on, which research demonstrates facilitates recognition. All techniques improved correct naming on their own, but together promoted highly recognisable composites with mean naming at 74% correct. The implication is that these techniques, if used together by practitioners, should substantially increase the detection of suspects using this forensic method of person identification. Copyright © 2013 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Detecting Emotional Expression in Face-to-Face and Online Breast Cancer Support Groups

    ERIC Educational Resources Information Center

    Liess, Anna; Simon, Wendy; Yutsis, Maya; Owen, Jason E.; Piemme, Karen Altree; Golant, Mitch; Giese-Davis, Janine

    2008-01-01

    Accurately detecting emotional expression in women with primary breast cancer participating in support groups may be important for therapists and researchers. In 2 small studies (N = 20 and N = 16), the authors examined whether video coding, human text coding, and automated text analysis provided consistent estimates of the level of emotional…

  17. Rigid particulate matter sensor

    DOEpatents

    Hall, Matthew [Austin, TX

    2011-02-22

    A sensor to detect particulate matter. The sensor includes a first rigid tube, a second rigid tube, a detection surface electrode, and a bias surface electrode. The second rigid tube is mounted substantially parallel to the first rigid tube. The detection surface electrode is disposed on an outer surface of the first rigid tube. The detection surface electrode is disposed to face the second rigid tube. The bias surface electrode is disposed on an outer surface of the second rigid tube. The bias surface electrode is disposed to face the detection surface electrode on the first rigid tube. An air gap exists between the detection surface electrode and the bias surface electrode to allow particulate matter within an exhaust stream to flow between the detection and bias surface electrodes.

  18. Early detection of tooth wear by en-face optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Mărcăuteanu, Corina; Negrutiu, Meda; Sinescu, Cosmin; Demjan, Eniko; Hughes, Mike; Bradu, Adrian; Dobre, George; Podoleanu, Adrian G.

    2009-02-01

    Excessive dental wear (pathological attrition and/or abfractions) is a frequent complication in bruxing patients. The parafunction causes heavy occlusal loads. The aim of this study is the early detection and monitoring of occlusal overload in bruxing patients. En-face optical coherence tomography was used for investigating and imaging several extracted teeth with normal morphology, derived from patients with active bruxism and from subjects without parafunction. We found a characteristic pattern of enamel cracks in patients with first-degree bruxism and normal tooth morphology. We conclude that en-face optical coherence tomography is a promising non-invasive alternative technique for the early detection of occlusal overload, before it becomes clinically evident as tooth wear.

  19. 36 CFR 1234.32 - What does an agency have to do to certify a fire-safety detection and suppression system?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... storage equipment used, or how the space is designed, controlled, and operated) and the characteristics of... inches long and sealed in a plastic bag and that the fire is started in an aisle at the face of a carton at floor level. Assumptions must be noted in the report; (ii) Details the characteristics of the...

  20. Implicit Binding of Facial Features During Change Blindness

    PubMed Central

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K.; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165

  1. Implicit binding of facial features during change blindness.

    PubMed

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli.

  2. Method and system for sensing and identifying foreign particles in a gaseous environment

    NASA Technical Reports Server (NTRS)

    Choi, Sang H. (Inventor); Park, Yeonjoon (Inventor)

    2008-01-01

    An optical method and system sense and identify a foreign particle in a gaseous environment. A light source generates light. An electrically-conductive sheet has an array of holes formed through the sheet. Each hole has a diameter that is less than one quarter of the light's wavelength. The sheet is positioned relative to the light source such that the light is incident on one face of the sheet. An optical detector is positioned adjacent the sheet's opposing face and is spaced apart therefrom such that a gaseous environment is adapted to be disposed there between. Alterations in the light pattern detected by the optical detector indicate the presence of a foreign particle in the holes or on the sheet, while a laser induced fluorescence (LIF) signature associated with the foreign particle indicates the identity of the foreign particle.

  3. Intermediate view synthesis for eye-gazing

    NASA Astrophysics Data System (ADS)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. Among nonverbal cues, eye contact is one of the most important an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way of eye contact, and the lack of eye contact can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual viewpoint. We apply view morphing to the detected face and composite the morphed face into the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.
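
    A bare-bones version of the morphing step can be written with OpenCV: warp the face region from both cameras toward an intermediate set of control points and cross-dissolve. The function below is a hedged sketch that assumes facial landmarks are already available; it is not the paper's full synthesis pipeline.

    ```python
    import cv2
    import numpy as np

    def morph_views(top_view: np.ndarray, bottom_view: np.ndarray,
                    pts_top: np.ndarray, pts_bottom: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        """Crude view morph: warp both face images toward an intermediate set of control points
        (e.g. eye corners, nose tip, mouth corners) and cross-dissolve them. Landmarks are assumed given."""
        pts_mid = (1.0 - alpha) * pts_top + alpha * pts_bottom
        h_top, _ = cv2.findHomography(pts_top, pts_mid)
        h_bot, _ = cv2.findHomography(pts_bottom, pts_mid)
        size = (top_view.shape[1], top_view.shape[0])
        warped_top = cv2.warpPerspective(top_view, h_top, size)
        warped_bot = cv2.warpPerspective(bottom_view, h_bot, size)
        return cv2.addWeighted(warped_top, 1.0 - alpha, warped_bot, alpha, 0.0)
    ```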

  4. From tiger to panda: animal head detection.

    PubMed

    Zhang, Weiwei; Sun, Jian; Tang, Xiaoou

    2011-06-01

    Robust object detection has many important applications in real-world online photo processing. For example, both Google image search and MSN live image search have integrated a human face detector to retrieve face or portrait photos. Inspired by the success of this face filtering approach, in this paper we focus on another popular online photo category--animals, one of the top five categories in the MSN live image search query log. As a first attempt, we focus on the problem of animal head detection for a set of relatively large land animals that are popular on the internet, such as the cat, tiger, panda, fox, and cheetah. First, we propose a new set of gradient-oriented features, Haar of Oriented Gradients (HOOG), to effectively capture the shape and texture of an animal head. Then, we propose two detection algorithms, namely Bruteforce detection and Deformable detection, to exploit the shape and texture features simultaneously. Experimental results on 14,379 well-labeled animal images validate the superiority of the proposed approach. Additionally, we apply the animal head detector to improve image search results through text-based online photo search result filtering.
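
    The brute-force variant, scoring oriented-gradient features over a sliding window, can be sketched as below. Standard HOG from scikit-image and a linear SVM stand in for the paper's HOOG features and trained detector, and the patch sizes and random training data are placeholders.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def hog_features(patch: np.ndarray) -> np.ndarray:
        """Oriented-gradient descriptor for a grayscale patch (standard HOG as a stand-in for HOOG)."""
        return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    def detect(image: np.ndarray, clf: LinearSVC, win: int = 64, step: int = 16) -> list:
        """Brute-force sliding-window detection: score every window and keep the positive ones."""
        hits = []
        for y in range(0, image.shape[0] - win + 1, step):
            for x in range(0, image.shape[1] - win + 1, step):
                if clf.decision_function([hog_features(image[y:y + win, x:x + win])])[0] > 0:
                    hits.append((x, y, win, win))
        return hits

    # Training on labeled head / non-head patches (random stand-in data shown here).
    rng = np.random.default_rng(0)
    patches = rng.random((200, 64, 64))
    labels = rng.integers(0, 2, 200)
    clf = LinearSVC().fit([hog_features(p) for p in patches], labels)
    print(len(detect(rng.random((256, 256)), clf)))
    ```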

  5. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
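
    For readers who want a starting point, a minimal convolutional classifier for small grayscale face crops might look like the Keras sketch below. The architecture, input size, and three-way label set (e.g., smiling / talking / neutral) are assumptions for illustration, not the network described in the paper.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, models

    # A minimal convolutional classifier for 64x64 grayscale face crops; architecture is an assumption.
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(3, activation="softmax"),   # e.g. smiling, talking, neutral
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.summary()
    ```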

  6. Optical coherence tomography used for internal biometrics

    NASA Astrophysics Data System (ADS)

    Chang, Shoude; Sherif, Sherif; Mao, Youxin; Flueraru, Costel

    2007-06-01

    Traditional biometric technologies used for security and person identification essentially deal with fingerprints, hand geometry and face images. However, because all these technologies use external features of human body, they can be easily fooled and tampered with by distorting, modifying or counterfeiting these features. Nowadays, internal biometrics which detects the internal ID features of an object is becoming increasingly important. Being capable of exploring under-skin structure, optical coherence tomography (OCT) system can be used as a powerful tool for internal biometrics. We have applied fiber-optic and full-field OCT systems to detect the multiple-layer 2D images and 3D profile of the fingerprints, which eventually result in a higher discrimination than the traditional 2D recognition methods. More importantly, the OCT based fingerprint recognition has the ability to easily distinguish artificial fingerprint dummies by analyzing the extracted layered surfaces. Experiments show that our OCT systems successfully detected the dummy, which was made of plasticene and was used to bypass the commercially available fingerprint scanning system with a false accept rate (FAR) of 100%.

  7. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    PubMed Central

    Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho

    2013-01-01

    Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568

  8. A Multimodal Emotion Detection System during Human-Robot Interaction

    PubMed Central

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice analysis and facial expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written in the Chuck language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. The latter integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so that it can adapt its strategy to achieve greater user satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately. PMID:24240598

  9. Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream

    PubMed Central

    Egner, Tobias; Monti, Jim M.; Summerfield, Christopher

    2014-01-01

    Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999

  10. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
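
    As a loose digital analogue of the correlation step such architectures rely on, the sketch below cross-correlates one colour channel of a target face with a reference via FFTs and reads off a normalised correlation peak. The single-channel treatment and the peak metric are simplifying assumptions; the actual VLC uses an optical correlator with a segmented phase filter and fuses several RGB/HSV channels.

      # Simplified frequency-domain correlation on one colour channel (illustrative only).
      import numpy as np

      def correlation_peak(target, reference):
          """Normalised cross-correlation peak between two same-sized channel images."""
          t = (target - target.mean()) / (target.std() + 1e-9)
          r = (reference - reference.mean()) / (reference.std() + 1e-9)
          corr = np.fft.ifft2(np.fft.fft2(t) * np.conj(np.fft.fft2(r))).real
          return corr.max() / t.size

      rng = np.random.default_rng(1)
      ref_red = rng.random((128, 128))                    # reference face, red channel (placeholder)
      tgt_red = ref_red + 0.05 * rng.random((128, 128))   # slightly perturbed target
      print(correlation_peak(tgt_red, ref_red))           # near 1 for a matching channel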

  11. Validity, Sensitivity, and Responsiveness of the 11-Face Faces Pain Scale to Postoperative Pain in Adult Orthopedic Surgery Patients.

    PubMed

    Van Giang, Nguyen; Chiu, Hsiao-Yean; Thai, Duong Hong; Kuo, Shu-Yu; Tsai, Pei-Shan

    2015-10-01

    Pain is common in patients after orthopedic surgery. The 11-face Faces Pain Scale has not been validated for use in adult patients with postoperative pain. To assess the validity of the 11-face Faces Pain Scale and its ability to detect responses to pain medications, and to determine whether the sensitivity of the 11-face Faces Pain Scale for detecting changes in pain intensity over time is associated with gender differences in adult postorthopedic surgery patients. The 11-face Faces Pain Scale was translated into Vietnamese using forward and back translation. Postoperative pain was assessed using an 11-point numerical rating scale and the 11-face Faces Pain Scale on the day of surgery, and before (Time 1) and every 30 minutes after (Times 2-5) the patients had taken pain medications on the first postoperative day. The 11-face Faces Pain Scale highly correlated with the numerical rating scale (r = 0.78, p < .001). When the scores from each follow-up test (Times 2-5) were compared with those from the baseline test (Time 1), the effect sizes were -0.70, -1.05, -1.20, and -1.31, and the standardized response means were -1.17, -1.59, -1.66, and -1.82, respectively. The mean change in pain intensity, but not gender-time interaction effect, over the five time points was significant (F = 182.03, p < .001). Our results support that the 11-face Faces Pain Scale is appropriate for measuring acute postoperative pain in adults. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
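
    For readers unfamiliar with the responsiveness statistics quoted above, the sketch below shows one common way an effect size (mean change divided by the baseline standard deviation) and a standardized response mean (mean change divided by the standard deviation of change) are computed from paired scores. The numbers are invented placeholders, not study data.

      # Effect size and standardized response mean for paired pre/post pain ratings.
      import numpy as np

      baseline  = np.array([8, 7, 9, 6, 8, 7], dtype=float)   # hypothetical Time-1 scores
      follow_up = np.array([5, 5, 6, 4, 5, 4], dtype=float)   # hypothetical follow-up scores

      change = follow_up - baseline
      effect_size = change.mean() / baseline.std(ddof=1)       # mean change / baseline SD
      srm = change.mean() / change.std(ddof=1)                 # mean change / SD of change
      print(round(effect_size, 2), round(srm, 2))              # negative values indicate pain relief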

  12. A pilot trial of tele-ophthalmology for diagnosis of chronic blurred vision.

    PubMed

    Tan, Johnson Choon Hwai; Poh, Eugenie Wei Ting; Srinivasan, Sanjay; Lim, Tock Han

    2013-02-01

    We evaluated the accuracy of tele-ophthalmology in diagnosing the major causes of chronic blurring of vision. Thirty consecutive patients attending a primary eye-care facility in Singapore (the Ang Mo Kio Polyclinic, AMKP) with the symptom of chronic blurred vision were recruited. An ophthalmic technician was trained to perform Snellen acuity; auto-refraction; intraocular pressure measurement; red-colour perimetry; video recordings of extraocular movement, cover tests and pupillary reactions; and anterior segment and fundus photography. Digital information was transmitted to a tertiary hospital in Singapore (the Tan Tock Seng Hospital) via a tele-ophthalmology system for teleconsultation with an ophthalmologist. The diagnoses were compared with face-to-face consultation by another ophthalmologist at the AMKP. A user experience questionnaire was administered at the end of the consultation. Using face-to-face consultation as the gold standard, tele-ophthalmology achieved 100% sensitivity and specificity in diagnosing media opacity (n = 29), maculopathy (n = 23) and keratopathy (n = 30) of any type; and 100% sensitivity and 92% specificity in diagnosing optic neuropathy of any type (n = 24). The majority of the patients (97%) were satisfied with the tele-ophthalmology workflow and consultation. The tele-ophthalmology system was able to detect causes of chronic blurred vision accurately. It has the potential to deliver high-accuracy diagnostic eye support to remote areas if suitably trained ophthalmic technicians are available.

  13. Computer-aided diagnosis workstation and telemedicine network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2009-02-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. This time-consuming step makes the use of helical CT for mass screening impractical at present. Moreover, there is a shortage of doctors in Japan who can read these medical images. To overcome these problems, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner employed for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network based on a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used at the telemedicine site enables file encryption and secure login, so that patients' private information is protected. The screen of the Web medical image conference system can be shared by two or more web conference terminals at the same time, and opinions can be exchanged using a camera and a microphone connected to the workstation. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, built on the computer-aided diagnosis workstation and our telemedicine network, can increase diagnostic speed and diagnostic accuracy and improve the security of medical information.

  14. Monkeys and Humans Share a Common Computation for Face/Voice Integration

    PubMed Central

    Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.

    2011-01-01

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
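
    One conventional way to test a race model against detection data is the race-model inequality on response-time distributions, which bounds the bimodal cumulative distribution by the sum of the unimodal ones. The sketch below illustrates that generic test on invented response times; it is an illustration of the modelling logic, not the authors' exact analysis.

      # Generic race-model inequality check on empirical RT distributions.
      import numpy as np

      def ecdf(rts, grid):
          rts = np.sort(np.asarray(rts, dtype=float))
          return np.searchsorted(rts, grid, side="right") / rts.size

      rng = np.random.default_rng(2)
      rt_a  = rng.normal(420, 60, 200)    # hypothetical auditory-only RTs (ms)
      rt_v  = rng.normal(450, 60, 200)    # hypothetical visual-only RTs (ms)
      rt_av = rng.normal(360, 50, 200)    # hypothetical audiovisual RTs (ms)

      grid = np.linspace(250, 600, 50)
      violated = ecdf(rt_av, grid) > ecdf(rt_a, grid) + ecdf(rt_v, grid)
      print("race-model inequality violated at", violated.sum(), "of", grid.size, "grid points")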

  15. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  16. Detecting and Categorizing Fleeting Emotions in Faces

    PubMed Central

    Sweeny, Timothy D.; Suzuki, Satoru; Grabowecky, Marcia; Paller, Ken A.

    2013-01-01

    Expressions of emotion are often brief, providing only fleeting images from which to base important social judgments. We sought to characterize the sensitivity and mechanisms of emotion detection and expression categorization when exposure to faces is very brief, and to determine whether these processes dissociate. Observers viewed 2 backward-masked facial expressions in quick succession, 1 neutral and the other emotional (happy, fearful, or angry), in a 2-interval forced-choice task. On each trial, observers attempted to detect the emotional expression (emotion detection) and to classify the expression (expression categorization). Above-chance emotion detection was possible with extremely brief exposures of 10 ms and was most accurate for happy expressions. We compared categorization among expressions using a d′ analysis, and found that categorization was usually above chance for angry versus happy and fearful versus happy, but consistently poor for fearful versus angry expressions. Fearful versus angry categorization was poor even when only negative emotions (fearful, angry, or disgusted) were used, suggesting that this categorization is poor independent of decision context. Inverting faces impaired angry versus happy categorization, but not emotion detection, suggesting that information from facial features is used differently for emotion detection and expression categorizations. Emotion detection often occurred without expression categorization, and expression categorization sometimes occurred without emotion detection. These results are consistent with the notion that emotion detection and expression categorization involve separate mechanisms. PMID:22866885
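
    For reference, the d′ values behind such analyses follow the standard signal-detection formula, the z-transformed hit rate minus the z-transformed false-alarm rate. A brief sketch with invented trial counts and a common correction for extreme rates:

      # d' = z(hit rate) - z(false-alarm rate), with a log-linear correction so that
      # rates of exactly 0 or 1 do not produce infinite z-scores.
      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          hit_rate = (hits + 0.5) / (hits + misses + 1)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # e.g. an "angry vs happy" categorization block with invented counts
      print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))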

  17. Retrospective Evaluation of a Teleretinal Screening Program in Detecting Multiple Nondiabetic Eye Diseases.

    PubMed

    Maa, April Y; Patel, Shivangi; Chasan, Joel E; Delaune, William; Lynch, Mary G

    2017-01-01

    Diabetic teleretinal screening programs have been utilized successfully across the world to detect diabetic retinopathy (DR) and are well validated. Less information, however, exists on the ability of teleretinal imaging to detect nondiabetic ocular pathology. This study performed a retrospective evaluation to assess the ability of a community-based diabetic teleretinal screening program to detect common ocular disease other than DR. A retrospective chart review of 1,774 patients who underwent diabetic teleretinal screening was performed. Eye clinic notes from the Veterans Health Administration's electronic medical record, Computerized Patient Record System, were searched for each of the patients screened through teleretinal imaging. When a face-to-face examination note was present, the physical findings were compared to those obtained through teleretinal imaging. Sensitivity, specificity, and positive and negative predictive values were calculated for suspicious nerve, cataract, and age-related macular degeneration. A total of 903 patients underwent a clinical examination. The positive predictive value was highest for cataract (100%), suspicious nerve (93%), and macular degeneration (90%). The negative predictive value and the percent agreement between teleretinal imaging and a clinical examination were over 90% for each disease category. A teleretinal imaging protocol may be used to screen for other common ocular diseases. It may be feasible to use diabetic teleretinal photographs to screen patients for other potential eye diseases. Additional elements of the eye workup may be added to enhance accuracy of disease detection. Further study is necessary to confirm this initial retrospective review.
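
    The screening statistics reported here follow directly from a 2×2 comparison against the face-to-face examination used as the reference standard. A minimal sketch with invented counts:

      # Sensitivity, specificity, PPV and NPV from a 2x2 screening table.
      def screening_metrics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # invented counts for one disease category (e.g. cataract)
      print(screening_metrics(tp=120, fp=0, fn=9, tn=774))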

  18. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues, extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscle movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
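
    A toy version of the rule-based idea is to threshold a geometric feature computed from the tracked shape vertices. The landmark indices, the brow-lowering rule, and the threshold below are hypothetical stand-ins, not the paper's FACS-based rules.

      # Hypothetical rule for a brow-lowering cue (AU4-like): fire when the normalised
      # brow-to-eye distance drops below a fraction of its value on a neutral frame.
      import numpy as np

      def brow_eye_distance(shape):
          """shape: (n_points, 2) array of tracked shape vertices; indices are placeholders."""
          brow = shape[[17, 19, 21, 22, 24, 26]].mean(axis=0)   # assumed brow points
          eye = shape[[37, 38, 43, 44]].mean(axis=0)            # assumed upper-eyelid points
          inter_ocular = np.linalg.norm(shape[36] - shape[45])  # scale normalisation
          return np.linalg.norm(brow - eye) / inter_ocular

      def au4_active(frame_shape, neutral_shape, threshold=0.85):
          return brow_eye_distance(frame_shape) < threshold * brow_eye_distance(neutral_shape)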

  19. A novel design of a scanning probe microscope integrated with an ultramicrotome for serial block-face nanotomography

    NASA Astrophysics Data System (ADS)

    Efimov, Anton E.; Agapov, Igor I.; Agapova, Olga I.; Oleinikov, Vladimir A.; Mezin, Alexey V.; Molinari, Michael; Nabiev, Igor; Mochalov, Konstantin E.

    2017-02-01

    We present a new concept of a combined scanning probe microscope (SPM)/ultramicrotome apparatus. It enables "slice-and-view" scanning probe nanotomography measurements and 3D reconstruction of the bulk sample nanostructure from series of SPM images after consecutive ultrathin sections. The sample is fixed on a flat XYZ scanning piezostage mounted on the ultramicrotome arm. The SPM measuring head, with a cantilever tip and a laser-photodiode tip detection system, approaches the sample for SPM measurements of the block-face surface immediately after the ultramicrotome sectioning is performed. The SPM head is moved along guides that are also fixed on the ultramicrotome arm, so that unwanted relative displacements between the tip, the sample, and the ultramicrotome knife are minimized. The design of the SPM head provides open frontal optical access to the sample block-face, adapted for high-resolution optical lenses for correlative SPM/optical microscopy applications. The new system can be used in a wide range of applications for the study of 3D nanostructures of biological objects, biomaterials, polymer nanocomposites, and nanohybrid materials in various SPM and optical microscopy measuring modes.

  20. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    NASA Astrophysics Data System (ADS)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention, although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that together yield results superior to using either approach individually. The current optical recognition algorithm achieved 68% accuracy in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. The accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.
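
    A rough sketch of one way a generative and a discriminative stage can be combined: per-class hidden Markov model log-likelihoods, computed over tracked-feature sequences, are fed to a support vector machine as features. The library choice (hmmlearn, scikit-learn), the synthetic sequences, and the fusion scheme are assumptions for illustration; the paper does not specify this implementation.

      # Hypothetical HMM + SVM fusion: per-class HMM log-likelihoods become SVM features.
      import numpy as np
      from hmmlearn.hmm import GaussianHMM
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)

      def make_sequences(offset, n=40, length=30, dim=6):
          """Synthetic stand-ins for per-frame face-tracking parameter sequences."""
          return [offset + rng.normal(size=(length, dim)) for _ in range(n)]

      low_stress, high_stress = make_sequences(0.0), make_sequences(0.7)
      labels = np.array([0] * len(low_stress) + [1] * len(high_stress))

      hmms = {}                                   # one generative model per class
      for label, seqs in ((0, low_stress), (1, high_stress)):
          X, lengths = np.vstack(seqs), [len(s) for s in seqs]
          hmms[label] = GaussianHMM(n_components=3, n_iter=20).fit(X, lengths)

      def loglik_features(seq):
          return [hmms[c].score(seq) for c in (0, 1)]

      features = np.array([loglik_features(s) for s in low_stress + high_stress])
      clf = SVC(kernel="rbf").fit(features, labels)        # discriminative stage
      print(clf.score(features, labels))                   # training accuracy of the sketch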

  1. Detecting frontal ablation processes from direct observations of submarine terminus morphology

    NASA Astrophysics Data System (ADS)

    Fried, M.; Carroll, D.; Catania, G. A.; Sutherland, D. A.; Stearns, L. A.; Bartholomaus, T. C.; Shroyer, E.; Nash, J. D.

    2017-12-01

    Tidewater glacier termini couple glacier and ocean systems. Subglacial discharge emerging from the terminus produces buoyant plumes that modulate submarine melting, calving, fjord circulation and, in turn, changes in ice dynamics from back-stress perturbations. However, the absence of critical observational data at the ice-ocean interface limits plume and, by extension, melt models from incorporating realistic submarine terminus face morphologies and assessing their impact on terminus behavior at tidewater glaciers. Here we present a comprehensive inventory and characterization of submarine terminus face shapes from a side-looking, multibeam echo sounding campaign across Kangerdlugssuaq Sermerssua glacier, central-west Greenland. We combine these observations with in-situ measurements of ocean stratification and remotely sensed subglacial discharge, terminus positions, ice velocity, and ice surface datasets to infer the spectrum of processes sculpting the submarine terminus face. Subglacial discharge outlet locations are confirmed through observations of sediment plumes, localized melt-driven undercutting of the terminus face, and bathymetry of the adjacent seafloor. From our analysis, we differentiate terminus morphologies resulting from submarine melt and calving and assess the contribution of each process to the net frontal ablation budget. Finally, we constrain a plume model using direct observations of the submarine terminus face and conduit geometry. Plume model simulations demonstrate that the majority of discharge outlets are fed by small discharge fluxes, suggestive of a distributed subglacial hydrologic system. Outlets with the largest, concentrated discharge fluxes are morphologically unique and strongly control seasonal terminus position. At these locations, we show that the spatiotemporal pattern of terminus retreat is well correlated with time periods when local melt rate exceeds ice velocity.

  2. Adaptation to Emotional Conflict: Evidence from a Novel Face Emotion Paradigm

    PubMed Central

    Clayson, Peter E.; Larson, Michael J.

    2013-01-01

    The preponderance of research on trial-by-trial recruitment of affective control (e.g., conflict adaptation) relies on stimuli wherein lexical word information conflicts with facial affective stimulus properties (e.g., the face-Stroop paradigm where an emotional word is overlaid on a facial expression). Several studies, however, indicate different neural time course and properties for processing of affective lexical stimuli versus affective facial stimuli. The current investigation used a novel task to examine control processes implemented following conflicting emotional stimuli with conflict-inducing affective face stimuli in the absence of affective words. Forty-one individuals completed a task wherein the affective-valence of the eyes and mouth were either congruent (happy eyes, happy mouth) or incongruent (happy eyes, angry mouth) while high-density event-related potentials (ERPs) were recorded. There was a significant congruency effect and significant conflict adaptation effects for error rates. Although response times (RTs) showed a significant congruency effect, the effect of previous-trial congruency on current-trial RTs was only present for current congruent trials. Temporospatial principal components analysis showed a P3-like ERP source localized using FieldTrip software to the medial cingulate gyrus that was smaller on incongruent than congruent trials and was significantly influenced by the recruitment of control processes following previous-trial emotional conflict (i.e., there was significant conflict adaptation in the ERPs). Results show that a face-only paradigm may be sufficient to elicit emotional conflict and suggest a system for rapidly detecting conflicting emotional stimuli and subsequently adjusting control resources, similar to cognitive conflict detection processes, when using conflicting facial expressions without words. PMID:24073278

  3. Adaptation to emotional conflict: evidence from a novel face emotion paradigm.

    PubMed

    Clayson, Peter E; Larson, Michael J

    2013-01-01

    The preponderance of research on trial-by-trial recruitment of affective control (e.g., conflict adaptation) relies on stimuli wherein lexical word information conflicts with facial affective stimulus properties (e.g., the face-Stroop paradigm where an emotional word is overlaid on a facial expression). Several studies, however, indicate different neural time course and properties for processing of affective lexical stimuli versus affective facial stimuli. The current investigation used a novel task to examine control processes implemented following conflicting emotional stimuli with conflict-inducing affective face stimuli in the absence of affective words. Forty-one individuals completed a task wherein the affective-valence of the eyes and mouth were either congruent (happy eyes, happy mouth) or incongruent (happy eyes, angry mouth) while high-density event-related potentials (ERPs) were recorded. There was a significant congruency effect and significant conflict adaptation effects for error rates. Although response times (RTs) showed a significant congruency effect, the effect of previous-trial congruency on current-trial RTs was only present for current congruent trials. Temporospatial principal components analysis showed a P3-like ERP source localized using FieldTrip software to the medial cingulate gyrus that was smaller on incongruent than congruent trials and was significantly influenced by the recruitment of control processes following previous-trial emotional conflict (i.e., there was significant conflict adaptation in the ERPs). Results show that a face-only paradigm may be sufficient to elicit emotional conflict and suggest a system for rapidly detecting conflicting emotional stimuli and subsequently adjusting control resources, similar to cognitive conflict detection processes, when using conflicting facial expressions without words.

  4. The role of the amygdala and the basal ganglia in visual processing of central vs. peripheral emotional content.

    PubMed

    Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel

    2013-09-01

    In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using threatening vs. non-threatening animal face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces, depending on the task performed: the right amygdala showed increased responses to central compared to left presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task, but only when they were centrally presented. Moreover, we found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces. Accordingly, peripheral processing of these stimuli more strongly activated the putaminal region, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putamen- or caudate-based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.

  5. Detecting 'infant-directedness' in face and voice.

    PubMed

    Kim, Hojin I; Johnson, Scott P

    2014-07-01

    Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants showed consistently greater dwell time on ID faces vs. AD faces, and this ID face preference was consistent across all three sound conditions. ID speech resulted in higher looking overall, but it did not increase looking at the ID face per se. Together, these findings demonstrate that infants' preferences for ID speech extend to ID faces. © 2014 John Wiley & Sons Ltd.

  6. Familiarity Enhances Visual Working Memory for Faces

    ERIC Educational Resources Information Center

    Jackson, Margaret C.; Raymond, Jane E.

    2008-01-01

    Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or…

  7. Event-Related Brain Potential Correlates of Emotional Face Processing

    ERIC Educational Resources Information Center

    Eimer, Martin; Holmes, Amanda

    2007-01-01

    Results from recent event-related brain potential (ERP) studies investigating brain processes involved in the detection and analysis of emotional facial expression are reviewed. In all experiments, emotional faces were found to trigger an increased ERP positivity relative to neutral faces. The onset of this emotional expression effect was…

  8. Fisheye-Based Method for GPS Localization Improvement in Unknown Semi-Obstructed Areas

    PubMed Central

    Moreau, Julien; Ambellouis, Sébastien; Ruichek, Yassine

    2017-01-01

    A precise GNSS (Global Navigation Satellite System) localization is vital for autonomous road vehicles, especially in cluttered or urban environments where satellites are occluded, preventing accurate positioning. We propose to fuse GPS (Global Positioning System) data with fisheye stereovision to address this problem independently of additional data, which may be outdated, unavailable, or in need of correlation with reality. Our stereoscope is sky-facing, with 360° × 180° fisheye cameras to observe surrounding obstacles. We propose 3D modelling and plane extraction through the following steps: stereoscope self-calibration for robustness to decalibration, stereo matching that considers neighbouring epipolar curves to compute 3D points, and robust plane fitting based on the generated cartography and a Hough transform. We use these 3D data together with raw GPS data to estimate the pseudorange delay of NLOS (Non-Line-Of-Sight) reflected signals. We exploit the extracted planes to build a visibility mask for NLOS detection. A simplified 3D canyon model then allows the reflection pseudorange delays to be computed. Finally, the GPS position is computed from the corrected pseudoranges. With experiments on real static scenes, we show that the generated 3D models reach metric accuracy and that horizontal GPS positioning accuracy is improved by more than 50%. The proposed procedure is effective, and the proposed NLOS detection outperforms CN0-based methods (Carrier-to-receiver Noise density). PMID:28106746
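
    The NLOS classification step can be pictured as an elevation-mask test per azimuth: a satellite whose elevation falls below the obstacle elevation derived from the 3D model for that azimuth is flagged as NLOS. The mask resolution and the values below are illustrative assumptions, not the paper's data.

      # Toy visibility-mask test for NLOS satellite detection. The mask stores, for each
      # 1-degree azimuth bin, the elevation of the highest obstacle seen in the 3D model.
      import numpy as np

      mask_elevation = np.full(360, 15.0)     # placeholder obstacle elevations (degrees)
      mask_elevation[80:140] = 55.0           # e.g. a tall building block to the east

      def is_nlos(azimuth_deg, elevation_deg):
          return elevation_deg < mask_elevation[int(azimuth_deg) % 360]

      satellites = {"G05": (110.0, 40.0), "G12": (250.0, 35.0)}   # invented az/el pairs
      for prn, (az, el) in satellites.items():
          print(prn, "NLOS" if is_nlos(az, el) else "LOS")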

  9. Exploring the Role of Spatial Frequency Information during Neural Emotion Processing in Human Infants.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-01-01

    Enhanced attention to fear expressions in adults is primarily driven by information from low as opposed to high spatial frequencies contained in faces. However, little is known about the role of spatial frequency information in emotion processing during infancy. In the present study, we examined the role of low compared to high spatial frequencies in the processing of happy and fearful facial expressions by using filtered face stimuli and measuring event-related brain potentials (ERPs) in 7-month-old infants ( N = 26). Our results revealed that infants' brains discriminated between emotional facial expressions containing high but not between expressions containing low spatial frequencies. Specifically, happy faces containing high spatial frequencies elicited a smaller Nc amplitude than fearful faces containing high spatial frequencies and happy and fearful faces containing low spatial frequencies. Our results demonstrate that already in infancy spatial frequency content influences the processing of facial emotions. Furthermore, we observed that fearful facial expressions elicited a comparable Nc response for high and low spatial frequencies, suggesting a robust detection of fearful faces irrespective of spatial frequency content, whereas the detection of happy facial expressions was contingent upon frequency content. In summary, these data provide new insights into the neural processing of facial emotions in early development by highlighting the differential role played by spatial frequencies in the detection of fear and happiness.
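
    The low versus high spatial-frequency manipulation used in such studies amounts to low-pass and high-pass filtering of the face images. A minimal sketch with a Gaussian filter; the cutoff values are arbitrary placeholders, not the study's filter settings.

      # Low-pass and high-pass versions of a face image via Gaussian filtering.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(4)
      face = rng.random((256, 256))                     # stand-in for a grayscale face image

      low_sf = gaussian_filter(face, sigma=8)           # keeps only low spatial frequencies
      high_sf = face - gaussian_filter(face, sigma=2)   # keeps only high spatial frequencies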

  10. Development and test of photon-counting microchannel plate detector arrays for use on space telescopes

    NASA Technical Reports Server (NTRS)

    Timothy, J. G.

    1976-01-01

    The full sensitivity, dynamic range, and photometric stability of microchannel array plates (MCPs) are incorporated into a photon-counting detection system for space operations. Components of the system include feedback-free MCPs for high gain and a saturated output pulse-height distribution with a stable response; multi-anode readout arrays mounted in proximity focus with the output face of the MCP; and multi-layer ceramic headers to provide the electrical interface between the anode array in a sealed detector tube and the associated electronics.

  11. HST images of the eclipsing pulsar B1957+20

    NASA Technical Reports Server (NTRS)

    Fruchter, Andrew S.; Bookbinder, Jay; Bailyn, Charles D.

    1995-01-01

    We have obtained images of the eclipsing pulsar binary PSR B1957+20 using the Planetary Camera of the Hubble Space Telescope (HST). The high spatial resolution of this instrument has allowed us to separate the pulsar system from a nearby background star which has confounded ground-based observations of this system near optical minimum. Our images limit the temperature of the backside of the companion to T less than or approximately = 2800 K, about a factor of 2 less than the average temperature of the side of the companion facing the pulsar, and provide a marginal detection of the companion at optical minimum. The magnitude of this detection is consistent with previous work which suggests that the companion nearly fills its Roche lobe and is supported through tidal dissipation.

  12. CMOS-MEMS Chemiresistive and Chemicapacitive Chemical Sensor System

    NASA Astrophysics Data System (ADS)

    Lazarus, Nathan S.

    Integrating chemical sensors with testing electronics is a powerful technique with the potential to lower power and cost and allow for lower system limits of detection. This thesis explores the possibility of creating an integrated sensor system intended to be embedded within respirator cartridges to notify the user that hazardous chemicals will soon leak into the face mask. For a chemical sensor designer, this application is particularly challenging due to the need for a very sensitive and cheap sensor that will be exposed to widely varying environmental conditions during use. An octanethiol-coated gold nanoparticle chemiresistor to detect industrial solvents is developed, focusing on characterizing the environmental stability and limits of detection of the sensor. Since the chemiresistor was found to be highly sensitive to water vapor, a series of highly sensitive humidity sensor topologies were developed, with sensitivities several times those achieved by previous integrated capacitive humidity sensors. Circuit techniques were then explored to reduce the humidity sensor limits of detection, including the analysis of noise, charge injection, jitter and clock feedthrough in a charge-based capacitance measurement (CBCM) circuit and the design of a low-noise Colpitts LC oscillator. The characterization of high-resistance gold nanoclusters for capacitive chemical sensing was also performed. In the final section, a preconcentrator, a heater element intended to release a brief concentrated pulse of analyte, was developed and tested for the purpose of lowering the system limit of detection.

  13. A Comparison Between Optical Coherence Tomography Angiography and Fluorescein Angiography for the Imaging of Type 1 Neovascularization.

    PubMed

    Inoue, Maiko; Jung, Jesse J; Balaratnasingam, Chandrakumar; Dansingani, Kunal K; Dhrami-Gavazi, Elona; Suzuki, Mihoko; de Carlo, Talisa E; Shahlaee, Abtin; Klufas, Michael A; El Maftouhi, Adil; Duker, Jay S; Ho, Allen C; Maftouhi, Maddalena Quaranta-El; Sarraf, David; Freund, K Bailey

    2016-07-01

    To determine the sensitivity of the combination of optical coherence tomography angiography (OCTA) and structural optical coherence tomography (OCT) for detecting type 1 neovascularization (NV) and to determine significant factors that preclude visualization of type 1 NV using OCTA. Multicenter, retrospective cohort study of 115 eyes from 100 patients with type 1 NV. A retrospective review of fluorescein (FA), OCT, and OCTA imaging was performed on a consecutive series of eyes with type 1 NV from five institutions. Unmasked graders utilized FA and structural OCT data to determine the diagnosis of type 1 NV. Masked graders evaluated FA data alone, en face OCTA data alone and combined en face OCTA and structural OCT data to determine the presence of type 1 NV. Sensitivity analyses were performed using combined FA and OCT data as the reference standard. A total of 105 eyes were diagnosed with type 1 NV using the reference. Of these, 90 (85.7%) could be detected using en face OCTA and structural OCT. The sensitivities of FA data alone and en face OCTA data alone for visualizing type 1 NV were the same (66.7%). Significant factors that precluded visualization of NV using en face OCTA included the height of pigment epithelial detachment, low signal strength, and treatment-naïve disease (P < 0.05, respectively). En face OCTA and structural OCT showed better detection of type 1 NV than either FA alone or en face OCTA alone. Combining en face OCTA and structural OCT information may therefore be a useful way to noninvasively diagnose and monitor the treatment of type 1 NV.

  14. Fabrication of optical fiber sensor based on double-layer SU-8 diaphragm and the partial discharge detection

    NASA Astrophysics Data System (ADS)

    Shang, Ya-na; Ni, Qing-yan; Ding, Ding; Chen, Na; Wang, Ting-yun

    2015-01-01

    In this paper, a partial discharge detection system is proposed using an optical fiber Fabry-Perot (FP) interferometric sensor, which is fabricated by photolithography. SU-8 photoresist is employed due to its low Young's modulus and potentially high sensitivity for ultrasound detection. The FP cavity is formed by coating the fiber end face with two layers of SU-8 so that the cavity can be controlled by the thickness of the middle layer of SU-8. Static pressure measurement experiments are done to estimate the sensing performance. The results show that the SU-8 based sensor has a sensitivity of 154.8 nm/kPa, which is much higher than that of silica based sensor under the same condition. Moreover, the sensor is demonstrated successfully to detect ultrasound from electrode discharge.
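
    Given the reported sensitivity, converting a measured spectral shift into a pressure reading is a one-line calculation if the response is treated as linear over the calibrated range, as sketched below with an invented measurement.

      # Convert a measured wavelength shift into pressure using the reported sensitivity.
      SENSITIVITY_NM_PER_KPA = 154.8        # from the static-pressure calibration

      def pressure_kpa(wavelength_shift_nm):
          return wavelength_shift_nm / SENSITIVITY_NM_PER_KPA

      print(round(pressure_kpa(77.4), 3))   # a 77.4 nm shift corresponds to 0.5 kPa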

  15. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  16. Driver Distraction Using Visual-Based Sensors and Algorithms.

    PubMed

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-10-28

    Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.

  17. Driver Distraction Using Visual-Based Sensors and Algorithms

    PubMed Central

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-01-01

    Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed. PMID:27801822

  18. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems were usually solved via removing the occlusion part of both query samples and training samples to perform the recognition process. This practice ignores the global feature of facial image and may lead to unsatisfactory results due to the limitation of local features. Considering the aforementioned drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image processing and intersection-based clustering combination method is used for occlusion FR; (2) according to an accurate occlusion map, the new integrated facial images are recovered iteratively and put into a recognition process; and (3) the effectiveness on recognition accuracy of our method is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has a highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.
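
    As a caricature of the detect-and-recover idea (not the paper's clustering-based detection), the sketch below flags pixels that deviate strongly from a mean face and iteratively replaces them before recognition would be attempted.

      # Toy occlusion handling: flag outlier pixels against a mean face, then refill them.
      import numpy as np

      def detect_and_recover(query, mean_face, n_iter=3, k=2.5):
          recovered = query.astype(float).copy()
          occluded = np.zeros(query.shape, dtype=bool)
          for _ in range(n_iter):
              residual = np.abs(recovered - mean_face)
              occluded = residual > k * residual.std()   # crude occlusion map
              recovered[occluded] = mean_face[occluded]  # refill flagged pixels
          return recovered, occluded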

  19. Active glass-type human augmented cognition system considering attention and intention

    NASA Astrophysics Data System (ADS)

    Kim, Bumhwi; Ojha, Amitash; Lee, Minho

    2015-10-01

    Human cognition is the result of an interaction of several complex cognitive processes with limited capabilities. Therefore, the primary objective of human cognitive augmentation is to assist and expand these limited human cognitive capabilities, independently or together. In this study, we propose a glass-type human augmented cognition system that attempts to actively assist human memory functions by providing relevant, necessary and intended information while constantly assessing the intention of the user. To achieve this, we exploit selective attention and intention processes. Although the system can be used in various real-life scenarios, we test its performance in a person-identification scenario. To detect the intended face, the system analyses gaze points and changes in pupil size to determine the intention of the user: together, sustained gaze and a change in pupil size indicate that the user intends to know the identity of, and information about, the person in question. The system then retrieves several clues through a speech recognition system, retrieves relevant information about the face, and finally displays it through a head-mounted display. We present the performance of several components of the system. Our results show that active and relevant assistance based on the user's intention significantly helps to enhance memory functions.
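
    A hypothetical reduction of the intention-detection step to code: sustained gaze inside a detected face region combined with a relative pupil-size increase is taken to mean "the user wants this identity". The dwell and dilation thresholds are invented, not the system's parameters.

      # Hypothetical intention trigger from gaze points and pupil size.
      def wants_identity(gaze_points, face_box, pupil_sizes, baseline_pupil,
                         min_dwell=30, pupil_gain=1.10):
          x0, y0, x1, y1 = face_box
          dwell = sum(x0 <= x <= x1 and y0 <= y <= y1 for x, y in gaze_points)
          dilated = (sum(pupil_sizes) / len(pupil_sizes)) > pupil_gain * baseline_pupil
          return dwell >= min_dwell and dilated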

  20. Eccentricity in planetary systems and the role of binarity. Sample definition, initial results, and the system of HD 211847

    NASA Astrophysics Data System (ADS)

    Moutou, C.; Vigan, A.; Mesa, D.; Desidera, S.; Thébault, P.; Zurlo, A.; Salter, G.

    2017-06-01

    We explore the multiplicity of exoplanet host stars with high-resolution images obtained with VLT/SPHERE. Two different samples of systems were observed: one containing low-eccentricity outer planets, and the other containing high-eccentricity outer planets. We find that 10 out of 34 stars in the high-eccentricity systems are members of a binary, while the proportion is 3 out of 27 for circular systems. Eccentric-exoplanet hosts are, therefore, significantly more likely to have a stellar companion than circular-exoplanet hosts. The median magnitude contrast over the 68 data sets is 11.26 and 9.25, in H and K, respectively, at 0.30 arcsec. The derived detection limits reveal that binaries with separations of less than 50 au are rarer for exoplanet hosts than for field stars. Our results also imply that the majority of high-eccentricity planets are not embedded in multiple stellar systems (24 out of 34), since our detection limits exclude the presence of a stellar companion. We detect the low-mass stellar companions of HD 7449 and HD 211847, both members of our high-eccentricity sample. HD 7449B was already detected and our independent observation is in agreement with this earlier work. HD 211847's substellar companion, previously detected by the radial velocity method, is actually a low-mass star seen face-on. The role of stellar multiplicity in shaping planetary systems is confirmed by this work, although it does not appear as the only source of dynamical excitation. Based on observations collected with SPHERE on the Very Large Telescope (ESO, Chile).

  1. Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.

    PubMed

    Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong

    2016-01-01

    This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high and low levels of negative symptoms (n = 15, respectively) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms regardless of the valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process and associated diminished positive memory may relate to pathological mechanisms for negative symptoms.
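
    In probe-detection designs of this kind, the attention bias score is typically the mean response time when the probe replaces the neutral face minus the mean response time when it replaces the emotional face, so positive values indicate attention toward the emotional face. A short sketch with invented response times:

      # Attention bias score for a visual probe detection (dot-probe) task.
      import numpy as np

      rt_probe_at_neutral   = np.array([512, 540, 498, 525, 530], dtype=float)   # invented ms
      rt_probe_at_emotional = np.array([545, 560, 530, 555, 548], dtype=float)   # invented ms

      bias_score = rt_probe_at_neutral.mean() - rt_probe_at_emotional.mean()
      print(round(bias_score, 1))   # negative here: attention directed away from emotional faces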

  2. Neural Computation as a Tool to Differentiate Perceptual from Emotional Processes: The Case of Anger Superiority Effect

    ERIC Educational Resources Information Center

    Mermillod, Martial; Vermeulen, Nicolas; Lundqvist, Daniel; Niedenthal, Paula M.

    2009-01-01

    Research findings in social and cognitive psychology imply that it is easier to detect angry faces than happy faces in a crowd of neutral faces [Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd--An anger superiority effect. "Journal of Personality and Social Psychology," 54(6), 917-924]. This phenomenon has been held to have…

  3. On the flexibility of social source memory: a test of the emotional incongruity hypothesis.

    PubMed

    Bell, Raoul; Buchner, Axel; Kroneisen, Meike; Giang, Trang

    2012-11-01

    A popular hypothesis in evolutionary psychology posits that reciprocal altruism is supported by a cognitive module that helps cooperative individuals to detect and remember cheaters. Consistent with this hypothesis, a source memory advantage for faces of cheaters (better memory for the cheating context in which these faces were encountered) was observed in previous studies. Here, we examined whether positive or negative expectancies would influence source memory for cheaters and cooperators. A cooperation task with virtual opponents was used in Experiments 1 and 2. Source memory for the emotionally incongruent information was enhanced relative to the congruent information: In Experiment 1, source memory was best for cheaters with likable faces and for cooperators with unlikable faces; in Experiment 2, source memory was better for smiling cheater faces than for smiling cooperator faces, and descriptively better for angry cooperator faces than for angry cheater faces. Experiments 3 and 4 showed that the emotional incongruity effect generalizes to 3rd-party reputational information (descriptions of cheating and trustworthy behavior). The results are inconsistent with the assumption of a highly specific cheater detection module. Focusing on expectancy-incongruent information may represent a more efficient, general, and hence more adaptive memory strategy for remembering exchange-relevant information than focusing only on cheaters.

  4. A randomized trial of face-to-face counselling versus telephone counselling versus bibliotherapy for occupational stress.

    PubMed

    Kilfedder, Catherine; Power, Kevin; Karatzias, Thanos; McCafferty, Aileen; Niven, Karen; Chouliara, Zoë; Galloway, Lisa; Sharp, Stephen

    2010-09-01

The aim of the present study was to compare the effectiveness and acceptability of three interventions for occupational stress. A total of 90 National Health Service employees were randomized to face-to-face counselling, telephone counselling, or bibliotherapy. Outcomes were assessed at post-intervention and 4-month follow-up. The Clinical Outcomes in Routine Evaluation (CORE), General Health Questionnaire (GHQ-12), and Perceived Stress Scale (PSS-10) were used to evaluate intervention outcomes. An intention-to-treat analysis was performed. Repeated measures analysis revealed significant time effects on all measures with the exception of CORE Risk. No significant group effects were detected on any of the outcome measures. No significant time-by-group interaction effects were detected on any of the outcome measures, with the exception of CORE Functioning and GHQ total. With regard to acceptability of interventions, participants expressed a preference for face-to-face counselling over the other two modalities. Overall, it was concluded that the three intervention groups are equally effective. Given that bibliotherapy is the least costly of the three, results from the present study might be considered in relation to a stepped care approach to occupational stress management, with bibliotherapy as the first line of intervention, followed by telephone and face-to-face counselling as required.

  5. Pornographic information of Internet views detection method based on the connected areas

    NASA Astrophysics Data System (ADS)

    Wang, Huibai; Fan, Ajie

    2017-01-01

Nowadays, online porn video broadcasting and downloading is very popular. In view of the widespread phenomenon of Internet pornography, this paper proposes a new method of pornographic video detection based on connected areas. Firstly, the video is decoded into a series of static images and skin color is detected on the extracted key frames. If the area of skin color reaches a certain threshold, the AdaBoost algorithm is used to detect the human face. Finally, the connectivity between the human face and the large area of skin color is judged to determine whether a sensitive region is present. The experimental results show that the method can effectively exclude non-pornographic videos containing people wearing little clothing. This method can improve the efficiency and reduce the workload of detection.
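
    Purely as an illustration of the pipeline sketched above (frame extraction, skin-color thresholding, then face detection), the following snippet uses OpenCV with a generic YCrCb skin rule; the skin-area threshold, color bounds, and file name are assumptions, and the face/skin connectivity check described in the paper is omitted.

    ```python
    import cv2

    SKIN_AREA_RATIO = 0.30   # assumed threshold for "large area of skin color"
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def frame_is_suspicious(frame_bgr):
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        # Broad YCrCb skin range (an assumption, not the paper's calibrated model)
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        if skin.mean() / 255.0 < SKIN_AREA_RATIO:
            return False                      # too little skin: skip the face stage
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return len(faces) > 0                 # face plus large skin area: flag frame

    cap = cv2.VideoCapture("video.mp4")       # decode the video into static frames
    flagged, total = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        flagged += int(frame_is_suspicious(frame))
    cap.release()
    print("suspicious frames:", flagged, "of", total)
    ```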

  6. Newborns' Mooney-Face Perception

    ERIC Educational Resources Information Center

    Leo, Irene; Simion, Francesca

    2009-01-01

    The aim of this study is to investigate whether newborns detect a face on the basis of a Gestalt representation based on first-order relational information (i.e., the basic arrangement of face features) by using Mooney stimuli. The incomplete 2-tone Mooney stimuli were used because they preclude focusing both on the local features (i.e., the fine…

  7. Infant Face Preferences after Binocular Visual Deprivation

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Lewis, Terri L.; Levin, Alex V.; Maurer, Daphne

    2013-01-01

    Early visual deprivation impairs some, but not all, aspects of face perception. We investigated the possible developmental roots of later abnormalities by using a face detection task to test infants treated for bilateral congenital cataract within 1 hour of their first focused visual input. The seven patients were between 5 and 12 weeks old…

  8. Neural evidence for the subliminal processing of facial trustworthiness in infancy.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-04-22

Face evaluation is thought to play a vital role in human social interactions. One prominent aspect is the evaluation of facial signs of trustworthiness, which has been shown to occur reliably, rapidly, and without conscious awareness in adults. Recent developmental work indicates that the sensitivity to facial trustworthiness has early ontogenetic origins as it can already be observed in infancy. However, it is unclear whether infants' sensitivity to facial signs of trustworthiness relies upon conscious processing of a face or, similar to adults, occurs also in response to subliminal faces. To investigate this question, we conducted an event-related brain potential (ERP) study, in which we presented 7-month-old infants with faces varying in trustworthiness. Facial stimuli were presented subliminally (below infants' face visibility threshold) for only 50 ms and then masked by presenting a scrambled face image. Our data revealed that infants' ERP responses to subliminally presented faces differed as a function of trustworthiness. Specifically, untrustworthy faces elicited an enhanced negative slow wave (800-1000 ms) at frontal and central electrodes. The current findings critically extend prior work by showing that, similar to adults, infants' neural detection of facial signs of trustworthiness also occurs in response to subliminal faces. This supports the view that detecting facial trustworthiness is an early developing and automatic process in humans. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Electrophysiological evidence for attentional capture by irrelevant angry facial expressions: Naturalistic faces.

    PubMed

    Burra, Nicolas; Coll, Sélim Yahia; Barras, Caroline; Kerzel, Dirk

    2017-01-10

Recently, research on lateralized event-related potentials (ERPs) in response to irrelevant distractors has revealed that angry but not happy schematic distractors capture spatial attention. Whether this effect occurs in the context of the natural expression of emotions is unknown. To fill this gap, observers were asked to judge the gender of a natural face surrounded by a color singleton among five other face identities. In contrast to previous studies, the similarity between the task-relevant feature (color) and the distractor features was low. On some trials, the target was displayed concurrently with an irrelevant angry or happy face. The lateralized ERPs to these distractors were measured as a marker of spatial attention. Our results revealed that angry face distractors, but not happy face distractors, triggered a PD, which is a marker of distractor suppression. Subsequent to the PD, angry distractors elicited a larger N450 component, which is associated with conflict detection. We conclude that threatening expressions have a high attentional priority because of their emotional value, resulting in early suppression and late conflict detection. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Integrated display scanner

    DOEpatents

    Veligdan, James T.

    2004-12-21

    A display scanner includes an optical panel having a plurality of stacked optical waveguides. The waveguides define an inlet face at one end and a screen at an opposite end, with each waveguide having a core laminated between cladding. A projector projects a scan beam of light into the panel inlet face for transmission from the screen as a scan line to scan a barcode. A light sensor at the inlet face detects a return beam reflected from the barcode into the screen. A decoder decodes the return beam detected by the sensor for reading the barcode. In an exemplary embodiment, the optical panel also displays a visual image thereon.

  11. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

Hair is a salient feature of the human face region and is one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis from a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
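
    As a rough sketch of the final refinement step only, the snippet below clusters the pixel colors inside an initial (e.g., graph-cut) hair mask with K-means and keeps the darkest cluster; the two-cluster choice and the "darkest cluster is hair" rule are assumptions rather than the paper's exact post-processing.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def refine_hair_mask(image_rgb, initial_mask, k=2):
        ys, xs = np.nonzero(initial_mask)        # pixels from the initial hair region
        colors = image_rgb[ys, xs].astype(float)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(colors)
        # keep the cluster with the lowest mean intensity (hair is usually darker)
        means = [colors[labels == c].mean() for c in range(k)]
        hair_cluster = int(np.argmin(means))
        keep = labels == hair_cluster
        refined = np.zeros_like(initial_mask)
        refined[ys[keep], xs[keep]] = 1
        return refined
    ```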

  12. Real-time determination of the efficacy of residual disinfection to limit wastewater contamination in a water distribution system using filtration-based luminescence.

    PubMed

    Lee, Jiyoung; Deininger, Rolf A

    2010-05-01

Water distribution systems can be vulnerable to microbial contamination through cross-connections, wastewater backflow, the intrusion of soiled water after a loss of pressure resulting from an electricity blackout, natural disaster, or intentional contamination of the system in a bioterrorism event. The most urgent matter a water treatment utility would face in this situation is detecting the presence and extent of a contamination event in real-time, so that immediate action can be taken to mitigate the problem. The current approved microbiological detection methods are culture-based plate count methods, which require incubation time (1 to 7 days). This long period of time would not be useful for the protection of public health. This study was designed to simulate wastewater intrusion in a water distribution system. The objectives were 2-fold: (1) real-time detection of water contamination, and (2) investigation of the sustainability of drinking water systems to suppress the contamination with secondary disinfectant residuals (chlorine and chloramine). The events of drinking water contamination resulting from a wastewater addition were determined by filtration-based luminescence assay. The water contamination was detected by the luminescence method within 5 minutes. The signal amplification attributed to wastewater contamination was clear: a 102-fold signal increase. After 1 hour, chlorinated water could inactivate 98.8% of the bacterial contaminant, while chloraminated water reduced it by 77.2%.

  13. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
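
    In the spirit of the appearance-based models discussed above, the sketch below builds an eigenface-style representation: training faces are projected onto their principal components and a probe face is recognized by nearest neighbour in that subspace. Array shapes and the component count are illustrative only.

    ```python
    import numpy as np

    def train_eigenfaces(train_faces, n_components=20):
        # train_faces: (n_images, height*width) matrix of flattened grayscale faces
        mean_face = train_faces.mean(axis=0)
        centered = train_faces - mean_face
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_components]                # "eigenfaces"
        coords = centered @ basis.T              # training coordinates in face space
        return mean_face, basis, coords

    def recognize(face, mean_face, basis, coords, labels):
        w = (face - mean_face) @ basis.T         # project the probe face
        return labels[np.argmin(np.linalg.norm(coords - w, axis=1))]
    ```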

  14. An Evaluation of Stereoscopic Digital Mammography for Earlier Detection of Breast Cancer and Reduced Rate of Recall

    DTIC Science & Technology

    2004-08-01

on a pair of high-resolution LCD medical monitors. The change to the new workstation has required us to rewrite the software... In the original CRT-based system, the two images forming a stereo pair were displayed alternately on the same CRT face, at a high frame rate (120 Hz)... then, separately, receive the stereo screening exam on the research GE digital mammography unit.

  15. FORTRAN Automated Code Evaluation System (faces) system documentation, version 2, mod 0. [error detection codes/user manuals (computer programs)

    NASA Technical Reports Server (NTRS)

    1975-01-01

A system is presented which processes FORTRAN based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. Also, it emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.

  16. Optimum Sensors Integration for Multi-Sensor Multi-Target Environment for Ballistic Missile Defense Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Barhen, Jacob; Glover, Charles Wayne

    2012-01-01

    Multi-sensor networks may face resource limitations in a dynamically evolving multiple target tracking scenario. It is necessary to task the sensors efficiently so that the overall system performance is maximized within the system constraints. The central sensor resource manager may control the sensors to meet objective functions that are formulated to meet system goals such as minimization of track loss, maximization of probability of target detection, and minimization of track error. This paper discusses the variety of techniques that may be utilized to optimize sensor performance for either near term gain or future reward over a longer time horizon.

  17. Long-term animal experiments with an intraventricular axial flow blood pump.

    PubMed

    Yamazaki, K; Kormos, R L; Litwak, P; Tagusari, O; Mori, T; Antaki, J F; Kameneva, M; Watach, M; Gordon, L; Mukuo, H; Umezu, M; Tomioka, J; Outa, E; Griffith, B P; Koyanagai, H

    1997-01-01

A miniature intraventricular axial flow blood pump (IVAP) is undergoing in vivo evaluation in calves. The IVAP system consists of a miniature (φ 13.9 mm) axial flow pump that resides within the left ventricular (LV) chamber and a brushless DC motor. The pump is fabricated from titanium alloy, and the pump weight is 170 g. It produces a flow rate of over 5 L/min against 100 mmHg pressure at 9,000 rpm with an 8 W total power consumption. The maximum total efficiency exceeds 17%. A purged lip seal system is used in prototype no. 8, and a newly developed "Cool-Seal" (a low temperature mechanical seal) is used in prototype no. 9. In the Cool-Seal system, a large amount of purge flow is introduced behind the seal faces to augment convective heat transfer, keeping the seal face temperature at a low level for prevention of heat denaturation of blood proteins. The Cool-Seal system consumes < 10 cc purge fluid per day and has greatly extended seal life. The pumps were implanted in three calves (26, 30, and 168 days of support). The pump was inserted through a left thoracotomy at the fifth intercostal space. Two pursestring sutures were placed on the LV apex, and the apex was cored with a myocardial punch. The pump was inserted into the LV with the outlet cannula smoothly passing through the aortic valve without any difficulty. Only 5 min elapsed between the time of chest opening and initiation of pumping. Pump function remained stable throughout in all experiments. No cardiac arrhythmias were detected, even at treadmill exercise tests. The plasma free hemoglobin level remained in the acceptable range. Post mortem examination did not reveal any interference between the pump and the mitral apparatus. No major thromboembolism was detected in the vital organs in Cases 1 or 2, but a few small renal infarcts were detected in Case 3.

  18. Construct and face validity of the American College of Surgeons/Association of Program Directors in Surgery laparoscopic troubleshooting team training exercise.

    PubMed

    Arain, Nabeel A; Hogg, Deborah C; Gala, Rajiv B; Bhoja, Ravi; Tesfay, Seifu T; Webb, Erin M; Scott, Daniel J

    2012-01-01

    Our aim was to develop an objective scoring system and evaluate construct and face validity for a laparoscopic troubleshooting team training exercise. Surgery and gynecology novices (n = 14) and experts (n = 10) participated. Assessments included the following: time-out, scenario decision making (SDM) score (based on essential treatments rendered and completion time), operating room communication assessment (investigator developed), line operations safety audits (teamwork), and National Aeronautics and Space Administration-Task Load Index (workload). Significant differences were detected for SDM scores for scenarios 1 (192 vs 278; P = .01) and 3 (129 vs 225; P = .004), operating room communication assessment (67 vs 91; P = .002), and line operations safety audits (58 vs 87; P = .001), but not for time-out (46 vs 51) or scenario 2 SDM score (301 vs 322). Workload was similar for both groups and face validity (8.8 on a 10-point scale) was strongly supported. Objective decision-making scoring for 2 of 3 scenarios and communication and teamwork ratings showed construct validity. Face validity and participant feedback were excellent. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Face detection in color images using skin color, Laplacian of Gaussian, and Euler number

    NASA Astrophysics Data System (ADS)

    Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek

    2010-02-01

In this paper, a feature-based approach to face detection is proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify the image into skin and non-skin regions. The edge detector used for this purpose is the Laplacian of Gaussian (LoG), which is found to be appropriate for images containing multiple faces with noise in them. Eight-connectivity analysis of these regions segregates them as probable face or non-face. The procedure is made more robust by identifying local features within these skin regions, which include the number of holes, the percentage of skin, and the golden ratio. The proposed method has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all the complex facial features. The result obtained has a calculated accuracy of 86.5% on a test set of 230 images.
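
    A minimal sketch of this ensemble, assuming SciPy and scikit-image, is given below: chrominance-based skin masking, Laplacian-of-Gaussian edges, eight-connectivity labeling, and an Euler-number (hole) test per region. All thresholds are illustrative rather than the values used in the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace, label
    from skimage.color import rgb2ycbcr
    from skimage.measure import regionprops

    def candidate_face_regions(image_rgb):
        ycbcr = rgb2ycbcr(image_rgb)
        cb, cr = ycbcr[..., 1], ycbcr[..., 2]
        skin = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)  # assumed bounds
        edges = np.abs(gaussian_laplace(image_rgb.mean(axis=2), sigma=2.0)) > 2.0
        mask = skin & ~edges                     # strong LoG edges removed from skin
        labeled, _ = label(mask, structure=np.ones((3, 3)))   # eight-connectivity
        faces = []
        for region in regionprops(labeled):
            holes = 1 - region.euler_number      # eyes and mouth appear as holes
            if region.area > 500 and holes >= 1:
                faces.append(region.bbox)
        return faces
    ```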

  20. Is Beauty in the Face of the Beholder?

    PubMed Central

    Laeng, Bruno; Vermeer, Oddrun; Sulutvedt, Unni

    2013-01-01

    Opposing forces influence assortative mating so that one seeks a similar mate while at the same time avoiding inbreeding with close relatives. Thus, mate choice may be a balancing of phenotypic similarity and dissimilarity between partners. In the present study, we assessed the role of resemblance to Self’s facial traits in judgments of physical attractiveness. Participants chose the most attractive face image of their romantic partner among several variants, where the faces were morphed so as to include only 22% of another face. Participants distinctly preferred a “Self-based morph” (i.e., their partner’s face with a small amount of Self’s face blended into it) to other morphed images. The Self-based morph was also preferred to the morph of their partner’s face blended with the partner’s same-sex “prototype”, although the latter face was (“objectively”) judged more attractive by other individuals. When ranking morphs differing in level of amalgamation (i.e., 11% vs. 22% vs. 33%) of another face, the 22% was chosen consistently as the preferred morph and, in particular, when Self was blended in the partner’s face. A forced-choice signal-detection paradigm showed that the effect of self-resemblance operated at an unconscious level, since the same participants were unable to detect the presence of their own faces in the above morphs. We concluded that individuals, if given the opportunity, seek to promote “positive assortment” for Self’s phenotype, especially when the level of similarity approaches an optimal point that is similar to Self without causing a conscious acknowledgment of the similarity. PMID:23874608

  1. [Neural basis of self-face recognition: social aspects].

    PubMed

    Sugiura, Motoaki

    2012-07-01

Considering the importance of the face in social survival and evidence from evolutionary psychology of visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies so far has, however, not provided an encouraging finding in this respect. Self-face specific activation has typically been reported in the areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential process, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment showed a response to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have also responded to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection for physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive process, and that technically the manipulation of social context is key to its success.

  2. Area X-ray or UV camera system for high-intensity beams

    DOEpatents

Chapman, Henry N.; Bajt, Sasa; Spiller, Eberhard A.; Hau-Riege, Stefan; Marchesini, Stefano

    2010-03-02

    A system in one embodiment includes a source for directing a beam of radiation at a sample; a multilayer mirror having a face oriented at an angle of less than 90 degrees from an axis of the beam from the source, the mirror reflecting at least a portion of the radiation after the beam encounters a sample; and a pixellated detector for detecting radiation reflected by the mirror. A method in a further embodiment includes directing a beam of radiation at a sample; reflecting at least some of the radiation diffracted by the sample; not reflecting at least a majority of the radiation that is not diffracted by the sample; and detecting at least some of the reflected radiation. A method in yet another embodiment includes directing a beam of radiation at a sample; reflecting at least some of the radiation diffracted by the sample using a multilayer mirror; and detecting at least some of the reflected radiation.

  3. Mild Depression Detection of College Students: an EEG-Based Solution with Free Viewing Tasks.

    PubMed

    Li, Xiaowei; Hu, Bin; Shen, Ji; Xu, Tingting; Retcliffe, Martyn

    2015-12-01

Depression is a common mental disorder with growing prevalence; however, current diagnosis of depression faces problems of patient denial, reliance on clinical experience, and subjective biases from self-report. By using a combination of linear and nonlinear EEG features in our research, we aim to develop a more accurate and objective approach to depression detection that supports the process of diagnosis and assists the monitoring of risk factors. By classifying EEG features during a free viewing task, an accuracy of 99.1%, the highest to our knowledge so far, was achieved using a kNN classifier to discriminate depressed from non-depressed subjects. Furthermore, through correlation analysis, the performance of each electrode was compared to assess the feasibility of a single-channel EEG recording depression detection system. Combined with wearable EEG collection devices, our method offers the possibility of a cost-effective, wearable, ubiquitous system for doctors to monitor their patients with depression, and for normal people to understand their mental states in time.
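
    A minimal sketch of the classification stage is shown below; it assumes that per-subject EEG feature vectors (linear plus nonlinear features) have already been extracted, and the file names, neighbour count, and cross-validation setup are illustrative.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X = np.load("eeg_features.npy")   # hypothetical (n_subjects, n_features) matrix
    y = np.load("labels.npy")         # 1 = depressed, 0 = non-depressed

    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy: %.3f" % scores.mean())
    ```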

  4. The Road to the Common PET/CT Detector

    NASA Astrophysics Data System (ADS)

    Nassalski, Antoni; Moszynski, Marek; Szczesniak, Tomasz; Wolski, Dariusz; Batsch, Tadeusz

    2007-10-01

    Growing interest in the development of dual modality positron emission/X-rays tomography (PET/CT) systems prompts researchers to face a new challenge: to acquire both the anatomical and functional information in the same measurement, simultaneously using the same detection system and electronics. The aim of this work was to study a detector consisting of LaBr3, LSO or LYSO pixel crystals coupled to an avalanche photodiode (APD). The measurements covered tests of the detectors in PET and CT modes, respectively. The measurements included the determination of light output, energy resolution, the non-proportionality of the light yield and the time resolution for 511 keV annihilation quanta; analysis also included characterizing the PET detector, and determining the dependence of counting rate versus mean current of the APD in the X-ray detection. In the present experiment, the use of counting and current modes in the CT detection increases the dynamic range of the measured dose of X-rays by a factor of 20, compared to the counting mode alone.

  5. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    PubMed

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods.
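
    A compact sketch of the post-processing stage described above is given below: CNN features (assumed to be extracted elsewhere, e.g., from a pretrained network's penultimate layer) are reduced with PCA and classified as live versus presentation attack with an SVM. File names and the component count are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    X_train = np.load("cnn_features_train.npy")  # hypothetical (n_samples, n_features)
    y_train = np.load("labels_train.npy")        # 1 = live finger-vein, 0 = attack
    X_test = np.load("cnn_features_test.npy")

    pad = make_pipeline(PCA(n_components=100), SVC(kernel="rbf"))
    pad.fit(X_train, y_train)
    print(pad.predict(X_test)[:10])              # live/attack decisions for test images
    ```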

  6. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    PubMed Central

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-01-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods. PMID:28974031

  7. Optical filter for highlighting spectral features part I: design and development of the filter for discrimination of human skin with and without an application of cosmetic foundation.

    PubMed

    Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki

    2011-03-28

    Light reflected from an object's surface contains much information about its physical and chemical properties. Changes in the physical properties of an object are barely detectable in spectra. Conventional trichromatic systems, on the other hand, cannot detect most spectral features because spectral information is compressively represented as trichromatic signals forming a three-dimensional subspace. We propose a method for designing a filter that optically modulates a camera's spectral sensitivity to find an alternative subspace highlighting an object's spectral features more effectively than the original trichromatic space. We designed and developed a filter that detects cosmetic foundations on human face. Results confirmed that the filter can visualize and nondestructively inspect the foundation distribution.

  8. Brief Report: Reduced Prioritization of Facial Threat in Adults with Autism

    ERIC Educational Resources Information Center

    Sasson, Noah J.; Shasteen, Jonathon R.; Pinkham, Amy E.

    2016-01-01

    Typically-developing (TD) adults detect angry faces more efficiently within a crowd than non-threatening faces. Prior studies of this social threat superiority effect (TSE) in ASD using tasks consisting of schematic faces and homogeneous crowds have produced mixed results. Here, we employ a more ecologically-valid test of the social TSE and find…

  9. A survey of the dummy face and human face stimuli used in BCI paradigm.

    PubMed

    Chen, Long; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2015-01-15

It has been shown that human face stimuli are superior to flash-only stimuli in BCI systems. However, human face stimuli may lead to copyright infringement problems and are hard to edit according to the requirements of a BCI study. Recently, it was reported that facial expression changes could be produced by changing a curve in a dummy face, which achieved good performance when applied to visual P300 BCI systems. In this paper, four different paradigms were presented, called the dummy face pattern, human face pattern, inverted dummy face pattern, and inverted human face pattern, to evaluate the performance of dummy face stimuli compared with human face stimuli. The key point that determines the value of dummy faces in BCI systems is whether dummy face stimuli can achieve performance as good as that of human face stimuli. Online and offline results of the four paradigms were obtained and comparatively analyzed. Online and offline results showed that there was no significant difference between dummy faces and human faces in ERPs, classification accuracy, or information transfer rate when they were applied in BCI systems. Dummy face stimuli evoked large ERPs and achieved classification accuracy and information transfer rates as high as those of human face stimuli. Since dummy faces are easy to edit and have no copyright infringement problems, they are a good choice for optimizing the stimuli of BCI systems. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. SSV Launch Monitoring Strategies: HGDS Design Implementation Through System Maturity

    NASA Technical Reports Server (NTRS)

    Shoemaker, Marc D.; Crimi, Thomas

    2010-01-01

With over 500,000 gallons of liquid hydrogen and liquid oxygen, it is of vital importance to monitor the space shuttle vehicle (SSV) from external tank (ET) load through launch. The Hazardous Gas Detection System (HGDS) was installed as the primary system responsible for monitoring fuel leaks within the orbiter and ET. The HGDS was designed to obtain the lowest possible detection limits with the best resolution while monitoring the SSV for any hydrogen, helium, oxygen, or argon as the main requirement. The HGDS is a redundant mass spectrometer used for real-time monitoring during Power Reactant Storage and Distribution (PRSD) load and ET load through launch or scrub. This system also performs SSV processing leak checks of the Tail Service Mast (TSM) umbilical quick disconnects (QD's), Ground Umbilical Carrier Plate (GUCP) QD's and supports auxiliary power unit (APU) system tests. From design to initial implementation and operations, the HGDS has evolved into a mature and reliable launch support system. This paper will discuss the operational challenges and lessons learned from facing design deficiencies, validation and maintenance efforts, life cycle issues, and evolving requirements.

  11. Thermal imaging to detect physiological indicators of stress in humans

    NASA Astrophysics Data System (ADS)

    Cross, Carl B.; Skipper, Julie A.; Petkie, Douglas T.

    2013-05-01

    Real-time, stand-off sensing of human subjects to detect emotional state would be valuable in many defense, security and medical scenarios. We are developing a multimodal sensor platform that incorporates high-resolution electro-optical and mid-wave infrared (MWIR) cameras and a millimeter-wave radar system to identify individuals who are psychologically stressed. Recent experiments have aimed to: 1) assess responses to physical versus psychological stressors; 2) examine the impact of topical skin products on thermal signatures; and 3) evaluate the fidelity of vital signs extracted from thermal imagery and radar signatures. Registered image and sensor data were collected as subjects (n=32) performed mental and physical tasks. In each image, the face was segmented into 29 non-overlapping segments based on fiducial points automatically output by our facial feature tracker. Image features were defined that facilitated discrimination between psychological and physical stress states. To test the ability to intentionally mask thermal responses indicative of anxiety or fear, subjects applied one of four topical skin products to one half of their face before performing tasks. Finally, we evaluated the performance of two non-contact techniques to detect respiration and heart rate: chest displacement extracted from the radar signal and temperature fluctuations at the nose tip and regions near superficial arteries to detect respiration and heart rates, respectively, extracted from the MWIR imagery. Our results are very satisfactory: classification of physical versus psychological stressors is repeatedly greater than 90%, thermal masking was almost always ineffective, and accurate heart and respiration rates are detectable in both thermal and radar signatures.
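
    As an illustration of the last point only, a respiration rate can be estimated from a nose-tip temperature trace via its dominant spectral peak; the sketch below assumes a fixed camera frame rate and a plausible respiration band, both of which are illustrative.

    ```python
    import numpy as np

    def respiration_rate_bpm(temps, fs=30.0, band=(0.1, 0.5)):
        # temps: 1-D array of mean nose-tip temperature per frame, fs in frames/s
        temps = temps - temps.mean()
        spectrum = np.abs(np.fft.rfft(temps))
        freqs = np.fft.rfftfreq(len(temps), d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])   # 6-30 breaths per minute
        peak = freqs[in_band][np.argmax(spectrum[in_band])]
        return peak * 60.0
    ```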

  12. Applying LED in full-field optical coherence tomography for gastrointestinal endoscopy

    NASA Astrophysics Data System (ADS)

    Yang, Bor-Wen; Wang, Yu-Yen; Juan, Yu-Shan; Hsu, Sheng-Jie

    2015-08-01

    Optical coherence tomography (OCT) has become an important medical imaging technology due to its non-invasiveness and high resolution. Full-field optical coherence tomography (FF-OCT) is a scanning scheme especially suitable for en face imaging as it employs a CMOS/CCD device for parallel pixels processing. FF-OCT can also be applied to high-speed endoscopic imaging. Applying cylindrical scanning and a right-angle prism, we successfully obtained a 360° tomography of the inner wall of an intestinal cavity through an FF-OCT system with an LED source. The 10-μm scale resolution enables the early detection of gastrointestinal lesions, which can increase detection rates for esophageal, stomach, or vaginal cancer. All devices used in this system can be integrated by MOEMS technology to contribute to the studies of gastrointestinal medicine and advanced endoscopy technology.

  13. Robust Face Detection from Still Images

    DTIC Science & Technology

    2014-01-01

significant change in false acceptance rates. Keywords: face detection; illumination; skin color variation; Haar-like features; OpenCV. ... OpenCV and an algorithm which used histogram equalization. The test is performed against 17 subjects under 576 viewing conditions from the extended Yale... The original OpenCV algorithm proved the least accurate, having a hit rate of only 75.6%. It also had the lowest FAR, but only by a slight margin at 25.2
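
    The comparison sketched in this fragment can be reproduced in outline with OpenCV's stock frontal-face Haar cascade, toggling histogram equalization as the only difference between runs; the test image name below is hypothetical.

    ```python
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(image_bgr, equalize=False):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        if equalize:
            gray = cv2.equalizeHist(gray)   # compensate for illumination variation
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    img = cv2.imread("subject.png")         # hypothetical test image
    print("plain:", len(detect_faces(img)),
          "equalized:", len(detect_faces(img, equalize=True)))
    ```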

  14. Nondestructive Evaluation (NDE) for Inspection of Composite Sandwich Structures

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Parker, F. Raymond

    2014-01-01

    Composite honeycomb structures are widely used in aerospace applications due to their low weight and high strength advantages. Developing nondestructive evaluation (NDE) inspection methods are essential for their safe performance. Flash thermography is a commonly used technique for composite honeycomb structure inspections due to its large area and rapid inspection capability. Flash thermography is shown to be sensitive for detection of face sheet impact damage and face sheet to core disbond. Data processing techniques, using principal component analysis to improve the defect contrast, are discussed. Limitations to the thermal detection of the core are investigated. In addition to flash thermography, X-ray computed tomography is used. The aluminum honeycomb core provides excellent X-ray contrast compared to the composite face sheet. The X-ray CT technique was used to detect impact damage, core crushing, and skin to core disbonds. Additionally, the X-ray CT technique is used to validate the thermography results.
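
    A rough sketch of the principal component contrast enhancement mentioned above: each frame of the post-flash cooling sequence is flattened, the sequence is decomposed, and the low-order component images are inspected for defect contrast. Shapes and the component count are illustrative.

    ```python
    import numpy as np

    def principal_component_images(frames, n_components=3):
        # frames: (n_frames, height, width) thermogram sequence after the flash
        n, h, w = frames.shape
        data = frames.reshape(n, h * w).astype(float)
        data -= data.mean(axis=0)                  # remove the mean cooling behaviour
        _, _, vt = np.linalg.svd(data, full_matrices=False)
        return vt[:n_components].reshape(n_components, h, w)   # component images
    ```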

  15. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions

    PubMed Central

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600–700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression. PMID:25206321

  16. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    PubMed

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600-700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression.

  17. A new paradigm of oral cancer detection using digital infrared thermal imaging

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Mukhopadhyay, S.; Dasgupta, A.; Banerjee, S.; Mukhopadhyay, S.; Patsa, S.; Ray, J. G.; Chaudhuri, K.

    2016-03-01

Histopathology is considered the gold standard for oral cancer detection, but a major fraction of the patient population is incapable of accessing such healthcare facilities due to poverty. Moreover, such analysis may report false negatives when test tissue is not collected from the exact cancerous location. The proposed work introduces a pioneering computer-aided paradigm of fast, non-invasive and non-ionizing oral cancer detection using Digital Infrared Thermal Imaging (DITI). Due to aberrant metabolic activities in carcinogenic facial regions, heat signatures of patients are different from those of normal subjects. The proposed work utilizes the asymmetry of the temperature distribution of facial regions as the principal cue for cancer detection. Three views of a subject, viz. front, left and right, are acquired using a long-wave infrared (7.5-13 μm) camera for analysing the distribution of temperature. We study the asymmetry of the facial temperature distribution between: a) left and right profile faces and b) the left and right halves of the frontal face. Comparison of the temperature distributions suggests that patients manifest greater asymmetry than normal subjects. For classification, we initially use k-means and fuzzy k-means for unsupervised clustering, followed by cluster class prototype assignment based on majority voting. Average classification accuracies of 91.5% and 92.8% are achieved by the k-means and fuzzy k-means frameworks for the frontal face. The corresponding metrics for the profile faces are 93.4% and 95%. Combining features of frontal and profile faces, average accuracies increase to 96.2% and 97.6%, respectively, for the k-means and fuzzy k-means frameworks.
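
    A minimal sketch of the unsupervised classification stage is given below: asymmetry features (assumed to have been extracted from the left/right temperature distributions beforehand) are clustered with k-means, and each cluster is mapped to patient/normal by majority vote. File names and the feature definition are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.load("asymmetry_features.npy")   # hypothetical (n_subjects, n_features)
    y = np.load("ground_truth.npy")         # 1 = patient, 0 = normal (voting only)

    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    pred = np.zeros_like(y)
    for c in (0, 1):                         # majority vote assigns a class per cluster
        pred[clusters == c] = np.bincount(y[clusters == c]).argmax()
    print("classification accuracy:", (pred == y).mean())
    ```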

  18. Automated Inspection of Defects in Optical Fiber Connector End Face Using Novel Morphology Approaches.

    PubMed

    Mei, Shuang; Wang, Yudan; Wen, Guojun; Hu, Yang

    2018-05-03

    Increasing deployment of optical fiber networks and the need for reliable high bandwidth make the task of inspecting optical fiber connector end faces a crucial process that must not be neglected. Traditional end face inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. More seriously, the inspection results cannot be quantified for subsequent analysis. Aiming at the characteristics of typical defects in the inspection process for optical fiber end faces, we propose a novel method, “difference of min-max ranking filtering” (DO2MR), for detection of region-based defects, e.g., dirt, oil, contamination, pits, and chips, and a special model, a “linear enhancement inspector” (LEI), for the detection of scratches. The DO2MR is a morphology method that intends to determine whether a pixel belongs to a defective region by comparing the difference of gray values of pixels in the neighborhood around the pixel. The LEI is also a morphology method that is designed to search for scratches at different orientations with a special linear detector. These two approaches can be easily integrated into optical inspection equipment for automatic quality verification. As far as we know, this is the first time that complete defect detection methods for optical fiber end faces are available in the literature. Experimental results demonstrate that the proposed DO2MR and LEI models yield good comprehensive performance with high precision and accepted recall rates, and the image-level detection accuracies reach 96.0 and 89.3%, respectively.
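
    The core of the DO2MR idea can be sketched in a few lines: for each pixel, the spread between the local maximum and local minimum gray value is computed and thresholded, so flat background stays below threshold while region-based defects stand out. The window size and the mean-plus-sigma thresholding rule below are assumptions, not the paper's tuned parameters.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def do2mr_defect_mask(gray, window=5, sigma_gain=3.0):
        gray = gray.astype(float)
        residual = maximum_filter(gray, size=window) - minimum_filter(gray, size=window)
        thresh = residual.mean() + sigma_gain * residual.std()   # assumed rule
        return residual > thresh        # True where a region-based defect is suspected
    ```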

  19. Adsorption mechanisms of the nonequilibrium incorporation of admixtures in a growing crystal

    NASA Astrophysics Data System (ADS)

    Franke, V. D.; Punin, Yu. O.; Smetannikova, O. G.; Kenunen, D. S.

    2007-12-01

    The nonequilibrium partition of components between a crystal and solution is mainly controlled by impurity adsorption on the surface of the growing crystal. The specificity of adsorption on the faces of various simple forms leads to the sectorial zoning of crystals. This effect was studied experimentally for several crystallizing systems with different impurities, including isomorphous, 2d-isomorphous, and nonisomorphous, readily adsorbed impurities. In all systems, the sectorial selectivity of impurity incorporation into host crystals has been detected with partition coefficients many times higher than in the case of equilibrium partition. Specific capture of impurities by certain faces is accompanied by inhibition of their growth and modification of habit. The decrease in nonequilibrium partition coefficients with degree of oversaturation provides entrapment of impurities in the growing crystals. Thereby, the adsorption mechanism works in much the same mode for impurities of quite different nature. The behavior of partition coefficient differs drastically from impurity capturing by diffusion mechanism.

  20. Modular expert system for the diagnosis of operating conditions of industrial anaerobic digestion plants.

    PubMed

    Lardon, L; Puñal, A; Martinez, J A; Steyer, J P

    2005-01-01

Anaerobic digestion (AD) plants are highly efficient wastewater treatment processes with possible energetic valorisation. Despite these advantages, many industries are still reluctant to use them because of their instability in the face of changes in operating conditions. To face this drawback and to enhance the industrial use of anaerobic digestion, one solution is to develop and implement knowledge base (KB) systems that are able to detect and assess in real-time the quality of the operating conditions of the processes. Case-based techniques and heuristic approaches have already been tested and validated on AD processes, but two major properties were lacking: modularity of the system (the knowledge base system should be easily tuned to a new process and should still work if one or more sensors are added or removed) and uncertainty management (the assessment of the KB system should remain relevant even in the case of too poor or conflicting information sources). This paper addresses these two points and presents a modular KB system where an uncertain reasoning formalism is used to combine partial and complementary fuzzy diagnosis modules. The interest of the approach is demonstrated through real-life experiments performed on an industrial 2,000 m3 CSTR anaerobic digester.
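
    A toy sketch of combining partial fuzzy diagnosis modules is shown below: each module returns a membership degree for "degraded operation" (or None when its sensor is unavailable), and the available modules are merged with a confidence-weighted average so the diagnosis degrades gracefully when sensors are added or removed. The variables, thresholds, and weights are illustrative, not the system's actual knowledge base.

    ```python
    def ph_module(ph):
        # low pH suggests acidification; membership rises as pH drops below 6.8
        return None if ph is None else max(0.0, min(1.0, (6.8 - ph) / 1.0))

    def vfa_module(vfa_mg_l):
        # high volatile fatty acids also indicate degraded operation
        return None if vfa_mg_l is None else max(0.0, min(1.0, (vfa_mg_l - 1000) / 2000))

    def combine(degrees, weights):
        pairs = [(d, w) for d, w in zip(degrees, weights) if d is not None]
        if not pairs:
            return None                  # no information source available
        return sum(d * w for d, w in pairs) / sum(w for _, w in pairs)

    # e.g., pH sensor missing, VFA high: an assessment is still produced
    print(combine([ph_module(None), vfa_module(2500)], weights=[0.6, 0.4]))
    ```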

  1. Collaborative Accounting Problem Solving via Group Support Systems in a Face-to-Face versus Distant Learning Environment.

    ERIC Educational Resources Information Center

    Burke, Jacqueline A.

    2001-01-01

    Accounting students (n=128) used either face-to-face or distant Group support systems to complete collaborative tasks. Participation and social presence perceptions were significantly higher face to face. Task difficulty did not affect participation in either environment. (Contains 54 references.) (JOW)

  2. More efficient rejection of happy than of angry face distractors in visual search.

    PubMed

    Horstmann, Gernot; Scharlau, Ingrid; Ansorge, Ulrich

    2006-12-01

    In the present study, we examined whether the detection advantage for negative-face targets in crowds of positive-face distractors over positive-face targets in crowds of negative faces can be explained by differentially efficient distractor rejection. Search Condition A demonstrated more efficient distractor rejection with negative-face targets in positive-face crowds than vice versa. Search Condition B showed that target identity alone is not sufficient to account for this effect, because there was no difference in processing efficiency for positive- and negative-face targets within neutral crowds. Search Condition C showed differentially efficient processing with neutral-face targets among positive- or negative-face distractors. These results were obtained with both a within-participants (Experiment 1) and a between-participants (Experiment 2) design. The pattern of results is consistent with the assumption that efficient rejection of positive (more homogenous) distractors is an important determinant of performance in search among (face) distractors.

  3. Discrimination between smiling faces: Human observers vs. automated face analysis.

    PubMed

    Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo

    2018-05-11

    This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Analysis of the development of missile-borne IR imaging detecting technologies

    NASA Astrophysics Data System (ADS)

    Fan, Jinxiang; Wang, Feng

    2017-10-01

    Today's infrared imaging guiding missiles are facing many challenges. With the development of target stealth, new-style IR countermeasures and penetration technologies, as well as the complexity of operational environments, infrared imaging guiding missiles must meet higher requirements for efficient target detection, resistance to interference and jamming, and operational adaptability in complex, dynamic operating environments. Missile-borne infrared imaging detecting systems are constrained by practical considerations like cost, size, weight and power (SWaP), and lifecycle requirements. Future-generation infrared imaging guiding missiles need to be resilient to changing operating environments and capable of doing more with fewer resources. Advanced IR imaging detecting and information exploring technologies are the key technologies that affect the future direction of IR imaging guidance missiles. Research on infrared imaging detecting and information exploring technologies will support the development of more robust and efficient missile-borne infrared imaging detecting systems. Novel IR imaging technologies, such as infrared adaptive spectral imaging, are the key to effectively detecting, recognizing and tracking targets under complicated operating and countermeasure environments. Innovative techniques for exploiting the target, background, and countermeasure information provided by the detection system are the basis for the missile to recognize targets and to counter interference, jamming and countermeasures. Modular hardware and software development is the enabler for implementing multi-purpose, multi-function solutions. Uncooled IRFPA detectors and high-operating-temperature IRFPA detectors, as well as commercial-off-the-shelf (COTS) technology, will support the implementation of low-cost infrared imaging guiding missiles. In this paper, the current status and features of missile-borne IR imaging detecting technologies are summarized, and the key technologies and their development trends are analyzed.

  5. Face Alignment via Regressing Local Binary Features.

    PubMed

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop, or 300 FPS on a mobile phone, for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform a quantitative evaluation of how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.
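
    A conceptual sketch of the two-stage idea, assuming scikit-learn as a stand-in: leaf indices of regression trees trained on local patch features act as sparse binary features, and a global linear regressor maps them to landmark-position updates. This is not the authors' cascade; the data shapes and hyperparameters are illustrative.

```python
# Minimal sketch of the local-binary-feature idea (not the authors' code): per-landmark
# regression trees are trained on local patch features, their leaf indices are one-hot
# encoded into a sparse binary vector, and a global linear model regresses the landmark
# position update from that vector. Data shapes here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_local = rng.normal(size=(500, 32))      # local patch features around one landmark
y_offset = rng.normal(size=(500, 2))      # ground-truth (dx, dy) landmark offsets

# Stage 1: trees learn locally; only their leaf indices are kept as binary features.
forest = RandomForestRegressor(n_estimators=10, max_depth=5, random_state=0)
forest.fit(X_local, y_offset)
leaves = forest.apply(X_local)            # (n_samples, n_trees) leaf indices
binary_features = OneHotEncoder(handle_unknown="ignore").fit_transform(leaves)

# Stage 2: a global linear regressor maps the sparse binary features to the shape update.
global_reg = Ridge(alpha=1.0).fit(binary_features, y_offset)
print(global_reg.predict(binary_features[:1]))
```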

  6. Locating faces in color photographs using neural networks

    NASA Astrophysics Data System (ADS)

    Brown, Joe R.; Talley, Jim

    1994-03-01

    This paper summarizes a research effort in finding the locations and sizes of faces in color images (photographs, video stills, etc.) if, in fact, faces are present. Scenarios for using such a system include serving as the means of localizing skin for automatic color balancing during photo processing, or serving as a front-end, in a customs port-of-entry context, for a system that identifies persona non grata given a database of known faces. The approach presented here is a hybrid system including a neural pre-processor, some conventional image processing steps, and a neural classifier as the final face/non-face discriminator. Neither the training (containing 17,655 faces) nor the test (containing 1829 faces) imagery databases were constrained in their content or quality. The results for the pilot system are reported along with a discussion of improvements to the current system.

  7. Face repetition detection and social interest: An ERP study in adults with and without Williams syndrome.

    PubMed

    Key, Alexandra P; Dykens, Elisabeth M

    2016-12-01

    The present study examined possible neural mechanisms underlying increased social interest in persons with Williams syndrome (WS). Visual event-related potentials (ERPs) during passive viewing were used to compare incidental memory traces for repeated vs. single presentations of previously unfamiliar social (faces) and nonsocial (houses) images in 26 adults with WS and 26 typical adults. Results indicated that participants with WS developed familiarity with the repeated faces and houses (frontal N400 response), but only typical adults evidenced the parietal old/new effect (previously associated with stimulus recollection) for the repeated faces. There was also no evidence of exceptional salience of social information in WS, as ERP markers of memory for repeated faces vs. houses were not significantly different. Thus, while persons with WS exhibit behavioral evidence of increased social interest, their processing of social information in the absence of specific instructions may be relatively superficial. The ERP evidence of face repetition detection in WS was independent of IQ and the earlier perceptual differentiation of social vs. nonsocial stimuli. Large individual differences in ERPs of participants with WS may provide valuable information for understanding the WS phenotype and have relevance for educational and treatment purposes.

  8. Method for secure electronic voting system: face recognition based approach

    NASA Astrophysics Data System (ADS)

    Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran

    2017-06-01

    In this paper, we propose a framework for a low-cost, secure electronic voting system based on face recognition. Local Binary Patterns (LBP) are used to characterize face features as texture, and the chi-square distribution is then used for image classification. Two parallel systems, based on a smartphone and a web application, are developed for the face learning and verification modules. The proposed system has two-tier security, using a person ID followed by face verification. A class-specific threshold controls the security level of face verification. Our system is evaluated on three standard databases and one real home-based database and achieves satisfactory recognition accuracy. Consequently, the proposed system provides a secure, hassle-free voting system that is less intrusive than other biometrics.
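
    A minimal sketch of the commonly used pairing of LBP texture histograms with a chi-square comparison, assuming scikit-image is available; the LBP parameters, histogram granularity and acceptance threshold are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of LBP texture features compared with a chi-square measure for face
# matching; parameters (P, R, threshold) are illustrative and not taken from the paper.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, P=8, R=1.0):
    """Uniform LBP codes pooled into a normalized histogram."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (smaller = more similar)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

enrolled = lbp_histogram(np.random.rand(64, 64))   # stand-in for the enrolled face image
probe = lbp_histogram(np.random.rand(64, 64))      # stand-in for the probe face image
accept = chi_square(enrolled, probe) < 0.5         # class-specific threshold (illustrative)
print(accept)
```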

  9. Formal implementation of a performance evaluation model for the face recognition system.

    PubMed

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

    Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has recently been attracting considerable attention. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems and implement an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system and by formalizing the performance test process.

  10. Who is who: areas of the brain associated with recognizing and naming famous faces.

    PubMed

    Giussani, Carlo; Roux, Franck-Emmanuel; Bello, Lorenzo; Lauwers-Cances, Valérie; Papagno, Costanza; Gaini, Sergio M; Puel, Michelle; Démonet, Jean-François

    2009-02-01

    It has been hypothesized that specific brain regions involved in face naming may exist in the brain. To spare these areas and to gain a better understanding of their organization, the authors studied patients who underwent surgery by using direct electrical stimulation mapping for brain tumors, and they compared an object-naming task to a famous face-naming task. Fifty-six patients with brain tumors (39 and 17 in the left and right hemispheres, respectively) and with no significant preoperative overall language deficit were prospectively studied over a 2-year period. Four patients who had a partially selective famous face anomia and 2 with prosopagnosia were not included in the final analysis. Face-naming interferences were exclusively localized in small cortical areas (< 1 cm2). Among 35 patients whose dominant left hemisphere was studied, 26 face-naming specific areas (that is, sites of interference in face naming only and not in object naming) were found. These face naming-specific sites were significantly detected in 2 regions: in the left frontal areas of the superior, middle, and inferior frontal gyri (p < 0.001) and in the anterior part of the superior and middle temporal gyri (p < 0.01). Variable patterns of interference were observed (speech arrest, anomia, phonemic, or semantic paraphasia) probably related to the different stages in famous face processing. Only 4 famous face-naming interferences were found in the right hemisphere. Relative anatomical segregation of naming categories within language areas was detected. This study showed that famous face naming was preferentially processed in the left frontal and anterior temporal gyri. The authors think it is necessary to adapt naming tasks in neurosurgical patients to the brain region studied.

  11. Systems, Apparatuses, and Methods for Using Durable Adhesively Bonded Joints for Sandwich Structures

    NASA Technical Reports Server (NTRS)

    Smeltzer, III, Stanley S. (Inventor); Lundgren, Eric C. (Inventor)

    2014-01-01

    Systems, methods, and apparatus for increasing the durability of adhesively bonded joints in a sandwich structure. Such systems, methods, and apparatus include a first face sheet and a second face sheet as well as an insert structure, the insert structure having a first insert face sheet, a second insert face sheet, and an insert core material. In addition, sandwich core material is arranged between the first face sheet and the second face sheet. A primary bondline may be coupled to the face sheet(s) and the splice. Further, systems, methods, and apparatus of the present disclosure advantageously reduce the load, provide a redundant path, reduce structural fatigue, and/or increase fatigue life.

  12. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity.

    PubMed

    Zhang, Xiaoyu; Ju, Han; Penney, Trevor B; VanDongen, Antonius M J

    2017-01-01

    Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher's discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.
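
    A hedged illustration of the Fisher-discriminant step on synthetic data: spike counts on familiar versus novel trials are fed to a linear discriminant, and neurons are ranked by their discriminant weights to single out the "critical" units. The data, shapes and library choice are assumptions, not the authors' simulation code.

```python
# Illustrative sketch of identifying "critical" neurons with a Fisher discriminant:
# synthetic spike counts for familiar vs. novel trials are classified, and neurons are
# ranked by the magnitude of their discriminant weights. All values are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 50
spike_counts = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
familiar = rng.integers(0, 2, n_trials)              # 1 = previously seen face, 0 = novel
spike_counts[familiar == 1, :5] += 3.0               # a few neurons respond more to familiar

lda = LinearDiscriminantAnalysis().fit(spike_counts, familiar)
weights = np.abs(lda.coef_).ravel()
critical_neurons = np.argsort(weights)[::-1][:5]     # neurons most predictive of familiarity
print(critical_neurons, lda.score(spike_counts, familiar))
```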

  13. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity

    PubMed Central

    2017-01-01

    Abstract Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits. PMID:28534043

  14. Detection, Location, and Characterization of Hydroacoustic Signals Using Seafloor Cable Networks Offshore Japan

    NASA Astrophysics Data System (ADS)

    Suyehiro, K.; Sugioka, H.; Watanabe, T.

    2008-12-01

    The hydroacoustic monitoring by the International Monitoring System (IMS) for CTBT (Comprehensive Nuclear-Test-Ban Treaty) verification utilizes hydrophone stations (6) and seismic stations (5, called T-phase stations) for worldwide detection. Conspicuous signals of natural origin include those from earthquakes, volcanic eruptions, or whale calls; artificial sources include non-nuclear explosions and airgun shots. It is important for the IMS system to detect and locate hydroacoustic events with sufficient accuracy, and to correctly characterize the signals and identify the source. As there are a number of seafloor cable networks operated offshore the Japanese islands, mostly facing the Pacific Ocean, for monitoring regional seismicity, the data from these stations (pressure and seismic sensors) may be utilized to increase the capability of the IMS. We use these data to compare selected event parameters with those determined by the IMS. In particular, there have been several unconventional acoustic signals in the western Pacific, which were also captured by IMS hydrophones across the Pacific, in the period from 2007 to the present. These anomalous examples, as well as dynamite shots used for seismic crustal structure studies and other natural sources, will be presented in order to help improve the IMS verification capabilities for the detection, location and characterization of anomalous signals.

  15. Wireless Metal Detection and Surface Coverage Sensing for All-Surface Induction Heating

    PubMed Central

    Kilic, Veli Tayfun; Unal, Emre; Demir, Hilmi Volkan

    2016-01-01

    All-surface induction heating systems, typically comprising small-area coils, face a major challenge in detecting the presence of a metallic vessel and identifying its partial surface coverage over the coils to determine which of the coils to power up. The difficulty arises from the fact that the user can heat vessels made of a wide variety of metals (and their alloys). To address this problem, we propose and demonstrate a new wireless detection methodology that allows for detecting the presence of metallic vessels, uniquely sensing their surface coverage, and identifying their effective material type in all-surface induction heating systems. The proposed method is based on telemetrically measuring, simultaneously, the inductance and resistance of the induction coil coupled with the vessel in the heating system. Here, variations in the inductance and resistance values for an all-surface heating coil loaded by vessels (made of stainless steel and aluminum) at different positions were systematically investigated at different frequencies. Results show that, independent of the metal material type, unique identification of the surface coverage is possible at all frequencies. Additionally, using the magnitude and phase information extracted from the coupled coil impedance, unique identification of the vessel's effective material is also achievable, this time independent of its surface coverage. PMID:26978367

  16. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus in both academia and industry over the last several years because of its wide range of potential applications and its importance in meeting the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation and positioning of the subject. However, 3D face recognition still needs to tackle the problem of the deformation of facial geometry that results from a subject's expression changes. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.

  17. Neurotechnology for intelligence analysts

    NASA Astrophysics Data System (ADS)

    Kruse, Amy A.; Boyd, Karen C.; Schulman, Joshua J.

    2006-05-01

    Geospatial Intelligence Analysts are currently faced with an enormous volume of imagery, only a fraction of which can be processed or reviewed in a timely operational manner. Computer-based target detection efforts have failed to yield the speed, flexibility and accuracy of the human visual system. Rather than focus solely on artificial systems, we hypothesize that the human visual system is still the best target detection apparatus currently in use, and with the addition of neuroscience-based measurement capabilities it can surpass the throughput of the unaided human severalfold. Using electroencephalography (EEG), Thorpe et al1 described a fast signal in the brain associated with the early detection of targets in static imagery using a Rapid Serial Visual Presentation (RSVP) paradigm. This finding suggests that it may be possible to extract target detection signals from complex imagery in real time utilizing non-invasive neurophysiological assessment tools. To transform this phenomenon into a capability for defense applications, the Defense Advanced Research Projects Agency (DARPA) currently is sponsoring an effort titled Neurotechnology for Intelligence Analysts (NIA). The vision of the NIA program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and overall accuracy of the assessments. Successful development of a neurobiologically-based image triage system will enable image analysts to train more effectively and process imagery with greater speed and precision.

  18. Eye pupil detection system using an ensemble of regression forest and fast radial symmetry transform with a near infrared camera

    NASA Astrophysics Data System (ADS)

    Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul

    2017-09-01

    In this paper, we focus on pupil center detection in various video sequences that include head pose variations and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye by using cascaded local regression based on a regression forest. Based on the rough location of the pupil, a fast radial symmetry transform is then applied around the previously found pupil location to refine the pupil center. As the final step, the pupil displacement between the previous frame and the current frame is estimated to maintain accuracy against false localizations occurring in particular frames. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with an infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that its pupil center detection capability is better than that of other methods, with a shorter processing time.
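
    A simplified, single-radius sketch of a fast radial symmetry transform for dark blobs such as pupils (in the spirit of Loy and Zelinsky); the radius, gradient threshold and alpha exponent are illustrative, and this is not the paper's implementation.

```python
# Simplified, single-radius variant of a fast radial symmetry transform for dark blobs
# such as pupils; a rough sketch with assumed parameters, not the paper's code.
import numpy as np
from scipy import ndimage

def radial_symmetry_dark(gray, radius=8, alpha=2.0, grad_thresh=10.0):
    gy = ndimage.sobel(gray.astype(float), axis=0)
    gx = ndimage.sobel(gray.astype(float), axis=1)
    mag = np.hypot(gx, gy)
    O = np.zeros_like(mag)                      # orientation projection image
    M = np.zeros_like(mag)                      # magnitude projection image
    ys, xs = np.nonzero(mag > grad_thresh)
    # Gradients point from dark to bright, so voting *against* the gradient direction
    # accumulates evidence at the centre of a dark circular region.
    vy = ys - np.round(radius * gy[ys, xs] / mag[ys, xs]).astype(int)
    vx = xs - np.round(radius * gx[ys, xs] / mag[ys, xs]).astype(int)
    vy = np.clip(vy, 0, gray.shape[0] - 1)
    vx = np.clip(vx, 0, gray.shape[1] - 1)
    np.add.at(O, (vy, vx), 1.0)
    np.add.at(M, (vy, vx), mag[ys, xs])
    F = (O / (O.max() + 1e-9)) ** alpha * (M / (M.max() + 1e-9))
    return ndimage.gaussian_filter(F, sigma=radius / 2)

eye_patch = np.random.randint(0, 256, (60, 80)).astype(np.uint8)   # stand-in eye image
symmetry = radial_symmetry_dark(eye_patch)
print(np.unravel_index(np.argmax(symmetry), symmetry.shape))        # refined pupil centre
```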

  19. Molecular Detection of Vertebrates in Stream Water: A Demonstration Using Rocky Mountain Tailed Frogs and Idaho Giant Salamanders

    PubMed Central

    Goldberg, Caren S.; Pilliod, David S.; Arkle, Robert S.; Waits, Lisette P.

    2011-01-01

    Stream ecosystems harbor many secretive and imperiled species, and studies of vertebrates in these systems face the challenges of relatively low detection rates and high costs. Environmental DNA (eDNA) has recently been confirmed as a sensitive and efficient tool for documenting aquatic vertebrates in wetlands and in a large river and canal system. However, it was unclear whether this tool could be used to detect low-density vertebrates in fast-moving streams where shed cells may travel rapidly away from their source. To evaluate the potential utility of eDNA techniques in stream systems, we designed targeted primers to amplify a short, species-specific DNA fragment for two secretive stream amphibian species in the northwestern region of the United States (Rocky Mountain tailed frogs, Ascaphus montanus, and Idaho giant salamanders, Dicamptodon aterrimus). We tested three DNA extraction and five PCR protocols to determine whether we could detect eDNA of these species in filtered water samples from five streams with varying densities of these species in central Idaho, USA. We successfully amplified and sequenced the targeted DNA regions for both species from stream water filter samples. We detected Idaho giant salamanders in all samples and Rocky Mountain tailed frogs in four of five streams and found some indication that these species are more difficult to detect using eDNA in early spring than in early fall. While the sensitivity of this method across taxa remains to be determined, the use of eDNA could revolutionize surveys for rare and invasive stream species. With this study, the utility of eDNA techniques for detecting aquatic vertebrates has been demonstrated across the majority of freshwater systems, setting the stage for an innovative transformation in approaches for aquatic research. PMID:21818382

  20. Molecular detection of vertebrates in stream water: a demonstration using Rocky Mountain tailed frogs and Idaho giant salamanders.

    PubMed

    Goldberg, Caren S; Pilliod, David S; Arkle, Robert S; Waits, Lisette P

    2011-01-01

    Stream ecosystems harbor many secretive and imperiled species, and studies of vertebrates in these systems face the challenges of relatively low detection rates and high costs. Environmental DNA (eDNA) has recently been confirmed as a sensitive and efficient tool for documenting aquatic vertebrates in wetlands and in a large river and canal system. However, it was unclear whether this tool could be used to detect low-density vertebrates in fast-moving streams where shed cells may travel rapidly away from their source. To evaluate the potential utility of eDNA techniques in stream systems, we designed targeted primers to amplify a short, species-specific DNA fragment for two secretive stream amphibian species in the northwestern region of the United States (Rocky Mountain tailed frogs, Ascaphus montanus, and Idaho giant salamanders, Dicamptodon aterrimus). We tested three DNA extraction and five PCR protocols to determine whether we could detect eDNA of these species in filtered water samples from five streams with varying densities of these species in central Idaho, USA. We successfully amplified and sequenced the targeted DNA regions for both species from stream water filter samples. We detected Idaho giant salamanders in all samples and Rocky Mountain tailed frogs in four of five streams and found some indication that these species are more difficult to detect using eDNA in early spring than in early fall. While the sensitivity of this method across taxa remains to be determined, the use of eDNA could revolutionize surveys for rare and invasive stream species. With this study, the utility of eDNA techniques for detecting aquatic vertebrates has been demonstrated across the majority of freshwater systems, setting the stage for an innovative transformation in approaches for aquatic research.

  1. Molecular detection of vertebrates in stream water: A demonstration using rocky mountain tailed frogs and Idaho giant salamanders

    USGS Publications Warehouse

    Goldberg, C.S.; Pilliod, D.S.; Arkle, R.S.; Waits, L.P.

    2011-01-01

    Stream ecosystems harbor many secretive and imperiled species, and studies of vertebrates in these systems face the challenges of relatively low detection rates and high costs. Environmental DNA (eDNA) has recently been confirmed as a sensitive and efficient tool for documenting aquatic vertebrates in wetlands and in a large river and canal system. However, it was unclear whether this tool could be used to detect low-density vertebrates in fast-moving streams where shed cells may travel rapidly away from their source. To evaluate the potential utility of eDNA techniques in stream systems, we designed targeted primers to amplify a short, species-specific DNA fragment for two secretive stream amphibian species in the northwestern region of the United States (Rocky Mountain tailed frogs, Ascaphus montanus, and Idaho giant salamanders, Dicamptodon aterrimus). We tested three DNA extraction and five PCR protocols to determine whether we could detect eDNA of these species in filtered water samples from five streams with varying densities of these species in central Idaho, USA. We successfully amplified and sequenced the targeted DNA regions for both species from stream water filter samples. We detected Idaho giant salamanders in all samples and Rocky Mountain tailed frogs in four of five streams and found some indication that these species are more difficult to detect using eDNA in early spring than in early fall. While the sensitivity of this method across taxa remains to be determined, the use of eDNA could revolutionize surveys for rare and invasive stream species. With this study, the utility of eDNA techniques for detecting aquatic vertebrates has been demonstrated across the majority of freshwater systems, setting the stage for an innovative transformation in approaches for aquatic research.

  2. Colorectal anastomotic leakage: aspects of prevention, detection and treatment.

    PubMed

    Daams, Freek; Luyer, Misha; Lange, Johan F

    2013-04-21

    All colorectal surgeons are faced from time to time with anastomotic leakage after colorectal surgery. This complication has been studied extensively, yet without a significant reduction in incidence over the last 30 years. New prevention techniques based on innovative anastomotic methods should improve results in the future, but standardization and "teachability" must be guaranteed. Risk scoring enables intra-operative decision-making on whether to restore continuity or to divert. Early detection can reduce delays in diagnosis as long as a standard system is used. For treatment options, no firm evidence is available, but future studies could focus on repair and salvage of the anastomosis on the one hand, or anastomotic breakdown and definitive colostomy on the other.

  3. Target detection portal

    DOEpatents

    Linker, Kevin L.; Brusseau, Charles A.

    2002-01-01

    A portal apparatus for screening persons or objects for the presence of trace amounts of target substances such as explosives, narcotics, radioactive materials, and certain chemical materials. The portal apparatus can have a one-sided exhaust for an exhaust stream, an interior wall configuration with a concave-shape across a horizontal cross-section for each of two facing sides to result in improved airflow and reduced washout relative to a configuration with substantially flat parallel sides; air curtains to reduce washout; ionizing sprays to collect particles bound by static forces, as well as gas jet nozzles to dislodge particles bound by adhesion to the screened person or object. The portal apparatus can be included in a detection system with a preconcentrator and a detector.

  4. Computer-Aided Diagnosis Systems for Lung Cancer: Challenges and Methodologies

    PubMed Central

    El-Baz, Ayman; Beache, Garth M.; Gimel'farb, Georgy; Suzuki, Kenji; Okada, Kazunori; Elnakib, Ahmed; Soliman, Ahmed; Abdollahi, Behnoush

    2013-01-01

    This paper overviews one of the most important, interesting, and challenging problems in oncology, the problem of lung cancer diagnosis. Developing an effective computer-aided diagnosis (CAD) system for lung cancer is of great clinical importance and can increase the patient's chance of survival. For this reason, CAD systems for lung cancer have been investigated in a huge number of research studies. A typical CAD system for lung cancer diagnosis is composed of four main processing steps: segmentation of the lung fields, detection of nodules inside the lung fields, segmentation of the detected nodules, and diagnosis of the nodules as benign or malignant. This paper overviews the current state-of-the-art techniques that have been developed to implement each of these CAD processing steps. For each technique, various aspects of technical issues, implemented methodologies, training and testing databases, and validation methods, as well as achieved performances, are described. In addition, the paper addresses several challenges that researchers face in each implementation step and outlines the strengths and drawbacks of the existing approaches for lung cancer CAD systems. PMID:23431282

  5. Mechanisms of face perception

    PubMed Central

    Tsao, Doris Y.

    2009-01-01

    Faces are among the most informative stimuli we ever perceive: Even a split-second glimpse of a person's face tells us their identity, sex, mood, age, race, and direction of attention. The specialness of face processing is acknowledged in the artificial vision community, where contests for face recognition algorithms abound. Neurological evidence strongly implicates a dedicated machinery for face processing in the human brain, to explain the double dissociability of face and object recognition deficits. Furthermore, it has recently become clear that macaques too have specialized neural machinery for processing faces. Here we propose a unifying hypothesis, deduced from computational, neurological, fMRI, and single-unit experiments: that what makes face processing special is that it is gated by an obligatory detection process. We will clarify this idea in concrete algorithmic terms, and show how it can explain a variety of phenomena associated with face processing. PMID:18558862

  6. Functional Polymers in Protein Detection Platforms: Optical, Electrochemical, Electrical, Mass-Sensitive, and Magnetic Biosensors

    PubMed Central

    Hahm, Jong-in

    2011-01-01

    The rapidly growing field of proteomics and related applied sectors in the life sciences demands convenient methodologies for detecting and measuring the levels of specific proteins as well as for screening and analyzing for interacting protein systems. Materials utilized for such protein detection and measurement platforms should meet particular specifications which include ease-of-mass manufacture, biological stability, chemical functionality, cost effectiveness, and portability. Polymers can satisfy many of these requirements and are often considered as choice materials in various biological detection platforms. Therefore, tremendous research efforts have been made for developing new polymers both in macroscopic and nanoscopic length scales as well as applying existing polymeric materials for protein measurements. In this review article, both conventional and alternative techniques for protein detection are overviewed while focusing on the use of various polymeric materials in different protein sensing technologies. Among many available detection mechanisms, most common approaches such as optical, electrochemical, electrical, mass-sensitive, and magnetic methods are comprehensively discussed in this article. Desired properties of polymers exploited for each type of protein detection approach are summarized. Current challenges associated with the application of polymeric materials are examined in each protein detection category. Difficulties facing both quantitative and qualitative protein measurements are also identified. The latest efforts on the development and evaluation of nanoscale polymeric systems for improved protein detection are also discussed from the standpoint of quantitative and qualitative measurements. Finally, future research directions towards further advancements in the field are considered. PMID:21691441

  7. You may look unhappy unless you smile: the distinctiveness of a smiling face against faces without an explicit smile.

    PubMed

    Park, Hyung-Bum; Han, Ji-Eun; Hyun, Joo-Seok

    2015-05-01

    An expressionless face is often perceived as rude, whereas a smiling face is considered hospitable. Repeated exposure to such perceptions may have developed a stereotype of categorizing an expressionless face as expressing negative emotion. To test this idea, we displayed a search array in which the target was an expressionless face and the distractors were either smiling or frowning faces, and we manipulated set size. Search reaction times were delayed with frowning distractors, and the delays became more evident as the set size increased. We also devised a short-term comparison task in which participants compared two sequential sets of expressionless, smiling, and frowning faces. Detection of an expression change across the sets was highly inaccurate when the change was made between a frowning and an expressionless face. These results indicate that subjects confused the emotions expressed by frowning and expressionless faces, suggesting that it is difficult to distinguish expressionless faces from frowning faces. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image and dot-marks pasted on a human face, in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, and the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms one image that includes the regularly reflected component (a polarizing filter is placed in front of CCD-1) and another image that does not include this component (no polarizing filter in front of CCD-2). Thus, three images with different reflection characteristics are obtained by the three CCDs. Through experiments, it is shown that two kinds of subtraction operations between the three images output from the CCDs accentuate three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and center-of-gravity calculation of the feature points is possible.
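
    A minimal sketch of the subtraction, thresholding and centre-of-gravity steps described above, using synthetic frames in place of the actual CCD outputs; the threshold and image sizes are illustrative.

```python
# Minimal sketch of subtraction / thresholding / centroid extraction, using synthetic
# frames in place of the three CCD outputs; threshold and sizes are illustrative.
import numpy as np
from scipy import ndimage

def feature_point_centroids(img_a, img_b, thresh=40):
    """Subtract two differently illuminated frames, threshold the difference,
    and return the centre of gravity of each bright connected blob."""
    diff = img_a.astype(int) - img_b.astype(int)
    mask = diff > thresh                             # bright only in img_a (e.g. pupil glow)
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

frame_coaxial = np.random.randint(0, 30, (120, 160)).astype(np.uint8)
frame_coaxial[60:64, 80:84] = 255                    # simulated bright-pupil reflection
frame_offaxis = np.random.randint(0, 30, (120, 160)).astype(np.uint8)
print(feature_point_centroids(frame_coaxial, frame_offaxis))
```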

  9. Cross-modal enhancement of speech detection in young and older adults: does signal content matter?

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra

    2011-01-01

    The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.

  10. Cancer care management through a mobile phone health approach: key considerations.

    PubMed

    Mohammadzadeh, Niloofar; Safdari, Reza; Rahimi, Azin

    2013-01-01

    Greater use of mobile phone devices seems inevitable because the health industry and cancer care are facing challenges such as resource constraints, rising care costs, the need for immediate access to healthcare data of various types (such as audio, video and text) for early detection and treatment of patients, and increasing remote aids in telemedicine. Physicians, in order to study the causes of cancer, detect cancer earlier, act on prevention measures, determine the effectiveness of treatment and specify the reasons for treatment ineffectiveness, need access to accurate, comprehensive and timely cancer data. Mobile devices provide opportunities and can play an important role in consulting, diagnosis, treatment, and quick access to health information, and their easy portability makes them ideal tools for healthcare providers in cancer care management. Key factors in cancer care management systems based on a mobile health approach must be considered, such as human resources, confidentiality and privacy, legal and ethical issues, appropriate ICT and provider infrastructure, and costs in general, as well as interoperability, human relationships, types of mobile devices and telecommunication-related points in particular. The successful implementation of mobile-based systems in cancer care management will constantly face many challenges. Hence, in applying mobile cancer care, involvement of users and consideration of their needs in all phases of the project, provision of adequate bandwidth, preparation of standard tools that provide maximum mobility and flexibility for users, reduction of obstacles that interrupt network communications, and use of suitable communication protocols are essential. It is obvious that identifying and reducing barriers and strengthening the positive points will play a significant role in appropriate planning and in promoting the achievements of mobile cancer care systems. The aim of this article is to explain the key points which should be considered in designing appropriate mobile health systems in cancer care as an approach for improving cancer care management.

  11. A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Tamai, Tetsuo

    2009-01-01

    Since the complexity of software systems continues to grow, most engineers face two serious problems: the state space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game style model checking that generates counter-examples can guide refinement or identify validated formulas, which solves the system debugging problem. Furthermore, output of our game style method will give significant information to engineers in detecting where errors have occurred and what the causes of the errors are.

  12. Potentials of Optical Damage Assessment Techniques in Automotive Crash-Concepts composed of FRP-Steel Hybrid Material Systems

    NASA Astrophysics Data System (ADS)

    Dlugosch, M.; Spiegelhalter, B.; Soot, T.; Lukaszewicz, D.; Fritsch, J.; Hiermaier, S.

    2017-05-01

    With car manufacturers simultaneously facing increasing passive safety and efficiency requirements, FRP-metal hybrid material systems are one way to design lightweight and crashworthy vehicle structures. Generic automotive hybrid structural concepts have been tested under crash loading conditions. In order to assess the state of overall damage and structural integrity, and primarily to validate simulation data, several NDT techniques have been assessed regarding their potential to detect common damage mechanisms in such hybrid systems. Significant potentials were found particularly in combining 3D-topography laser scanning and X-Ray imaging results. Ultrasonic testing proved to be limited by the signal coupling quality on damaged or curved surfaces.

  13. Some emerging applications of lasers

    NASA Astrophysics Data System (ADS)

    Christensen, C. P.

    1982-10-01

    Applications of lasers in photochemistry, advanced instrumentation, and information storage are discussed. Laser microchemistry offers a number of new methods for altering the morphology of a solid surface with high spatial resolution. Recent experiments in material deposition, material removal, and alloying and doping are reviewed. A basic optical disk storage system is described and the problems faced by this application are discussed, in particular those pertaining to recording media. An advanced erasable system based on the magnetooptic effect is described. Applications of lasers for remote sensing are discussed, including various lidar systems, the use of laser-induced fluorescence for oil spill characterization and uranium exploration, and the use of differential absorption for detection of atmospheric constituents, temperature, and humidity.

  14. A concept of a space hazard counteraction system: Astronomical aspects

    NASA Astrophysics Data System (ADS)

    Shustov, B. M.; Rykhlova, L. V.; Kuleshov, Yu. P.; Dubov, Yu. N.; Elkin, K. S.; Veniaminov, S. S.; Borovin, G. K.; Molotov, I. E.; Naroenkov, S. A.; Barabanov, S. I.; Emel'yanenko, V. V.; Devyatkin, A. V.; Medvedev, Yu. D.; Shor, V. A.; Kholshevnikov, K. V.

    2013-07-01

    The basic science of astronomy and, primarily, its branch responsible for studying the Solar System, face the most important practical task posed by nature and the development of human civilization—to study space hazards and to seek methods of counteracting them. In pursuance of the joint Resolution of the Federal Space Agency (Roscosmos) and the RAS (Russian Academy of Sciences) Space Council of June 23, 2010, the RAS Institute of Astronomy in collaboration with other scientific and industrial organizations prepared a draft concept of the federal-level program targeted at creating a system of space hazard detection and counteraction. The main ideas and astronomical content of the concept are considered in this article.

  15. Knife blade as a facial foreign body.

    PubMed

    Gardner, P A; Righi, P; Shahbahrami, P B

    1997-08-01

    This case demonstrates the unpredictability of foreign bodies in the face. The retained knife blade eluded detection on two separate examinations. The essential components to making a correct diagnosis of a foreign body following a stabbing to the face include a thorough review of the mechanism of injury, a complete head and neck examination, a high index of suspicion, and plain radiographs of the face.

  16. Using YOLO based deep learning network for real time detection and localization of lung nodules from low dose CT scans

    NASA Astrophysics Data System (ADS)

    Ramachandran S., Sindhu; George, Jose; Skaria, Shibon; V. V., Varun

    2018-02-01

    Lung cancer is the leading cause of cancer related deaths in the world. The survival rate can be improved if the presence of lung nodules are detected early. This has also led to more focus being given to computer aided detection (CAD) and diagnosis of lung nodules. The arbitrariness of shape, size and texture of lung nodules is a challenge to be faced when developing these detection systems. In the proposed work we use convolutional neural networks to learn the features for nodule detection, replacing the traditional method of handcrafting features like geometric shape or texture. Our network uses the DetectNet architecture based on YOLO (You Only Look Once) to detect the nodules in CT scans of lung. In this architecture, object detection is treated as a regression problem with a single convolutional network simultaneously predicting multiple bounding boxes and class probabilities for those boxes. By performing training using chest CT scans from Lung Image Database Consortium (LIDC), NVIDIA DIGITS and Caffe deep learning framework, we show that nodule detection using this single neural network can result in reasonably low false positive rates with high sensitivity and precision.
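
    A small sketch of the post-processing step implied by a YOLO-style detector: candidate nodule boxes with confidence scores are pruned by greedy non-maximum suppression based on intersection-over-union. This is a generic utility, not code from the described system.

```python
# Generic sketch of IoU-based greedy non-maximum suppression over candidate boxes,
# as typically applied to the many (box, confidence) predictions of a YOLO-style network.
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one (x1, y1, x2, y2) box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.45):
    """Keep the highest-scoring boxes, discarding overlapping lower-scoring ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        order = order[1:][iou(boxes[i], boxes[order[1:]]) < iou_thresh]
    return keep

boxes = np.array([[30, 30, 60, 60], [32, 31, 61, 62], [100, 90, 130, 120]], float)
scores = np.array([0.9, 0.75, 0.6])
print(nms(boxes, scores))      # indices of the retained nodule candidates, e.g. [0, 2]
```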

  17. Cost-efficient speckle interferometry with plastic optical fiber for unobtrusive monitoring of human vital signs.

    PubMed

    Podbreznik, Peter; Đonlagić, Denis; Lešnik, Dejan; Cigale, Boris; Zazula, Damjan

    2013-10-01

    A cost-efficient plastic optical fiber (POF) system for unobtrusive monitoring of human vital signs is presented. The system is based on speckle interferometry. A laser diode is butt-coupled to the POF whose exit face projects speckle patterns onto a linear optical sensor array. Sequences of acquired speckle images are transformed into one-dimensional signals by using the phase-shifting method. The signals are analyzed by band-pass filtering and a Morlet-wavelet-based multiresolutional approach for the detection of cardiac and respiratory activities, respectively. The system is tested with 10 healthy nonhospitalized persons, lying supine on a mattress with the embedded POF. Experimental results are assessed statistically: precisions of 98.8% ± 1.5% and 97.9% ± 2.3%, sensitivities of 99.4% ± 0.6% and 95.3% ± 3%, and mean delays between interferometric detections and corresponding referential signals of 116.6 ± 55.5 and 1299.2 ± 437.3 ms for the heartbeat and respiration are obtained, respectively.
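
    An illustrative sketch of separating heartbeat and respiration from the one-dimensional interferometric signal; a Butterworth band-pass filter with peak picking stands in here for both stages, whereas the paper uses a Morlet-wavelet multiresolution analysis for respiration. The sampling rate and cut-off frequencies are assumptions.

```python
# Illustrative separation of cardiac and respiratory components from a 1-D signal by
# band-pass filtering and peak picking; all rates and bands below are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                           # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
signal = (0.2 * np.sin(2 * np.pi * 1.2 * t)          # synthetic heartbeat (~72 bpm)
          + 1.0 * np.sin(2 * np.pi * 0.25 * t)       # synthetic respiration (~15 /min)
          + 0.05 * np.random.randn(t.size))

def bandpass(x, lo, hi, fs, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

cardiac = bandpass(signal, 0.8, 2.5, fs)             # plausible heartbeat band
respiratory = bandpass(signal, 0.1, 0.5, fs)         # plausible respiration band

beats, _ = find_peaks(cardiac, distance=fs * 0.4)    # at most ~150 bpm
breaths, _ = find_peaks(respiratory, distance=fs * 2)
print(len(beats) * 2, "bpm;", len(breaths) * 2, "breaths/min")   # 30 s window -> x2
```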

  18. Appearance-Based Vision and the Automatic Generation of Object Recognition Programs

    DTIC Science & Technology

    1992-07-01

    ... are grouped into equivalence classes with respect to visible features; the equivalence classes are called aspects. A recognition strategy is generated from ... illustrates the concept. [Table 1: Summary of Sensors - rows for each sensor (e.g., edge detector, shape-from-shading) against the vertex, edge and face features they detect and whether they are active or passive.] ... An example of the detectability computation for a light-stripe range finder is shown in Figure 2 (Figure 2: Detectability of a face for a light-stripe range finder).

  19. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation to newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in subject monkeys, our results, unlike those for humans, showed no evidence of an attentional prioritisation for newborn faces by monkeys. Our demonstrations showed the validity of dot-probe task for visual attention studies in monkeys and propose a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear if nursing experiences influence their perception and recognition of infantile appraisal stimuli. We need additional comparative studies to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Monitoring robot actions for error detection and recovery

    NASA Technical Reports Server (NTRS)

    Gini, M.; Smith, R.

    1987-01-01

    Reliability is a serious problem in computer controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe preliminary experiment of a system that they designed and constructed.

  1. Explosives screening on a vehicle surface

    DOEpatents

    Parmeter, John E.; Brusseau, Charles A.; Davis, Jerry D.; Linker, Kevin L.; Hannum, David W.

    2005-02-01

    A system for detecting particles on the outer surface of a vehicle has a housing capable of being placed in a test position adjacent to, but not in contact with, a portion of the outer surface of the vehicle. An elongate sealing member is fastened to the housing along a perimeter surrounding the wall, and the elongate sealing member has a contact surface facing away from the wall to contact the outer surface of the vehicle to define a test volume when the wall is in the test position. A gas flow system has at least one gas inlet extending through the wall for providing a gas stream against the surface of the vehicle within the test volume. This gas stream, which preferably is air, dislodges particles from the surface of the vehicle covered by the housing. The gas stream exits the test volume through a gas outlet and particles in the stream are detected.

  2. Is the Face-Perception System Human-Specific at Birth?

    ERIC Educational Resources Information Center

    Di Giorgio, Elisa; Leo, Irene; Pascalis, Olivier; Simion, Francesca

    2012-01-01

    The present study investigates the human-specificity of the orienting system that allows neonates to look preferentially at faces. Three experiments were carried out to determine whether the face-perception system that is present at birth is broad enough to include both human and nonhuman primate faces. The results demonstrate that the newborns…

  3. Standardization of the face-hand test in a Brazilian multicultural population: prevalence of sensory extinction and implications for neurological diagnosis.

    PubMed

    Luvizutto, Gustavo José; Fogaroli, Marcelo Ortolani; Theotonio, Rodolfo Mazeto; Nunes, Hélio Rubens de Carvalho; Resende, Luiz Antônio de Lima; Bazan, Rodrigo

    2016-12-01

    The face-hand test is a simple, practical, and rapid test to detect neurological syndromes. However, it has not previously been assessed in a Brazilian sample; therefore, the objective of the present study was to standardize the face-hand test for use in the multicultural population of Brazil and to identify the sociodemographic factors affecting the results. This was a cross-sectional study of 150 individuals. The sociodemographic variables collected included age, gender, race, body mass index and years of education. The face-hand test was administered as 2 rounds of 10 sensory stimuli applied simultaneously to the face and hand, with the participant seated with trunk support and vision obstructed in a sound-controlled environment. The associations between the face-hand test and sociodemographic variables were analyzed using Mann-Whitney tests and Spearman correlations. Binomial models were adjusted for the number of face-hand test variations, and ROC curves evaluated the sensitivity and specificity of sensory extinction. There was no significant relationship between the sociodemographic variables and the number of stimuli perceived in the face-hand test. There was a high relative frequency of detection, 8 out of 10 stimuli, in this population. Sensory extinction occurred in 25.3% of participants, increased with increasing age (OR=1.4 [1.01-1.07]; p=0.006) and decreased significantly with increasing education (OR=0.82 [0.71-0.94]; p=0.005). In the Brazilian population, a normal face-hand test score ranges between 8 and 10 stimuli, and the results indicate that sensory extinction is associated with increased age and lower levels of education.
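
    A hedged sketch of the kind of analysis described above: a binomial (logistic) model of sensory extinction as a function of age and education, summarized with a ROC curve. The data are simulated for illustration only and do not reproduce the study's results.

```python
# Illustrative binomial (logistic) model of sensory extinction vs. age and education,
# with a ROC summary; all data and coefficients below are simulated assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
n = 150
age = rng.uniform(20, 90, n)
education_years = rng.uniform(0, 16, n)
logit = -3 + 0.05 * age - 0.2 * education_years        # assumed direction of effects
extinction = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(np.column_stack([age, education_years]), extinction)
odds_ratios = np.exp(model.coef_).ravel()              # per-year odds ratios for age, education
probs = model.predict_proba(np.column_stack([age, education_years]))[:, 1]
fpr, tpr, thresholds = roc_curve(extinction, probs)    # sensitivity/specificity trade-off
print("odds ratios:", odds_ratios, "AUC:", roc_auc_score(extinction, probs))
```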

  4. Performance Evaluation of High Speed Multicarrier System for Optical Wireless Communication

    NASA Astrophysics Data System (ADS)

    Mathur, Harshita; Deepa, T.; Bartalwar, Sophiya

    2018-04-01

    Optical wireless communication (OWC) in the infrared and visible range is an attractive solution, especially where radio communication faces challenges. Visible light communication (VLC) uses visible light in the range of 400 to 800 THz and is a subdivision of OWC technologies. With the increasing demand for wireless communications, wireless access via Wi-Fi faces many challenges, especially in terms of capacity, availability, security and efficiency. VLC uses intensity modulation and direct detection (IM/DD) techniques, which require the transmitted signals to be real-valued and non-negative. These constraints limit the choice of digital modulation techniques and result in spectrum-efficiency or power-efficiency losses. In this paper, we investigate an amplitude shift keying (ASK) based orthogonal frequency division multiplexing (OFDM) signal transmission scheme using LabVIEW for VLC technology.
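
    The IM/DD constraint mentioned above is commonly met by imposing Hermitian symmetry on the OFDM subcarriers and adding a DC bias, so that the emitted waveform is real-valued and non-negative. The sketch below illustrates that general idea in Python; the IFFT size, the 2-ASK mapping and the bias rule are illustrative assumptions and not the paper's exact design.

        import numpy as np

        N = 64                                   # IFFT size (hypothetical)
        rng = np.random.default_rng(0)
        bits = rng.integers(0, 2, N // 2 - 1)    # one bit per data subcarrier (2-ASK)
        symbols = 2.0 * bits - 1.0               # map {0,1} -> {-1,+1}

        # Hermitian-symmetric subcarrier vector: X[0] = X[N/2] = 0 and
        # X[N-k] = conj(X[k]), so the IFFT output is purely real.
        X = np.zeros(N, dtype=complex)
        X[1:N // 2] = symbols
        X[N // 2 + 1:] = np.conj(symbols[::-1])

        x = np.fft.ifft(X).real                  # real-valued OFDM time-domain signal
        x_dc = x + abs(x.min())                  # DC bias -> non-negative LED drive signal
        assert np.all(x_dc >= 0)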

  5. Visual mismatch negativity and categorization.

    PubMed

    Czigler, István

    2014-07-01

    Visual mismatch negativity (vMMN) component of event-related potentials is elicited by stimuli violating the category rule of stimulus sequences, even if such stimuli are outside the focus of attention. Category-related vMMN emerges to colors, and color-related vMMN is sensitive to language-related effects. A higher-order perceptual category, bilateral symmetry is also represented in the memory processes underlying vMMN. As a relatively large body of research shows, violating the emotional category of human faces elicits vMMN. Another face-related category sensitive to the violation of regular presentation is gender. Finally, vMMN was elicited to the laterality of hands. As results on category-related vMMN show, stimulus representation in the non-conscious change detection system is fairly complex, and it is not restricted to the registration of elementary perceptual regularities.

  6. Feasibility of Using Wideband Microwave System for Non-Invasive Detection and Monitoring of Pulmonary Oedema

    NASA Astrophysics Data System (ADS)

    Rezaeieh, S. Ahdi; Zamani, A.; Bialkowski, K. S.; Mahmoud, A.; Abbosh, A. M.

    2015-09-01

    Pulmonary oedema is a common manifestation of various fatal diseases that can be caused by cardiac or non-cardiac syndromes. The accumulated fluid has a considerably higher dielectric constant than lung tissue and can thus be detected using microwave techniques. Therefore, a non-invasive microwave system for the early detection of pulmonary oedema is presented. It employs a platform in the form of a foam-based bed that contains two linear arrays of wideband antennas covering the band 0.7-1 GHz. The platform is designed such that, during the tests, the subject lies on the bed with the back of the torso facing the antenna arrays. The antennas are controlled using a switching network that is connected to a compact network analyzer. A novel frequency-based imaging algorithm is used to process the recorded signals and generate an image of the torso showing any accumulated fluid in the lungs. The system is verified on an artificial torso phantom and on animal organs. As a feasibility study, preclinical tests are conducted on healthy subjects to determine the type of images obtained and the statistics and intensity threshold levels needed to differentiate between healthy and unhealthy subjects.

  7. Feasibility of Using Wideband Microwave System for Non-Invasive Detection and Monitoring of Pulmonary Oedema

    PubMed Central

    Rezaeieh, S. Ahdi; Zamani, A.; Bialkowski, K. S.; Mahmoud, A.; Abbosh, A. M.

    2015-01-01

    Pulmonary oedema is a common manifestation of various fatal diseases that can be caused by cardiac or non-cardiac syndromes. The accumulated fluid has a considerably higher dielectric constant than lung tissue and can thus be detected using microwave techniques. Therefore, a non-invasive microwave system for the early detection of pulmonary oedema is presented. It employs a platform in the form of a foam-based bed that contains two linear arrays of wideband antennas covering the band 0.7–1 GHz. The platform is designed such that, during the tests, the subject lies on the bed with the back of the torso facing the antenna arrays. The antennas are controlled using a switching network that is connected to a compact network analyzer. A novel frequency-based imaging algorithm is used to process the recorded signals and generate an image of the torso showing any accumulated fluid in the lungs. The system is verified on an artificial torso phantom and on animal organs. As a feasibility study, preclinical tests are conducted on healthy subjects to determine the type of images obtained and the statistics and intensity threshold levels needed to differentiate between healthy and unhealthy subjects. PMID:26365299

  8. Fast 3D NIR systems for facial measurement and lip-reading

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    Structured-light projection is a well-established optical method for the non-destructive, contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments in facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interaction. In such cases, 3D data may offer more detail than 2D images, which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different near-infrared projection methods in combination with a two-camera stereo vision setup. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show experimental measurement results from applications where we realized fast, accurate, and irritation-free measurements of human faces.

  9. The Role of Baseline Vagal Tone in Dealing with a Stressor during Face to Face and Computer-Based Social Interactions.

    PubMed

    Rigoni, Daniele; Morganti, Francesca; Braibanti, Paride

    2017-01-01

    Facing a stressor involves a cardiac vagal tone response and a feedback effect produced by social interaction on visceral regulation. This study evaluated the contribution of baseline vagal tone and of social engagement system (SES) functioning to the ability to deal with a stressor. Participants (n = 70) were grouped into a minimized social interaction condition (procedure administered through a PC) and a social interaction condition (procedure administered by an experimenter). The State Trait Anxiety Inventory, the Social Interaction Anxiety Scale, the Emotion Regulation Questionnaire and a debriefing questionnaire were completed by the subjects. Cardiac vagal tone was recorded during the baseline, stressor and recovery phases. The results highlighted a significant effect of baseline vagal tone on vagal suppression. No effect of the minimized vs. social interaction conditions on cardiac vagal tone during the stressor and recovery phases was detected. Cardiac vagal tone and the questionnaire results appeared not to be correlated. The study highlighted the main role of baseline vagal tone in visceral regulation. Some remarks on the SES to be explored in further research were raised.

  10. Fiber-coupled superconducting nanowire single-photon detectors integrated with a bandpass filter on the fiber end-face

    NASA Astrophysics Data System (ADS)

    Zhang, W. J.; Yang, X. Y.; Li, H.; You, L. X.; Lv, C. L.; Zhang, L.; Zhang, C. J.; Liu, X. Y.; Wang, Z.; Xie, X. M.

    2018-07-01

    Superconducting nanowire single-photon detectors (SNSPDs) with both high system detection efficiency (SDE) and low dark count rate (DCR) play significant roles in quantum information processing and various applications. The background dark counts of SNSPDs originate from room-temperature blackbody radiation coupled to the device via the fiber. Therefore, a bandpass filter (BPF) operated at low temperature with minimal insertion loss is necessary to suppress the background DCR. Herein, a low-loss BPF integrated on a single-mode fiber end-face was designed, fabricated and verified for low-temperature implementation. The fiber end-face BPF featured a typical passband width of about 40 nm in the 1550 nm telecom band and a peak transmittance of over 0.98. An SNSPD with high SDE, fabricated on a distributed Bragg reflector, was coupled to the BPF. The device with such a BPF showed an SDE of 80% at a DCR of 0.5 Hz, measured at 2.1 K. Compared with the same device without a BPF, the DCR was reduced by over 13 dB with an SDE decrease of <3%.

  11. Finding a face in the crowd: testing the anger superiority effect in Asperger Syndrome.

    PubMed

    Ashwin, Chris; Wheelwright, Sally; Baron-Cohen, Simon

    2006-06-01

    Social threat captures attention and is processed rapidly and efficiently, with many lines of research showing involvement of the amygdala. Visual search paradigms looking at social threat have shown angry faces 'pop-out' in a crowd, compared to happy faces. Autism and Asperger Syndrome (AS) are neurodevelopmental conditions characterised by social deficits, abnormal face processing, and amygdala dysfunction. We tested adults with high-functioning autism (HFA) and AS using a facial visual search paradigm with schematic neutral and emotional faces. We found, contrary to predictions, that people with HFA/AS performed similarly to controls in many conditions. However, the effect was reduced in the HFA/AS group when using widely varying crowd sizes and when faces were inverted, suggesting a difference in face-processing style may be evident even with simple schematic faces. We conclude there are intact threat detection mechanisms in AS, under simple and predictable conditions, but that like other face-perception tasks, the visual search of threat faces task reveals atypical face-processing in HFA/AS.

  12. Innovation in weight loss programs: a 3-dimensional virtual-world approach.

    PubMed

    Johnston, Jeanne D; Massey, Anne P; Devaneaux, Celeste A

    2012-09-20

    The rising trend in obesity calls for innovative weight loss programs. While behavioral-based face-to-face programs have proven to be the most effective, they are expensive and often inaccessible. Internet or Web-based weight loss programs have expanded reach but may lack qualities critical to weight loss and maintenance such as human interaction, social support, and engagement. In contrast to Web technologies, virtual reality technologies offer unique affordances as a behavioral intervention by directly supporting engagement and active learning. To explore the effectiveness of a virtual-world weight loss program relative to weight loss and behavior change. We collected data from overweight people (N = 54) participating in a face-to-face or a virtual-world weight loss program. Weight, body mass index (BMI), percentage weight change, and health behaviors (ie, weight loss self-efficacy, physical activity self-efficacy, self-reported physical activity, and fruit and vegetable consumption) were assessed before and after the 12-week program. Repeated measures analysis was used to detect differences between groups and across time. A total of 54 participants with a BMI of 32 (SD 6.05) kg/m(2)enrolled in the study, with a 13% dropout rate for each group (virtual world group: 5/38; face-to-face group: 3/24). Both groups lost a significant amount of weight (virtual world: 3.9 kg, P < .001; face-to-face: 2.8 kg, P = .002); however, no significant differences between groups were detected (P = .29). Compared with baseline, the virtual-world group lost an average of 4.2%, with 33% (11/33) of the participants losing a clinically significant (≥5%) amount of baseline weight. The face-to-face group lost an average of 3.0% of their baseline weight, with 29% (6/21) losing a clinically significant amount. We detected a significant group × time interaction for moderate (P = .006) and vigorous physical activity (P = .008), physical activity self-efficacy (P = .04), fruit and vegetable consumption (P = .007), and weight loss self-efficacy (P < .001). Post hoc paired t tests indicated significant improvements across all of the variables for the virtual-world group. Overall, these results offer positive early evidence that a virtual-world-based weight loss program can be as effective as a face-to-face one relative to biometric changes. In addition, our results suggest that a virtual world may be a more effective platform to influence meaningful behavioral changes and improve self-efficacy.

  13. Innovation in Weight Loss Programs: A 3-Dimensional Virtual-World Approach

    PubMed Central

    Massey, Anne P; DeVaneaux, Celeste A

    2012-01-01

    Background The rising trend in obesity calls for innovative weight loss programs. While behavioral-based face-to-face programs have proven to be the most effective, they are expensive and often inaccessible. Internet or Web-based weight loss programs have expanded reach but may lack qualities critical to weight loss and maintenance such as human interaction, social support, and engagement. In contrast to Web technologies, virtual reality technologies offer unique affordances as a behavioral intervention by directly supporting engagement and active learning. Objective To explore the effectiveness of a virtual-world weight loss program relative to weight loss and behavior change. Methods We collected data from overweight people (N = 54) participating in a face-to-face or a virtual-world weight loss program. Weight, body mass index (BMI), percentage weight change, and health behaviors (ie, weight loss self-efficacy, physical activity self-efficacy, self-reported physical activity, and fruit and vegetable consumption) were assessed before and after the 12-week program. Repeated measures analysis was used to detect differences between groups and across time. Results A total of 54 participants with a BMI of 32 (SD 6.05) kg/m2 enrolled in the study, with a 13% dropout rate for each group (virtual world group: 5/38; face-to-face group: 3/24). Both groups lost a significant amount of weight (virtual world: 3.9 kg, P < .001; face-to-face: 2.8 kg, P = .002); however, no significant differences between groups were detected (P = .29). Compared with baseline, the virtual-world group lost an average of 4.2%, with 33% (11/33) of the participants losing a clinically significant (≥5%) amount of baseline weight. The face-to-face group lost an average of 3.0% of their baseline weight, with 29% (6/21) losing a clinically significant amount. We detected a significant group × time interaction for moderate (P = .006) and vigorous physical activity (P = .008), physical activity self-efficacy (P = .04), fruit and vegetable consumption (P = .007), and weight loss self-efficacy (P < .001). Post hoc paired t tests indicated significant improvements across all of the variables for the virtual-world group. Conclusions Overall, these results offer positive early evidence that a virtual-world-based weight loss program can be as effective as a face-to-face one relative to biometric changes. In addition, our results suggest that a virtual world may be a more effective platform to influence meaningful behavioral changes and improve self-efficacy. PMID:22995535

  14. A ubiquitous and low-cost solution for movement monitoring and accident detection based on sensor fusion.

    PubMed

    Felisberto, Filipe; Fdez-Riverola, Florentino; Pereira, António

    2014-05-21

    The low average birth rate in developed countries and the increase in life expectancy have led society to face an ageing population for the first time. This situation, combined with the world economic crisis that started in 2008, creates the need for better and more efficient ways of providing quality of life for the elderly. In this context, the solution presented in this work tackles the problem of monitoring the elderly in a way that does not restrict the life of the person being monitored, avoiding the need for premature nursing home admissions. To this end, the system fuses sensory data provided by a network of wireless sensors placed on the periphery of the user. Our approach was also designed with low-cost deployment in mind, so that the target group may be as wide as possible. Regarding the detection of long-term problems, the tests conducted showed that the precision of the system in identifying and discerning body postures and body movements allows for valid monitoring and rehabilitation of the user. Moreover, concerning the detection of accidents, while the proposed solution achieved near 100% precision at detecting normal falls, the detection of more complex falls (i.e., hampered falls) will require further study.

  15. In situ high temperature microwave microscope for nondestructive detection of surface and sub-surface defects.

    PubMed

    Wang, Peiyu; Li, Zhencheng; Pei, Yongmao

    2018-04-16

    An in situ high-temperature microwave microscope was built for detecting surface and sub-surface structures and defects. The system is heated with a self-designed quartz lamp radiation module capable of reaching 800°C. A line scan of a metal grating showed a super-resolution of 0.5 mm (λ/600) at 1 GHz. In situ scanning detection of surface hole defects on an aluminium plate and a glass fiber reinforced plastic (GFRP) plate was conducted at different high temperatures. A post-processing algorithm was proposed to remove the background noise induced by high temperatures, and the 3.0 mm-spaced hole defects were clearly resolved. In addition, hexagonal honeycomb lattices were detected in situ and clearly resolved under a 1.0 mm-thick face panel at 20°C and 50°C. The core wall positions and bonding width were accurately detected and evaluated. In summary, this in situ microwave microscope is feasible and effective for sub-surface detection and super-resolution imaging at different high temperatures.

  16. A Cabled Acoustic Telemetry System for Detecting and Tracking Juvenile Salmon: Part 2. Three-Dimensional Tracking and Passage Outcomes

    PubMed Central

    Deng, Z. Daniel; Weiland, Mark A.; Fu, Tao; Seim, Tom A.; LaMarche, Brian L.; Choi, Eric Y.; Carlson, Thomas J.; Eppard, M. Brad

    2011-01-01

    In Part 1 of this paper, we presented the engineering design and instrumentation of the Juvenile Salmon Acoustic Telemetry System (JSATS) cabled system, a nonproprietary sensing technology developed by the U.S. Army Corps of Engineers, Portland District (Oregon, USA) to meet the needs for monitoring the survival of juvenile salmonids through the hydroelectric facilities within the Federal Columbia River Power System. Here in Part 2, we describe how the JSATS cabled system was employed as a reference sensor network for detecting and tracking juvenile salmon. Time-of-arrival data for valid detections on four hydrophones were used to solve for the three-dimensional (3D) position of fish surgically implanted with JSATS acoustic transmitters. Validation tests demonstrated high accuracy of 3D tracking up to 100 m upstream from the John Day Dam spillway. The along-dam component, used for assigning the route of fish passage, had the highest accuracy; the median errors ranged from 0.02 to 0.22 m, and root mean square errors ranged from 0.07 to 0.56 m at distances up to 100 m. For the 2008 case study at John Day Dam, the range for 3D tracking was more than 100 m upstream of the dam face where hydrophones were deployed, and detection and tracking probabilities of fish tagged with JSATS acoustic transmitters were higher than 98%. JSATS cabled systems have been successfully deployed on several major dams to acquire information for salmon protection and for development of more “fish-friendly” hydroelectric facilities. PMID:22163919

  17. A Cabled Acoustic Telemetry System for Detecting and Tracking Juvenile Salmon: Part 2. Three-Dimensional Tracking and Passage Outcomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Zhiqun; Weiland, Mark A.; Fu, Tao

    2011-05-26

    In Part 1 of this paper [1], we presented the engineering design and instrumentation of the Juvenile Salmon Acoustic Telemetry System (JSATS) cabled system, a nonproprietary technology developed by the U.S. Army Corps of Engineers, Portland District, to meet the needs for monitoring the survival of juvenile salmonids through the 31 dams in the Federal Columbia River Power System. Here in Part 2, we describe how the JSATS cabled system was employed as a reference sensor network for detecting and tracking juvenile salmon. Time-of-arrival data for valid detections on four hydrophones were used to solve for the three-dimensional (3D) position of fish surgically implanted with JSATS acoustic transmitters. Validation tests demonstrated high accuracy of 3D tracking up to 100 m from the John Day Dam spillway. The along-dam component, used for assigning the route of fish passage, had the highest accuracy; the median errors ranged from 0.06 to 0.22 m, and root mean square errors ranged from 0.05 to 0.56 m at distances up to 100 m. For the case study at John Day Dam during 2008, the range for 3D tracking was more than 100 m upstream of the dam face where hydrophones were deployed, and detection and tracking probabilities of fish tagged with JSATS acoustic transmitters were higher than 98%. JSATS cabled systems have been successfully deployed on several major dams to acquire information for salmon protection and for development of more “fish-friendly” hydroelectric facilities.
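
    The core localization step described in both records, solving for a 3D position from time-of-arrival measurements on four hydrophones, can be illustrated with a small nonlinear least-squares fit. The sketch below is an assumption-laden illustration: the hydrophone layout, sound speed and arrival times are placeholders, and the solver is generic rather than the JSATS processing chain.

        import numpy as np
        from scipy.optimize import least_squares

        c = 1480.0                                    # nominal sound speed in water (m/s)
        hydrophones = np.array([[0.0, 0.0, 0.0],      # hypothetical sensor layout (m)
                                [10.0, 0.0, 0.0],
                                [0.0, 10.0, 0.0],
                                [0.0, 0.0, 10.0]])

        def residuals(p, toa):
            # p = [x, y, z, t0]; predicted arrival time = emission time + range / c
            pos, t0 = p[:3], p[3]
            ranges = np.linalg.norm(hydrophones - pos, axis=1)
            return (t0 + ranges / c) - toa

        # Synthetic arrivals from a known source position, to exercise the solver.
        true_pos, true_t0 = np.array([4.0, 6.0, 2.0]), 0.01
        toa = true_t0 + np.linalg.norm(hydrophones - true_pos, axis=1) / c

        fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 0.0], args=(toa,))
        print(fit.x[:3])                              # estimated (x, y, z), close to true_pos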

  18. 47 CFR 90.677 - Reconfiguration of the 806-824/851-869 MHz band in order to separate cellular systems from non...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... relocation agreement. Sprint Nextel and relocating incumbents may agree to conduct face-to-face negotiations...-55. Sprint Nextel and relocating incumbents may agree to conduct face-to-face negotiations or either... in order to separate cellular systems from non-cellular systems. 90.677 Section 90.677...

  19. Evaluation of a processing scheme for calcified atheromatous carotid artery detection in face/neck CBCT images

    NASA Astrophysics Data System (ADS)

    Matheus, B. R. N.; Centurion, B. S.; Rubira-Bullen, I. R. F.; Schiabel, H.

    2017-03-01

    Cone Beam Computed Tomography (CBCT) exams of the face and neck can provide an opportunity to identify, as an incidental finding, calcifications of the carotid artery (CACA). Given the similarity of CACA to the calcifications found in several types of x-ray exams, this work suggests that a technique designed to detect breast calcifications in mammography images could be applied to detect such calcifications in CBCT. The method used a 3D version of the calcification detection technique [1], based on signal enhancement by convolution with a 3D Laplacian of Gaussian (LoG) function followed by removal of the high-contrast bone structure from the image. Initial promising results show 71% sensitivity with 0.48 false positives per exam.
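
    As a rough illustration of the enhancement step described above, the sketch below convolves a volume with a 3D Laplacian of Gaussian so that small, bright, blob-like structures such as calcifications stand out, then thresholds the response. The volume, the sigma value, the percentile threshold and the omission of the bone-removal step are all assumptions for illustration, not the authors' settings.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        volume = np.random.rand(64, 64, 64).astype(np.float32)   # stand-in CBCT volume

        # The negative LoG response is large at bright blobs of roughly sigma voxels.
        sigma = 1.5
        log_response = -gaussian_laplace(volume, sigma=sigma)

        # Keep only the strongest responses as calcification candidates.
        candidates = log_response > np.percentile(log_response, 99.5)
        print("candidate voxels:", int(candidates.sum()))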

  20. Interactive optical panel

    DOEpatents

    Veligdan, J.T.

    1995-10-03

    An interactive optical panel assembly includes an optical panel having a plurality of ribbon optical waveguides stacked together with opposite ends thereof defining panel first and second faces. A light source provides an image beam to the panel first face for being channeled through the waveguides and emitted from the panel second face in the form of a viewable light image. A remote device produces a response beam over a discrete selection area of the panel second face for being channeled through at least one of the waveguides toward the panel first face. A light sensor is disposed across a plurality of the waveguides for detecting the response beam therein for providing interactive capability. 10 figs.

  1. False match elimination for face recognition based on SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Xuyuan; Shi, Ping; Shao, Meide

    2011-06-01

    SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation, and robust to noise and illumination changes. In this paper, a novel SIFT-based method for face recognition is proposed, which combines an optimized SIFT, mutual matching and Progressive Sample Consensus (PROSAC), and can effectively eliminate the false matches that arise in face recognition. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
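
    A minimal sketch of this kind of pipeline, assuming OpenCV is available: SIFT keypoints are extracted from two face images, mutual (cross-check) matching keeps only pairs that are each other's nearest neighbours, and a robust geometric fit discards the remaining false matches. OpenCV's RANSAC is used here as a stand-in for PROSAC, and the image file names are hypothetical.

        import cv2
        import numpy as np

        # Hypothetical input images; any two grayscale face images would do.
        img1 = cv2.imread("face_a.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("face_b.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # crossCheck=True keeps only mutually nearest matches (mutual matching).
        matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
        matches = matcher.match(des1, des2)

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Robust geometric verification; the inlier mask flags matches kept as true.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
        print(len(inliers), "of", len(matches), "matches survive geometric verification")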

  2. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
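
    One way to read the "probabilistically summed" combination described above is as a failure-probability-weighted aggregation of per-loop metrics. The sketch below is purely illustrative and assumes a hypothetical failure set, made-up probabilities and a simple product-of-stages combination rule; it is not the metric definition from the paper.

        # Hypothetical failure modes: (failure probability, detection coverage,
        # response effectiveness) for the fault management loop covering each one.
        failures = {
            "sensor_dropout": (0.010, 0.95, 0.90),
            "valve_stuck":    (0.002, 0.80, 0.70),
            "bus_overload":   (0.005, 0.99, 0.95),
        }

        # Expected fraction of failure risk that the loops successfully mitigate.
        mitigated = sum(p_f * p_det * p_resp for p_f, p_det, p_resp in failures.values())
        total_risk = sum(p_f for p_f, _, _ in failures.values())
        print("fraction of failure risk mitigated:", round(mitigated / total_risk, 3))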

  3. The modular nature of trustworthiness detection.

    PubMed

    Bonnefon, Jean-François; Hopfensitz, Astrid; De Neys, Wim

    2013-02-01

    The capacity to trust wisely is a critical facilitator of success and prosperity, and it has been conjectured that people of higher intelligence are better able to detect signs of untrustworthiness from potential partners. In contrast, this article reports five trust game studies suggesting that reading trustworthiness of the faces of strangers is a modular process. Trustworthiness detection from faces is independent of general intelligence (Study 1) and effortless (Study 2). Pictures that include nonfacial features such as hair and clothing impair trustworthiness detection (Study 3) by increasing reliance on conscious judgments (Study 4), but people largely prefer to make decisions from this sort of picture (Study 5). In sum, trustworthiness detection in an economic interaction is a genuine and effortless ability, possessed in equal amount by people of all cognitive capacities, but whose impenetrability leads to inaccurate conscious judgments and inappropriate informational preferences. © 2013 APA, all rights reserved.

  4. Multistage audiovisual integration of speech: dissociating identification and detection.

    PubMed

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  5. Quadrant anode image sensor

    NASA Technical Reports Server (NTRS)

    Lampton, M.; Malina, R. F.

    1976-01-01

    A position-sensitive event-counting electronic readout system for microchannel plates (MCPs) is described that offers the advantages of high spatial resolution and fast time resolution. The technique relies upon a four-quadrant electron-collecting anode located behind the output face of the microchannel plate, so that the electron cloud from each detected event is partly intercepted by each of the four quadrants. The relative amounts of charge collected by each quadrant depend on event position, permitting each event to be localized with two ratio circuits. A prototype quadrant anode system for ion, electron, and extreme ultraviolet imaging is described. The spatial resolution achieved, about 10 microns, allows individual MCP channels to be distinguished.
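
    The position readout described above follows from simple charge ratios: each event's electron cloud is split across the four quadrants, and difference-over-sum ratios of the collected charges give the event coordinates. The sketch below assumes one particular quadrant labeling convention, which is not specified in the record.

        def quadrant_position(q_a, q_b, q_c, q_d):
            """Event position from quadrant charges.

            Assumed layout: A = upper-left, B = upper-right, C = lower-right,
            D = lower-left. Returns normalized (x, y) in the range [-1, 1].
            """
            total = q_a + q_b + q_c + q_d
            x = ((q_b + q_c) - (q_a + q_d)) / total   # right minus left
            y = ((q_a + q_b) - (q_d + q_c)) / total   # top minus bottom
            return x, y

        # Event slightly above the anode centre: more charge on the upper quadrants.
        print(quadrant_position(0.30, 0.30, 0.20, 0.20))   # -> (0.0, 0.2)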

  6. FaceIt: face recognition from static and live video for law enforcement

    NASA Astrophysics Data System (ADS)

    Atick, Joseph J.; Griffin, Paul M.; Redlich, A. N.

    1997-01-01

    Recent advances in image and pattern recognition technology, especially face recognition, are leading to the development of a new generation of information systems of great value to the law enforcement community. With these systems it is now possible to pool and manage vast amounts of biometric intelligence, such as face and fingerprint records, and conduct computerized searches on them. We review one of the enabling technologies underlying these systems, the FaceIt face recognition engine, and discuss three applications that illustrate its benefits as a problem-solving technology and an efficient and cost-effective investigative tool.

  7. An intelligent crowdsourcing system for forensic analysis of surveillance video

    NASA Astrophysics Data System (ADS)

    Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.

    2015-03-01

    Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video, including video recorded as part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish crowd members based on their ability, experience and performance record. Our proposed system operates autonomously and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.

  8. Methods for Using Durable Adhesively Bonded Joints for Sandwich Structures

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S., III (Inventor); Lundgren, Eric C. (Inventor)

    2016-01-01

    Systems, methods, and apparatus for increasing the durability of adhesively bonded joints in a sandwich structure. Such systems, methods, and apparatus include a first face sheet and a second face sheet as well as an insert structure, the insert structure having a first insert face sheet, a second insert face sheet, and an insert core material. In addition, sandwich core material is arranged between the first face sheet and the second face sheet. A primary bondline may be coupled to the face sheet(s) and the splice. Further, systems, methods, and apparatus of the present disclosure advantageously reduce the load, provide a redundant path, reduce structural fatigue, and/or increase fatigue life.

  9. Dynamic Encoding of Face Information in the Human Fusiform Gyrus

    PubMed Central

    Ghuman, Avniel Singh; Brunet, Nicolas M.; Li, Yuanning; Konecky, Roma O.; Pyles, John A.; Walls, Shawn A.; Destefino, Vincent; Wang, Wei; Richardson, R. Mark

    2014-01-01

    Humans’ ability to rapidly and accurately detect, identify, and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, the temporal dynamics of face information processing in FFA remain unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly upon FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing, along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses. PMID:25482825

  10. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios especially some unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a pair of gallery image and probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match these two extracted local feature sets, where both the textural information and geometrical information of local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is converted as the distance between these two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.

  11. Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills.

    PubMed

    Bennetts, Rachel J; Mole, Joseph; Bate, Sarah

    2017-09-01

    Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.

  12. Dynamic encoding of face information in the human fusiform gyrus.

    PubMed

    Ghuman, Avniel Singh; Brunet, Nicolas M; Li, Yuanning; Konecky, Roma O; Pyles, John A; Walls, Shawn A; Destefino, Vincent; Wang, Wei; Richardson, R Mark

    2014-12-08

    Humans' ability to rapidly and accurately detect, identify and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, temporal dynamics of face information processing in FFA remains unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly on FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing along with the individual differences in facial features and their configurations. Long-lasting (500+ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.

  13. Does the perception of moving eyes trigger reflexive visual orienting in autism?

    PubMed Central

    Swettenham, John; Condie, Samantha; Campbell, Ruth; Milne, Elizabeth; Coleman, Mike

    2003-01-01

    Does movement of the eyes in one or another direction function as an automatic attentional cue to a location of interest? Two experiments explored the directional movement of the eyes in a full face for speed of detection of an aftercoming location target in young people with autism and in control participants. Our aim was to investigate whether a low-level perceptual impairment underlies the delay in gaze following characteristic of autism. The participants' task was to detect a target appearing on the left or right of the screen either 100 ms or 800 ms after a face cue appeared with eyes averting to the left or right. Despite instructions to ignore eye-movement in the face cue, people with autism and control adolescents were quicker to detect targets that had been preceded by an eye movement cue congruent with target location compared with targets preceded by an incongruent eye movement cue. The attention shifts are thought to be reflexive because the cue was to be ignored, and because the effect was found even when cue-target duration was short (100 ms). Because (experiment two) the effect persisted even when the face was inverted, it would seem that the direction of movement of eyes can provide a powerful (involuntary) cue to a location. PMID:12639330

  14. Application of robust face recognition in video surveillance systems

    NASA Astrophysics Data System (ADS)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search indexing feature. As applications of video cameras have increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within vast amounts of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance cameras record subjects without fixed poses, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusions. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping solve real cases.
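
    A minimal sketch of the subspace-reconstruction idea, assuming standard PCA as a stand-in for the paper's fuzzy PCA and random vectors as placeholders for face images: a linear face subspace is learned from unoccluded training faces, and an occluded probe is projected into that subspace and mapped back, so the occluded pixels are filled in from the learned basis. A more careful variant would fit the subspace coefficients only to the unoccluded pixels.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        train_faces = rng.random((200, 32 * 32))   # placeholder vectorized training faces
        pca = PCA(n_components=50).fit(train_faces)

        probe = rng.random(32 * 32)                # placeholder probe face
        occluded = probe.copy()
        occluded[:300] = 0.0                       # simulate an occluded region

        # Project onto the learned face subspace and map back: the occluded pixels
        # are replaced by the values implied by the basis.
        coeffs = pca.transform(occluded.reshape(1, -1))
        reconstruction = pca.inverse_transform(coeffs).ravel()
        print(reconstruction.shape)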

  15. An audiovisual emotion recognition system

    NASA Astrophysics Data System (ADS)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many biological signals. Speech and facial expression are two of them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time use and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough set-based feature selection is a good method for dimension reduction, so 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected after synchronization when speech and video are fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also suggest that multimodule fused recognition will become the trend in emotion recognition in the future.

  16. Impact Assessment of GNSS Spoofing Attacks on INS/GNSS Integrated Navigation System.

    PubMed

    Liu, Yang; Li, Sihai; Fu, Qiangwen; Liu, Zhenbo

    2018-05-04

    In the face of emerging Global Navigation Satellite System (GNSS) spoofing attacks, there is a need to give a comprehensive analysis on how the inertial navigation system (INS)/GNSS integrated navigation system responds to different kinds of spoofing attacks. A better understanding of the integrated navigation system’s behavior with spoofed GNSS measurements gives us valuable clues to develop effective spoofing defenses. This paper focuses on an impact assessment of GNSS spoofing attacks on the integrated navigation system Kalman filter’s error covariance, innovation sequence and inertial sensor bias estimation. A simple and straightforward measurement-level trajectory spoofing simulation framework is presented, serving as the basis for an impact assessment of both unsynchronized and synchronized spoofing attacks. Recommendations are given for spoofing detection and mitigation based on our findings in the impact assessment process.
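
    One widely used consistency check that operates on the innovation sequence mentioned above is a chi-square test on the normalized innovation squared; it is sketched below as an assumption-level illustration of how spoofed measurements can be flagged, not as the assessment procedure used in the paper. Matrix shapes and the alarm threshold are placeholders.

        import numpy as np
        from scipy.stats import chi2

        def innovation_alarm(z, z_pred, H, P, R, alpha=0.001):
            """Flag a measurement that is statistically inconsistent with the filter.

            z: measurement, z_pred: predicted measurement, H: measurement matrix,
            P: state error covariance, R: measurement noise covariance.
            """
            nu = z - z_pred                              # innovation
            S = H @ P @ H.T + R                          # innovation covariance
            d2 = float(nu.T @ np.linalg.inv(S) @ nu)     # normalized innovation squared
            return d2 > chi2.ppf(1.0 - alpha, df=z.size)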

  17. Deception Detection: The Relationship of Levels of Trust and Perspective Taking in Real-Time Online and Offline Communication Environments.

    PubMed

    Friend, Catherine; Fox Hamilton, Nicola

    2016-09-01

    Where humans have been found to detect lies or deception only at the rate of chance in offline face-to-face communication (F2F), computer-mediated communication (CMC) online can elicit higher rates of trust and sharing of personal information than F2F. How do levels of trust and empathetic personality traits like perspective taking (PT) relate to deception detection in real-time CMC compared to F2F? A between groups correlational design (N = 40) demonstrated that, through a paired deceptive conversation task with confederates, levels of participant trust could predict accurate detection online but not offline. Second, participant PT abilities could not predict accurate detection in either conversation medium. Finally, this study found that conversation medium also had no effect on deception detection. This study finds support for the effects of the Truth Bias and online disinhibition in deception, and further implications in law enforcement are discussed.

  18. Measuring the face-sensitive N170 with a gaming EEG system: A validation study.

    PubMed

    de Lissa, Peter; Sörensen, Sidsel; Badcock, Nicholas; Thie, Johnson; McArthur, Genevieve

    2015-09-30

    The N170 is a "face-sensitive" event-related potential (ERP) that occurs at around 170ms over occipito-temporal brain regions. The N170's potential to provide insight into the neural processing of faces in certain populations (e.g., children and adults with cognitive impairments) is limited by its measurement in scientific laboratories that can appear threatening to some people. The advent of cheap, easy-to-use portable gaming EEG systems provides an opportunity to record EEG in new contexts and populations. This study tested the validity of the face-sensitive N170 ERP measured with an adapted commercial EEG system (the Emotiv EPOC) that is used at home by gamers. The N170 recorded through both the gaming EEG system and the research EEG system exhibited face-sensitivity, with larger mean amplitudes in response to the face stimuli than the non-face stimuli, and a delayed N170 peak in response to face inversion. The EPOC system produced very similar N170 ERPs to a research-grade Neuroscan system, and was capable of recording face-sensitivity in the N170, validating its use as research tool in this arena. This opens new possibilities for measuring the face-sensitive N170 ERP in people who cannot travel to a traditional ERP laboratory (e.g., elderly people in care), who cannot tolerate laboratory conditions (e.g., people with autism), or who need to be tested in situ for practical or experimental reasons (e.g., children in schools). Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs

    NASA Technical Reports Server (NTRS)

    Bloomberg, Jacob; Reschke, Millard; Mulavara, Ajitkumar; Wood, Scott; Serrador, Jorge; Fiedler, Matthew; Kofman, Igor; Peters, Brian T.; Cohen, Helen

    2012-01-01

    Crewmembers returning from long-duration space flight face significant challenges due to microgravity-induced inappropriate adaptations in balance/sensorimotor function. The Neuroscience Laboratory at JSC is developing a method based on stochastic resonance to enhance the brain's ability to detect signals from the balance organs of the inner ear and use them for rapid improvement in balance skill, especially when combined with balance training exercises. This method involves a wearable/portable stimulus delivery system providing imperceptible electrical stimulation to the balance organs. Stochastic resonance (SR) is a phenomenon whereby the response of a nonlinear system to a weak periodic input signal is optimized by the presence of a particular non-zero level of noise; it is based on the concept of maximizing the flow of information through a system with a non-zero level of noise. Application of imperceptible SR noise coupled with sensory input in humans has been shown to improve motor, cardiovascular, visual, hearing, and balance functions. SR increases contrast sensitivity and luminance detection; lowers the absolute threshold for tone detection in normal-hearing individuals; improves homeostatic function in the human blood pressure regulatory system; improves noise-enhanced muscle spindle function; and improves detection of weak tactile stimuli using mechanical or electrical stimulation. SR noise has been shown to improve postural control when applied as mechanical noise to the soles of the feet, or when applied as electrical noise at the knee and to the back muscles.
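
    The effect described above can be reproduced in a toy simulation: a sub-threshold periodic signal plus noise is passed through a simple threshold detector, and the detector output follows the weak signal best at an intermediate, non-zero noise level. The signal amplitude, threshold and noise levels below are arbitrary choices for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 5000)
        signal = 0.8 * np.sin(2.0 * np.pi * 1.0 * t)   # weak 1 Hz signal, below threshold
        threshold = 1.0

        for noise_sd in (0.0, 0.4, 3.0):
            corrs = []
            for _ in range(200):
                noisy = signal + rng.normal(0.0, noise_sd, t.size)
                fired = (noisy > threshold).astype(float)   # threshold detector output
                corr = np.corrcoef(fired, signal)[0, 1] if fired.any() else 0.0
                corrs.append(corr)
            # Correlation with the weak signal peaks at the intermediate noise level.
            print(f"noise sd {noise_sd:.1f}: mean output/signal correlation {np.mean(corrs):.2f}")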

  20. Plastic reorganization of neural systems for perception of others in the congenitally blind.

    PubMed

    Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I

    2017-09-01

    Recent evidence suggests that the function of the core system for face perception might extend beyond visual face-perception to a broader role in person perception. To critically test the broader role of core face-system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  1. Conceptual design and development of GEM based detecting system for tomographic tungsten focused transport monitoring

    NASA Astrophysics Data System (ADS)

    Chernyshova, M.; Czarski, T.; Malinowski, K.; Kowalska-Strzęciwilk, E.; Poźniak, K.; Kasprowicz, G.; Zabołotny, W.; Wojeński, A.; Kolasiński, P.; Mazon, D.; Malard, P.

    2015-10-01

    Implementing tungsten as a plasma-facing material in ITER and future fusion reactors will require effective monitoring not just of its level in the plasma but also of its distribution. That can be achieved successfully using detectors based on Gas Electron Multiplier (GEM) technology. This work presents the conceptual design of the detecting unit for poloidal tomography to be tested at the WEST project tokamak. The current stage of development is discussed, covering aspects that include the detector's spatial dimensions, gas mixtures, window materials and arrangements inside and outside the tokamak ports, details of the detector structure itself, and details of the detecting module electronics. It is expected that the detecting unit under development, when implemented, will contribute to the safe operation of the tokamak, bringing the creation of sustainable nuclear fusion reactors a step closer. A shorter version of this contribution is due to be published in PoS at: 1st EPS conference on Plasma Diagnostics

  2. Observing real-time social interaction via telecommunication methods in budgerigars (Melopsittacus undulatus).

    PubMed

    Ikkatai, Yuko; Okanoya, Kazuo; Seki, Yoshimasa

    2016-07-01

    Humans communicate with one another not only face-to-face but also via modern telecommunication methods such as television and video conferencing. We readily detect the difference between people actively communicating with us and people merely acting via a broadcasting system. We developed an animal model of this novel communication method seen in humans to determine whether animals also make this distinction. We built a system for two animals to interact via audio-visual equipment in real-time, to compare behavioral differences between two conditions, an "interactive two-way condition" and a "non-interactive (one-way) condition." We measured birds' responses to stimuli which appeared in these two conditions. We used budgerigars, which are small, gregarious birds, and found that the frequency of vocal interaction with other individuals did not differ between the two conditions. However, body synchrony between the two birds was observed more often in the interactive condition, suggesting budgerigars recognized the difference between these interactive and non-interactive conditions on some level. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Early detection of ecosystem regime shifts: a multiple method evaluation for management application.

    PubMed

    Lindegren, Martin; Dakos, Vasilis; Gröger, Joachim P; Gårdmark, Anna; Kornilovs, Georgs; Otto, Saskia A; Möllmann, Christian

    2012-01-01

    Critical transitions between alternative stable states have been shown to occur across an array of complex systems. While our ability to identify abrupt regime shifts in natural ecosystems has improved, detection of potential early-warning signals previous to such shifts is still very limited. Using real monitoring data of a key ecosystem component, we here apply multiple early-warning indicators in order to assess their ability to forewarn a major ecosystem regime shift in the Central Baltic Sea. We show that some indicators and methods can result in clear early-warning signals, while other methods may have limited utility in ecosystem-based management as they show no or weak potential for early-warning. We therefore propose a multiple method approach for early detection of ecosystem regime shifts in monitoring data that may be useful in informing timely management actions in the face of ecosystem change.

  4. Early Detection of Ecosystem Regime Shifts: A Multiple Method Evaluation for Management Application

    PubMed Central

    Lindegren, Martin; Dakos, Vasilis; Gröger, Joachim P.; Gårdmark, Anna; Kornilovs, Georgs; Otto, Saskia A.; Möllmann, Christian

    2012-01-01

    Critical transitions between alternative stable states have been shown to occur across an array of complex systems. While our ability to identify abrupt regime shifts in natural ecosystems has improved, detection of potential early-warning signals previous to such shifts is still very limited. Using real monitoring data of a key ecosystem component, we here apply multiple early-warning indicators in order to assess their ability to forewarn a major ecosystem regime shift in the Central Baltic Sea. We show that some indicators and methods can result in clear early-warning signals, while other methods may have limited utility in ecosystem-based management as they show no or weak potential for early-warning. We therefore propose a multiple method approach for early detection of ecosystem regime shifts in monitoring data that may be useful in informing timely management actions in the face of ecosystem change. PMID:22808007
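
    Two of the standard early-warning indicators this kind of analysis relies on are rolling-window variance and lag-1 autocorrelation, both of which tend to rise as a system approaches a regime shift. The sketch below computes them on a synthetic placeholder series; the window length is arbitrary and detrending, which the indicator literature usually applies first, is omitted for brevity.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        x = pd.Series(np.cumsum(rng.normal(0.0, 1.0, 300)))   # placeholder monitoring series

        window = 50
        variance = x.rolling(window).var()
        lag1_ac = x.rolling(window).apply(lambda w: w.autocorr(lag=1), raw=False)

        # Rising values of either indicator toward the end of the series would be
        # read as a potential early-warning signal.
        print(variance.tail(3))
        print(lag1_ac.tail(3))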

  5. Using Distance Sensors to Perform Collision Avoidance Maneuvres on Uav Applications

    NASA Astrophysics Data System (ADS)

    Raimundo, A.; Peres, D.; Santos, N.; Sebastião, P.; Souto, N.

    2017-08-01

    Unmanned Aerial Vehicles (UAVs) and their applications are growing for both civilian and military purposes. The operability of a UAV has proved that some tasks and operations can be done easily and at a good cost-efficiency ratio. Nowadays, a UAV can perform autonomous missions, which is very useful for certain UAV applications, such as meteorology, surveillance systems, agriculture, environment mapping and search and rescue operations. One of the biggest problems that a UAV faces is the possibility of collision with other objects in the flight area. To prevent this, a collision avoidance algorithm was developed and implemented. The "Sense and Avoid" algorithm was developed as a system for UAVs to avoid objects on a collision course. This algorithm uses a Light Detection and Ranging (LiDAR) sensor to detect objects in front of the UAV in mid-flight. This sensor is connected to the on-board Pixhawk flight controller, which interfaces with additional on-board hardware, a Raspberry Pi. Communications between the Ground Control Station and the UAV are made via Wi-Fi or third/fourth-generation cellular networks (3G/4G). Tests were performed to evaluate the overall performance of the "Sense and Avoid" algorithm. These tests were done in two different environments: a simulated 3D environment and a real outdoor environment. Both modes worked successfully in the simulated 3D environment, and the "Brake" mode also worked in the real outdoor environment, proving the concept.
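
    A minimal sketch of the "Brake" behaviour described above, with every interface name a hypothetical placeholder: one iteration of the control loop reads the forward-facing LiDAR range and commands a hold if an obstacle is inside a safety distance. The actual implementation on the Pixhawk/Raspberry Pi stack would go through the autopilot's command interface rather than these stand-in callbacks.

        SAFETY_DISTANCE_M = 5.0   # illustrative safety margin

        def sense_and_avoid_step(read_lidar_distance_m, command_brake, command_forward):
            """One control-loop iteration: brake if an obstacle is too close.

            All three arguments are caller-supplied callbacks (hypothetical interface).
            """
            distance = read_lidar_distance_m()
            if distance is not None and distance < SAFETY_DISTANCE_M:
                command_brake()       # e.g. switch the autopilot to a hold/brake mode
                return "BRAKE"
            command_forward()         # otherwise continue the planned motion
            return "CONTINUE"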

  6. Countering MANPADS: study of new concepts and applications: part two

    NASA Astrophysics Data System (ADS)

    Maltese, Dominique; Vergnolle, Jean-François; Aragones, Julien; Renaudat, Mathieu

    2007-04-01

    Recent ground-to-air attacks with Man-Portable Air Defense systems (MANPADS) against aircraft have revealed a new threat to both military and civilian aviation. Consequently, the implementation of protection systems such as Directed Infrared Countermeasures (DIRCM) against IR-guided missiles has become inevitable. In the near future, aircraft will need detection, tracking, identification, targeting and jamming capabilities to face MANPADS threats, and attacks with multiple missiles are becoming increasingly common scenarios to deal with. In this paper, a practical example of the DIRCM systems under study at SAGEM DEFENSE & SECURITY is presented; the article continues a previous SPIE paper. Self-protection solutions include built-in and automatic lock-on, tracking, identification and laser jamming capabilities, including defeat assessment, with target designations provided by a Missile Warning System. Target scenarios including multiple threats are considered to design the system architectures. The article first reviews the context, current and future threats (IR seekers of different generations...), and the scenarios used for system definition. It then focuses on potential self-protection systems under study at SAGEM DEFENSE & SECURITY. Different strategies, including target identification, multi-band laser and active imagery, have been studied previously in order to design DIRCM system solutions. Results of self-protection scenarios are provided for different MANPAD scenarios to highlight the key problems to solve. The data were obtained from simulation software modelling full DIRCM system architectures on technical and operational scenarios (parametric studies).

  7. Architectural design for a low cost FPGA-based traffic signal detection system in vehicles

    NASA Astrophysics Data System (ADS)

    López, Ignacio; Salvador, Rubén; Alarcón, Jaime; Moreno, Félix

    2007-05-01

    In this paper we propose an architecture for an embedded traffic signal detection system. Development of Advanced Driver Assistance Systems (ADAS) is currently one of the major research trends in the automotive field. Examples of past and ongoing projects in the field are CHAMELEON ("Pre-Crash Application all around the vehicle", IST 1999-10108), PREVENT (Preventive and Active Safety Applications, FP6-507075, http://www.prevent-ip.org/) and AVRT in the US (Advanced Vision-Radar Threat Detection: A Pre-Crash Detection and Active Safety System). There is a major interest in systems for real-time analysis of complex driving scenarios that evaluate risk and anticipate collisions. The system will use a low-cost CCD camera on the dashboard facing the road. The images will be processed by an FPGA of the Altera Cyclone family. The board performs median and Sobel filtering of the incoming frames at PAL rate and analyzes them for several categories of signals; the result is conveyed to the driver. The scarce resources provided by the hardware require an architecture designed for optimal use. The system will use a combination of neural networks and an adapted blackboard architecture. Several neural networks will be used in sequence for image analysis by reconfiguring a single, generic hardware neural network in the FPGA. This generic network is optimized for speed, in order to allow several executions within the frame period. The sequence follows the execution cycle of the blackboard architecture. The global blackboard architecture being developed and the hardware architecture of the generic, reconfigurable FPGA perceptron are explained in this paper. The project is still at an early stage; however, some hardware implementation results are already available and are offered in the paper.
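
    For readers unfamiliar with the pre-processing stage mentioned above, the snippet below shows a plain software version of median filtering followed by Sobel gradient extraction on a greyscale frame. It only illustrates the image operations; the paper implements them in FPGA hardware at PAL rate, and the frame size used here is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Median-filter a greyscale frame, then return the Sobel gradient magnitude."""
    smoothed = median_filter(frame.astype(float), size=3)   # remove salt-and-pepper noise
    gx = sobel(smoothed, axis=1)                            # horizontal gradient
    gy = sobel(smoothed, axis=0)                            # vertical gradient
    return np.hypot(gx, gy)                                 # edge strength per pixel

# Illustrative usage on a random stand-in for a PAL-sized CCD frame.
edges = preprocess_frame(np.random.rand(576, 720))
```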

  8. Lidar and Dial application for detection and identification: a proposal to improve safety and security

    NASA Astrophysics Data System (ADS)

    Gaudio, P.; Malizia, A.; Gelfusa, M.; Murari, A.; Parracino, S.; Poggi, L. A.; Lungaroni, M.; Ciparisse, J. F.; Di Giovanni, D.; Cenciarelli, O.; Carestia, M.; Peluso, E.; Gabbarini, V.; Talebzadeh, S.; Bellecci, C.

    2017-01-01

    Nowadays, the intentional release of chemical contaminants into the air (in both open and confined environments) is a dramatic source of risk for public health worldwide. The need for high-tech networks composed of software, diagnostics, decision support systems and cyber security tools is urging all stakeholders (military, public, and research and academic entities) to create innovative solutions to face this problem and improve both safety and security. The Quantum Electronics and Plasma Physics (QEP) Research Group of the University of Rome Tor Vergata has been working since the 1960s on the development of laser-based technologies for stand-off detection of contaminants in the air. Up to now, four demonstrators have been developed (two LIDAR-based and two DIAL-based) and were used in experimental campaigns throughout 2015. These systems and technologies can be combined into an innovative solution to the problem of public safety and security: a network of detection systems. A low-cost LIDAR-based system has been tested in an urban area to detect pollutants coming from urban traffic; in this paper the authors show the results obtained in the city of Crotone (southern Italy). This system can be used as a first alarm and can be coupled with an identification system to investigate the nature of the threat. A laboratory DIAL-based system has been used to create a database of absorption spectra of chemical substances that could be released into the atmosphere; these spectra can be considered the fingerprints of the substances to be identified. In order to build the database, absorption measurements in a cell at different conditions are in progress, and the first results are presented in this paper.
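
    As background to the DIAL measurements mentioned above, the sketch below applies the standard two-wavelength DIAL equation to estimate a range-resolved absorber concentration from 'on' and 'off' wavelength returns. It is a textbook formulation under idealised assumptions (equal backscatter and extinction at both wavelengths), not the processing chain of the demonstrators described in the abstract.

```python
import numpy as np

def dial_number_density(p_on, p_off, delta_sigma_m2, delta_r_m):
    """Range-resolved number density from the standard DIAL equation.

    p_on, p_off    : backscatter power vs. range at the absorbed / reference wavelength
    delta_sigma_m2 : differential absorption cross-section (m^2 per molecule)
    delta_r_m      : range-bin width (m)
    Returns molecules per m^3 for each range bin.
    """
    p_on = np.asarray(p_on, dtype=float)
    p_off = np.asarray(p_off, dtype=float)
    ratio = (p_off[1:] * p_on[:-1]) / (p_on[1:] * p_off[:-1])
    return np.log(ratio) / (2.0 * delta_sigma_m2 * delta_r_m)
```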

  9. Reduced Processing of Facial and Postural Cues in Social Anxiety: Insights from Electrophysiology

    PubMed Central

    Rossignol, Mandy; Fisch, Sophie-Alexandra; Maurage, Pierre; Joassin, Frédéric; Philippot, Pierre

    2013-01-01

    Social anxiety is characterized by fear of evaluative interpersonal situations. Many studies have investigated the perception of emotional faces in socially anxious individuals and have reported biases in the processing of threatening faces. However, faces are not the only stimuli carrying an interpersonal evaluative load. The present study investigated the processing of emotional body postures in social anxiety. Participants with high and low social anxiety completed an attention-shifting paradigm using neutral, angry and happy faces and postures as cues. We investigated early visual processes through the P100 component, attentional fixation through the P200, structural encoding mirrored by the N170, and attentional orientation towards the targets through the P100 time-locked to target occurrence. Results showed a global reduction of P100 and P200 responses to faces and postures in socially anxious participants compared to non-anxious participants, with a direct correlation between self-reported social anxiety levels and P100 and P200 amplitudes. Structural encoding of cues and target processing were not modulated by social anxiety, but socially anxious participants were slower to detect the targets. These results suggest a reduced processing of social postural and facial cues in social anxiety. PMID:24040403
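
    For orientation, the P100/P200 amplitudes reported above are typically quantified as the mean voltage of an epoch within a fixed latency window. The sketch below shows that generic computation; the window boundaries and array layout are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np

def mean_amplitude(epoch_uv: np.ndarray, times_ms: np.ndarray,
                   window_ms=(80.0, 130.0)) -> float:
    """Mean ERP amplitude (microvolts) within a latency window, e.g. around the P100."""
    mask = (times_ms >= window_ms[0]) & (times_ms <= window_ms[1])
    return float(epoch_uv[mask].mean())
```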

  10. HappyFace as a generic monitoring tool for HEP experiments

    NASA Astrophysics Data System (ADS)

    Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard

    2015-12-01

    The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on, and clarify the status of, each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible: it aggregates, processes and stores the information and status of different HEP monitoring resources in the common HappyFace database and displays them through a single interface. However, this model of HappyFace relied on monitoring resources which are continuously under development in the HEP experiments. Consequently, HappyFace needed direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace. It allows the basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources in the HEP grid systems. This approach gives HappyFace several advantages: portability, to provide an independent and generic monitoring system among the HEP grid systems; functionality, to allow users to perform various diagnostic tools in the individual HEP grid systems and grid sites; and flexibility, to make HappyFace beneficial and open to various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented. The new HappyFace system has been successfully integrated and now displays the information and status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.
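
    The aggregate-process-store-display workflow described above can be pictured with a very small, generic polling module like the one below. It is a self-contained sketch of the pattern only; the endpoint, table and function names are invented for illustration and are not part of the real HappyFace framework.

```python
import sqlite3
import time
import urllib.request

def fetch_status(url: str) -> str:
    """Reduce one monitoring endpoint to a coarse status string."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return "ok" if resp.status == 200 else "warning"
    except OSError:
        return "critical"

def store_status(db_path: str, source: str, state: str) -> None:
    """Append a timestamped status record for later display."""
    with sqlite3.connect(db_path) as con:
        con.execute("CREATE TABLE IF NOT EXISTS status (ts REAL, source TEXT, state TEXT)")
        con.execute("INSERT INTO status VALUES (?, ?, ?)", (time.time(), source, state))

# Illustrative usage with a hypothetical endpoint.
store_status("monitoring.db", "example_site", fetch_status("https://example.org/health"))
```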

  11. Electric Field Sensor for Lightning Early Warning System

    NASA Astrophysics Data System (ADS)

    Premlet, B.; Mohammed, R.; Sabu, S.; Joby, N. E.

    2017-12-01

    Electric field mills are widely used for atmospheric electric field measurements, and variation in the atmospheric electric field is the primary signature used by lightning early warning systems: there is a characteristic change in the atmospheric electric field before lightning, during thundercloud formation. A voltage-controlled variable capacitance is proposed here as a method for non-contact measurement of electric fields. A varactor-based mini electric field measurement system was developed to detect changes in the atmospheric electric field and to issue early lightning warnings. Since this is a low-cost device, it can be used in developing countries facing such adversities. A network of these devices can help in forming a spatial map of electric field variations over a region, which can be used for improved atmospheric electricity studies in developing countries.
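
    A warning rule of the kind implied above can be as simple as checking both the field magnitude and its rate of change against thresholds. The sketch below shows such a rule; the numeric thresholds are placeholders, not calibrated values for the varactor sensor described here.

```python
def lightning_warning(e_field_v_per_m: float, prev_e_field_v_per_m: float,
                      dt_s: float, level_threshold: float = 2000.0,
                      rate_threshold: float = 500.0) -> bool:
    """True when the field magnitude or its rate of change looks pre-lightning."""
    rate = abs(e_field_v_per_m - prev_e_field_v_per_m) / dt_s
    return abs(e_field_v_per_m) > level_threshold or rate > rate_threshold
```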

  12. Sudden infant death syndrome: a cybernetic etiology.

    PubMed

    ben-Aaron, M

    2003-01-01

    The brain's processes, by hypothesis, involve information processing by an extraordinarily complex, highly sophisticated, self-organizing cybernetic system embedded in the central nervous system. This cybernetic system generates itself in successive stages. Breathing is, by default, an autonomous function, but breath control is learned. If there is not a smooth transfer of function at the time when a successor system (one that enables autonomous breathing to be overridden by voluntary control) takes over, breathing may cease, without any overt cause being detectable, even with a thorough postmortem examination. If conditions are such that, at that point, the infant's body lacks the strength to resume breathing again under autonomic control, Sudden Infant Death Syndrome may result. The theory explains why infants are at greater risk if they sleep face down.

  13. Family impact of assistive technology scale: development of a measurement scale for parents of children with complex communication needs.

    PubMed

    Delarosa, Elizabeth; Horner, Stephanie; Eisenberg, Casey; Ball, Laura; Renzoni, Anne Marie; Ryan, Stephen E

    2012-09-01

    Young people use augmentative and alternative communication (AAC) systems to meet their everyday communication needs. However, the successful integration of an AAC system into a child's life requires strong commitment and continuous support from parents and other family members. This article describes the development and evaluation of the Family Impact of Assistive Technology Scale for AAC Systems - a parent-report questionnaire intended to detect the impact of AAC systems on the lives of children with complex communication needs and their families. The study involved 179 parents and clinical experts to test the content and face validities of the questionnaire, demonstrate its internal reliability and stability over time, and estimate its convergent construct validity when compared to a standardized measure of family impact.
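
    The internal reliability mentioned above is commonly summarised with Cronbach's alpha over a respondents-by-items score matrix. The sketch below shows that standard computation; it is generic psychometrics code, not the analysis reported for the scale itself.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of questionnaire scores."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                                # number of items
    item_variances = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = scores.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return k / (k - 1) * (1.0 - item_variances / total_variance)
```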

  14. Eye coding mechanisms in early human face event-related potentials.

    PubMed

    Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G

    2014-11-10

    In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.
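
    The information-theoretic link reported above, between the presence of the contralateral eye and single-trial N170 measures, can be illustrated with a plain histogram-based mutual information estimate. The sketch below is a generic estimator under assumed binning, not the reverse-correlation pipeline used in the study.

```python
import numpy as np

def mutual_information_bits(feature_present: np.ndarray, erp_measure: np.ndarray,
                            n_bins: int = 8) -> float:
    """MI (bits) between a binary per-trial feature (e.g. eye visible) and a
    continuous single-trial ERP measure (amplitude or latency)."""
    edges = np.histogram_bin_edges(erp_measure, bins=n_bins)
    binned = np.digitize(erp_measure, edges)
    joint, _, _ = np.histogram2d(feature_present, binned, bins=[2, n_bins + 2])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```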

  15. Colour detection thresholds in faces and colour patches.

    PubMed

    Tan, Kok Wei; Stephen, Ian D

    2013-01-01

    Human facial skin colour reflects individuals' underlying health (Stephen et al 2011 Evolution & Human Behavior 32 216-227), and enhanced facial skin CIELab b* (yellowness), a* (redness), and L* (lightness) are perceived as healthy (also Stephen et al 2009a International Journal of Primatology 30 845-857). Here, we examine Malaysian Chinese participants' detection thresholds for CIELab L* (lightness), a* (redness), and b* (yellowness) colour changes in Asian, African, and Caucasian faces and skin-coloured patches. Twelve face photos and three skin-coloured patches were transformed to produce four pairs of images of each individual face and colour patch with different amounts of red, yellow, or lightness, from very subtle (deltaE = 1.2) to quite large differences (deltaE = 9.6). Participants were asked to decide which of sequentially displayed, paired same-face images or colour patches was lighter, redder, or yellower. Changes in facial redness, followed by changes in yellowness, were more easily discriminated than changes in luminance. However, visual sensitivity was not greater for redness and yellowness in non-face stimuli, suggesting a special salience of red facial skin colour. Participants were also significantly better at recognizing colour differences in own-race (Asian) and Caucasian faces than in African faces, suggesting a cross-race effect in discriminating facial colours. Human colour vision may have been selected for skin colour signalling (Changizi et al 2006 Biology Letters 2 217-221), enabling individuals to perceive subtle changes in skin colour that reflect health and emotional status.
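
    The deltaE values quoted above are Euclidean distances in CIELab space (the CIE76 formulation). The sketch below computes that distance; the example Lab triplet is a hypothetical skin colour, not one of the study's stimuli.

```python
import numpy as np

def delta_e_cie76(lab1, lab2) -> float:
    """CIE76 colour difference: Euclidean distance between two (L*, a*, b*) triplets."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)))

# A subtle redness-only change of deltaE = 1.2 on a hypothetical skin colour.
print(delta_e_cie76((65.0, 14.0, 17.0), (65.0, 15.2, 17.0)))  # -> 1.2
```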

  16. Vision based interface system for hands free control of an Intelligent Wheelchair.

    PubMed

    Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi

    2009-08-06

    Due to the shift in the age structure of today's populations, the need to develop devices and technologies to support elderly and disabled people has been increasing. Traditionally, the wheelchair, whether powered or manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly; however, it remains highly restrictive, especially for the severely disabled. As a solution, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that provides a more convenient and efficient interface for people with disabilities in their limbs. This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information: the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infra-red sensors, a PC camera, and a vision system. The vision system analyzes the user's gestures in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using AdaBoost; thereafter the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to a cluttered background and time-varying illumination. To demonstrate these advantages, the proposed system was tested with 34 users in indoor and outdoor environments and compared with other systems; the results showed that the proposed system has superior performance in terms of speed and accuracy, and that it provides a friendly and convenient interface for severely disabled people.
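
    The face-inclination and mouth-shape logic described above can be summarised as a small mapping from two measurements to a drive command, as in the sketch below. The angle threshold and command names are illustrative assumptions; the actual system derives these inputs from AdaBoost face detection, edge-based mouth detection, and K-means clustering.

```python
import math

def face_inclination_deg(left_eye_xy, right_eye_xy) -> float:
    """Head roll estimated from the line joining the two eye centres (image coordinates)."""
    dx = right_eye_xy[0] - left_eye_xy[0]
    dy = right_eye_xy[1] - left_eye_xy[1]
    return math.degrees(math.atan2(dy, dx))

def wheelchair_command(inclination_deg: float, mouth_open: bool,
                       turn_threshold_deg: float = 15.0) -> str:
    """Map face inclination and mouth shape to a simple drive command."""
    if not mouth_open:
        return "stop"                      # closed mouth halts the chair
    if inclination_deg > turn_threshold_deg:
        return "turn_right"
    if inclination_deg < -turn_threshold_deg:
        return "turn_left"
    return "forward"
```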

  17. Adaptive skin detection based on online training

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang

    2007-11-01

    Skin is a widely used cue for porn image classification. Most conventional methods are off-line training schemes: they usually use a fixed boundary to segment skin regions in the images and are effective only under restricted conditions, e.g. good lighting and a single ethnicity. This paper presents an adaptive online training scheme for skin detection which can handle these tough cases. In our approach, skin detection is treated as a classification problem on a Gaussian mixture model. For each image, the human face is detected and the face colour is used to establish a primary estimate of the skin colour distribution. An adaptive online training algorithm is then used to find the real boundary between skin colour and background colour in the current image. Experimental results on 450 images showed that the proposed method is more robust in general situations than conventional ones.
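
    The per-image adaptation described above amounts to fitting a colour model to pixels sampled from the detected face and then scoring every pixel against it. The sketch below does this with scikit-learn's Gaussian mixture model; the component count and log-likelihood threshold are assumptions, and the paper's online boundary refinement is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_skin_mask(image_rgb: np.ndarray, face_pixels_rgb: np.ndarray,
                       n_components: int = 3, log_lik_threshold: float = -12.0) -> np.ndarray:
    """Label image pixels whose colour is likely under a face-derived Gaussian mixture."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(face_pixels_rgb.reshape(-1, 3).astype(float))
    scores = gmm.score_samples(image_rgb.reshape(-1, 3).astype(float))
    return (scores > log_lik_threshold).reshape(image_rgb.shape[:2])
```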

  18. Real-time driver fatigue detection based on face alignment

    NASA Astrophysics Data System (ADS)

    Tao, Huanhuan; Zhang, Guiying; Zhao, Yong; Zhou, Yi

    2017-07-01

    The performance and robustness of fatigue detection decrease considerably if the driver wears glasses. To address this issue, this paper proposes a practical driver fatigue detection method based on the face alignment at 3000 FPS algorithm. Firstly, the eye regions of the driver are localized by exploiting 6 landmarks surrounding each eye. Secondly, HOG features of the extracted eye regions are calculated and fed into an SVM classifier to recognize the eye state. Finally, the value of PERCLOS is calculated to determine whether the driver is drowsy. An alarm is generated if the eyes stay closed for a specified period of time. The accuracy and real-time performance on test videos with different drivers demonstrate that the proposed algorithm is robust and obtains better accuracy for driver fatigue detection compared with previous methods.
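
    PERCLOS, the drowsiness measure mentioned above, is simply the fraction of recent frames in which the eyes are judged closed. The sketch below tracks it over a sliding window together with a consecutive-closed-frames alarm; the window size and thresholds are illustrative, not the paper's settings.

```python
from collections import deque

class PerclosMonitor:
    """Track PERCLOS over recent frames and flag likely drowsiness."""

    def __init__(self, window_frames: int = 900, perclos_threshold: float = 0.15,
                 closed_run_threshold: int = 75):
        self.states = deque(maxlen=window_frames)   # 1 = eye closed, 0 = eye open
        self.perclos_threshold = perclos_threshold
        self.closed_run_threshold = closed_run_threshold
        self.closed_run = 0

    def update(self, eye_closed: bool) -> bool:
        """Feed one per-frame eye-state decision; return True if an alarm should fire."""
        self.states.append(1 if eye_closed else 0)
        self.closed_run = self.closed_run + 1 if eye_closed else 0
        perclos = sum(self.states) / len(self.states)
        return perclos > self.perclos_threshold or self.closed_run > self.closed_run_threshold
```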

  19. Autotracking from space - The TDRSS approach

    NASA Astrophysics Data System (ADS)

    Spearing, R. E.; Harper, W. R.

    The TDRSS will provide telecommunications support to near-earth orbiting satellites through the 1980s and into the 1990s. The system incorporates two operational satellites at geostationary altitude and a single ground station at White Sands, NM. Of the many tasks facing the engineering team in development of this system, one of the most challenging was K-band autotrack. An approach not previously attempted placed the error detection, processing, and feedback elements for automatic control of the TDR satellite antennas on the ground. This approach offered several advantages to the designers but posed a number of interesting questions during the development program. The autotrack system design and its test program are described with emphasis given to areas of special interest in developing a working K-band service.

  20. Autotracking from space - The TDRSS approach

    NASA Technical Reports Server (NTRS)

    Spearing, R. E.; Harper, W. R.

    1984-01-01

    The TDRSS will provide telecommunications support to near-earth orbiting satellites through the 1980s and into the 1990s. The system incorporates two operational satellites at geostationary altitude and a single ground station at White Sands, NM. Of the many tasks facing the engineering team in development of this system, one of the most challenging was K-band autotrack. An approach not previously attempted placed the error detection, processing, and feedback elements for automatic control of the TDR satellite antennas on the ground. This approach offered several advantages to the designers but posed a number of interesting questions during the development program. The autotrack system design and its test program are described with emphasis given to areas of special interest in developing a working K-band service.
