Sample records for facial motion capture

  1. Orthogonal-blendshape-based editing system for facial motion capture data.

    PubMed

    Li, Qing; Deng, Zhigang

    2008-01-01

    The authors present a novel data-driven 3D facial motion capture data editing system that uses automated construction of an orthogonal blendshape face model and constrained weight propagation, bridging the popular facial motion capture technique and the blendshape approach. In this work, a 3D facial motion capture editing problem is transformed into a blendshape animation editing problem. Given a collected facial motion capture data set, a truncated PCA space spanned by the greatest retained eigenvectors, together with a corresponding blendshape face model, is constructed for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing the corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible control.
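
    The per-region truncated-PCA construction described above can be sketched in a few lines; this is an illustrative toy on random stand-in data, not the authors' implementation:

```python
import numpy as np

def build_region_blendshapes(frames, k):
    """Truncated PCA basis for one facial region's mocap data.

    frames: (n_frames, n_dims) marker coordinates for the region.
    Returns the mean, a basis of k greatest eigenvectors, and
    per-frame weights (PCA coefficients).
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # Right singular vectors of the centered data are the eigenvectors
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]
    weights = centered @ basis.T
    return mean, basis, weights

def reconstruct(mean, basis, weights):
    """Map (possibly edited) blendshape weights back to marker motion."""
    return mean + weights @ basis

# Toy example: 100 frames of a 6-dimensional "region"
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 6))
mean, basis, w = build_region_blendshapes(frames, k=3)
w_edited = w.copy()
w_edited[:, 0] *= 1.5            # exaggerate the first mode of motion
edited_frames = reconstruct(mean, basis, w_edited)
```

Editing a weight column edits every frame of the corresponding region's motion at once, which is the equivalence the abstract describes.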

  2. Marker optimization for facial motion acquisition and deformation.

    PubMed

    Le, Binh H; Zhu, Mingyang; Deng, Zhigang

    2013-11-01

    A long-standing problem in marker-based facial motion capture is how to determine optimal facial mocap marker layouts. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.

  3. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than in females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.

  4. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
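
    The accuracy comparison described above (per-axis discrepancies and mean 3D distance between manual and automatic landmarks) amounts to a simple computation; the coordinates below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def tracking_discrepancy(manual, auto):
    """Compare manually digitised and automatically tracked landmarks.

    manual, auto: (n_landmarks, 3) arrays of x, y, z coordinates in mm.
    Returns the per-axis mean absolute discrepancy and the mean 3D
    distance between corresponding landmarks.
    """
    per_axis = np.abs(manual - auto).mean(axis=0)     # x, y, z
    distances = np.linalg.norm(manual - auto, axis=1)
    return per_axis, distances.mean()

# Hypothetical digitisations of 23 landmarks (illustrative values)
rng = np.random.default_rng(1)
manual = rng.uniform(-50, 50, size=(23, 3))
auto = manual + rng.normal(scale=0.1, size=(23, 3))
per_axis, mean_dist = tracking_discrepancy(manual, auto)
```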

  5. Decoding facial expressions based on face-selective and motion-sensitive areas.

    PubMed

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition had remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information play important roles by carrying considerable expression information that can facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
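
    The core MVPA idea above, decoding class labels from multi-voxel activity patterns with a cross-validated linear classifier, can be sketched as follows; the synthetic data, class structure, and classifier choice are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic "voxel patterns": 60 trials x 50 voxels, 6 expression
# classes, each class shifted along its own random direction
rng = np.random.default_rng(4)
n_per, n_vox, n_cls = 10, 50, 6
dirs = rng.normal(size=(n_cls, n_vox))
X = np.vstack([d + rng.normal(scale=0.5, size=(n_per, n_vox))
               for d in dirs])
y = np.repeat(np.arange(n_cls), n_per)

# MVPA: above-chance cross-validated accuracy indicates the region's
# activity patterns carry expression information
acc = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5).mean()
```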

  6. What the Human Brain Likes About Facial Motion

    PubMed Central

    Schultz, Johannes; Brockhaus, Matthias; Bülthoff, Heinrich H.; Pilz, Karin S.

    2013-01-01

    Facial motion carries essential information about other people's emotions and intentions. Most previous studies have suggested that facial motion is mainly processed in the superior temporal sulcus (STS), but several recent studies have also shown involvement of ventral temporal face-sensitive regions. Up to now, it is not known whether the increased response to facial motion is due to an increased amount of static information in the stimulus, to the deformation of the face over time, or to increased attentional demands. We presented nonrigidly moving faces and control stimuli to participants performing a demanding task unrelated to the face stimuli. We manipulated the amount of static information by using movies with different frame rates. The fluidity of the motion was manipulated by presenting movies with frames either in the order in which they were recorded or in scrambled order. Results confirm higher activation for moving compared with static faces in STS and under certain conditions in ventral temporal face-sensitive regions. Activation was maximal at a frame rate of 12.5 Hz and smaller for scrambled movies. These results indicate that both the amount of static information and the fluid facial motion per se are important factors for the processing of dynamic faces. PMID:22535907

  7. Contrasting Specializations for Facial Motion Within the Macaque Face-Processing System

    PubMed Central

    Fisher, Clark; Freiwald, Winrich A.

    2014-01-01

    Facial motion transmits rich and ethologically vital information [1, 2], but how the brain interprets this complex signal is poorly understood. Facial form is analyzed by anatomically distinct face patches in the macaque brain [3, 4], and facial motion activates these patches and surrounding areas [5, 6]. Yet it is not known whether facial motion is processed by its own distinct and specialized neural machinery, and if so, what that machinery’s organization might be. To address these questions, we used functional magnetic resonance imaging (fMRI) to monitor the brain activity of macaque monkeys while they viewed low- and high-level motion and form stimuli. We found that, beyond classical motion areas and the known face patch system, moving faces recruited a heretofore-unrecognized face patch. Although all face patches displayed distinctive selectivity for face motion over object motion, only two face patches preferred naturally moving faces, while three others preferred randomized, rapidly varying sequences of facial form. This functional divide was anatomically specific, segregating dorsal from ventral face patches, thereby revealing a new organizational principle of the macaque face-processing system. PMID:25578903

  8. Analysis of facial motion patterns during speech using a matrix factorization algorithm

    PubMed Central

    Lucero, Jorge C.; Munhall, Kevin G.

    2008-01-01

    This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consists of three-dimensional displacement records of a set of markers located on a subject’s face while producing speech. A QR factorization with column pivoting algorithm selects a subset of markers with independent motion patterns. The subset is used as a basis to fit the motion of the other facial markers, which determines facial regions of influence of each of the linearly independent markers. Those regions constitute kinematic “eigenregions” whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records. PMID:19062866
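
    The marker-selection step described above maps directly onto a pivoted QR factorization; a toy sketch on synthetic displacement records (not the paper's data):

```python
import numpy as np
from scipy.linalg import qr, lstsq

# Synthetic displacement records: rows are time samples, columns are
# marker channels driven by three underlying motion patterns (rank 3)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
sources = np.stack([np.sin(2 * np.pi * 3 * t),
                    np.cos(2 * np.pi * 5 * t),
                    t ** 2], axis=1)
D = sources @ rng.normal(size=(3, 10))   # 10 marker channels

# QR with column pivoting orders columns by decreasing independence
_, R, piv = qr(D, mode="economic", pivoting=True)
k = 3                                    # number of independent markers
independent = piv[:k]                    # selected marker indices
dependent = piv[k:]

# Fit the remaining markers as linear combinations of the subset; the
# fit defines each independent marker's region of influence
coeffs, *_ = lstsq(D[:, independent], D[:, dependent])
residual = np.abs(D[:, independent] @ coeffs - D[:, dependent]).max()
```

Because the synthetic data have exact rank 3, the three pivoted columns reproduce all other channels to numerical precision.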

  9. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    PubMed

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
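
    One piece of the pipeline above, warping a neutral take to the length of an expressive take and subtracting to isolate the dynamic expression signal, can be sketched as follows; the linear resampling here is a simplified stand-in for the phoneme-based time-warping the authors describe, and the data are random placeholders:

```python
import numpy as np

def time_warp(signal, n_out):
    """Linearly resample a motion signal to n_out frames (a simple
    stand-in for phoneme-based time-warping)."""
    n_in = len(signal)
    src = np.linspace(0, n_in - 1, n_out)
    idx = np.arange(n_in)
    return np.stack([np.interp(src, idx, signal[:, d])
                     for d in range(signal.shape[1])], axis=1)

# Warp the neutral take to the expressive take's length, then subtract
# to isolate the expression signal (illustrative random data)
rng = np.random.default_rng(0)
expressive = rng.normal(size=(120, 6))   # expressive take, 120 frames
neutral = rng.normal(size=(90, 6))       # neutral take, 90 frames
expression_signal = expressive - time_warp(neutral, 120)
```

In the paper, signals of this kind are then PCA-reduced to form the expression eigenspace (PIEES).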

  10. AMUC: Associated Motion capture User Categories.

    PubMed

    Norman, Sally Jane; Lawson, Sian E M; Olivier, Patrick; Watson, Paul; Chan, Anita M-A; Dade-Robertson, Martyn; Dunphy, Paul; Green, Dave; Hiden, Hugo; Hook, Jonathan; Jackson, Daniel G

    2009-07-13

    The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, time and position signals obtained from the sketch being mapped to the properties of data streams stored in the motion capture repository. As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design and navigation, and digital image processing, underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing identification of rich potential development areas.

  11. Motion onset does not capture attention when subsequent motion is "smooth".

    PubMed

    Sunny, Meera Mary; von Mühlenen, Adrian

    2011-12-01

    Previous research on the attentional effects of moving objects has shown that motion per se does not capture attention. However, later studies argued that the onset of motion does capture attention. Here, we show that this motion-onset effect critically depends on motion jerkiness, that is, the rate at which the moving stimulus is refreshed. Experiment 1 used search displays with a static, a motion-onset, and an abrupt-onset stimulus, while systematically varying the refresh rate of the moving stimulus. The results showed that motion onset only captures attention when subsequent motion is jerky (8 and 17 Hz), not when it is smooth (33 and 100 Hz). Experiment 2 replaced motion onset with continuous motion, showing that motion jerkiness does not affect how continuous motion is processed. These findings do not support accounts that assume a special role for motion onset, but they are in line with the more general unique-event account.

  12. Motion Analysis System for Instruction of Nihon Buyo using Motion Capture

    NASA Astrophysics Data System (ADS)

    Shinoda, Yukitaka; Murakami, Shingo; Watanabe, Yuta; Mito, Yuki; Watanuma, Reishi; Marumo, Mieko

    The passing on and preserving of advanced technical skills has become an important issue in a variety of fields, and motion analysis using motion capture has recently become popular in research on advanced physical skills. This research aims to construct a system with a high on-site instructional effect for dancers learning Nihon Buyo, a traditional dance of Japan, and to classify Nihon Buyo dancing according to style, school, and dancer's proficiency by motion analysis. Now that body-motion data can be digitized and stored by motion capture systems using high-performance computers, it has become possible to study motion analysis systems for teaching Nihon Buyo. Thus, with the aim of developing a user-friendly instruction-support system, we have constructed a motion analysis system that displays a dancer's time series of body motions and center of gravity for instructional purposes. In this paper, we outline this instructional motion analysis system based on three-dimensional position data obtained by motion capture. We also describe motion analysis performed on center-of-gravity data obtained by this system, as well as motion analysis focusing on school and age group.

  13. Mobile Motion Capture--MiMiC.

    PubMed

    Harbert, Simeon D; Jaiswal, Tushar; Harley, Linda R; Vaughn, Tyler W; Baranak, Andrew S

    2013-01-01

    The low-cost, simple, robust, mobile, and easy-to-use Mobile Motion Capture (MiMiC) system is presented, and the constraints that guided its design are discussed. The MiMiC Android application allows motion data to be captured from kinematic modules such as Shimmer 2r sensors over Bluetooth. MiMiC is cost-effective and can be used for an entire day in a person's daily routine without being intrusive. MiMiC is a flexible motion capture system that can be used for many applications, including fall detection, detection of fatigue in industry workers, and analysis of individuals' work patterns in various environments.

  14. Motion capture for human motion measuring by using single camera with triangle markers

    NASA Astrophysics Data System (ADS)

    Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi

    2005-12-01

    This study aims to realize motion capture for measuring 3D human motions by using a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, among others, an optical motion capture method using one camera has not been established. In this paper, the authors achieved 3D motion capture using one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration methods produced a 3D coordinate transformation parameter and a lens distortion parameter using the modified DLT method. The triangle markers made it possible to calculate the depth coordinate in the camera coordinate system. Experiments measuring 3D positions with the MMC in a cubic measurement space 2 m on each side showed that the average error in measuring the center of gravity of a triangle marker was less than 2 mm. Compared with conventional multi-camera motion capture, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate a marker's position from its measured velocity was proposed in order to improve the accuracy of the MMC.
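
    The depth-from-known-size idea behind the triangle markers can be illustrated with a pinhole-camera sketch; this simplified model (a single known edge length and a known focal length, with the edge roughly parallel to the image plane) stands in for the paper's modified-DLT calibration:

```python
import numpy as np

def triangle_depth(p1_px, p2_px, side_mm, focal_px):
    """Estimate the depth of a triangle-marker edge from one camera.

    Under a pinhole model, an edge of known physical length side_mm
    that projects to l pixels lies at depth z = focal_px * side_mm / l.
    """
    l = np.linalg.norm(np.asarray(p1_px, float) - np.asarray(p2_px, float))
    return focal_px * side_mm / l

# Example: a 100 mm marker edge seen by a camera with a 1000 px focal
# length projects to an edge 50 px long, so it lies 2000 mm away
z = triangle_depth((100, 100), (150, 100), side_mm=100.0, focal_px=1000.0)
```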

  15. Rigid Facial Motion Influences Featural, But Not Holistic, Face Processing

    PubMed Central

    Xiao, Naiqi; Quinn, Paul C.; Ge, Liezhong; Lee, Kang

    2012-01-01

    We report three experiments in which we investigated the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1, 2, and 3, participants were first familiarized with dynamic displays in which a target face turned from one side to another; at test, participants judged whether the top half of a composite face (the top half of the target face aligned or misaligned with the bottom half of a foil face) belonged to the target face. We compared performance in the dynamic condition to various static control conditions, which differed from each other in the display order of the multiple static images or the interstimulus interval (ISI) between the images. We found that the size of the face composite effect in the dynamic condition was significantly smaller than that in the static conditions. In other words, the dynamic face display led participants to process the target faces in a part-based manner, and consequently their recognition of the upper portion of the composite face at test suffered less interference from the aligned lower part of the foil face. The findings from the present experiments provide the strongest evidence to date that rigid facial motion mainly influences featural, but not holistic, face processing. PMID:22342561

  16. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
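
    The Gauss-Newton minimization mentioned above follows a standard pattern; a generic sketch on a toy least-squares problem (not the paper's surface-tracking objective):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50):
    """Minimise 0.5 * ||r(x)||^2 with Gauss-Newton: at each step solve
    the linearised least-squares problem J dx = -r and update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x

# Toy nonlinear fit: recover (a, b) in y = a * exp(b * t) from samples
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.stack([np.exp(x[1] * t),
                          x[0] * t * np.exp(x[1] * t)], axis=1)
a, b = gauss_newton(res, jac, x0=[1.0, 0.0])
```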

  17. Animation control of surface motion capture.

    PubMed

    Tejera, Margara; Casas, Dan; Hilton, Adrian

    2013-12-01

    Surface motion capture (SurfCap) of actor performance from multiple view video provides reconstruction of the natural nonrigid deformation of skin and clothing. This paper introduces techniques for interactive animation control of SurfCap sequences which allow the flexibility in editing and interactive manipulation associated with existing tools for animation from skeletal motion capture (MoCap). Laplacian mesh editing is extended using a basis model learned from SurfCap sequences to constrain the surface shape to reproduce natural deformation. Three novel approaches for animation control of SurfCap sequences, which exploit the constrained Laplacian mesh editing, are introduced: 1) space–time editing for interactive sequence manipulation; 2) skeleton-driven animation to achieve natural nonrigid surface deformation; and 3) hybrid combination of skeletal MoCap-driven animation and SurfCap sequences to extend the range of movement. These approaches are combined with high-level parametric control of SurfCap sequences in a hybrid surface and skeleton-driven animation control framework to achieve natural surface deformation with an extended range of movement by exploiting existing MoCap archives. Evaluation of each approach and the integrated animation framework are presented on real SurfCap sequences for actors performing multiple motions with a variety of clothing styles. Results demonstrate that these techniques enable flexible control for interactive animation with the natural nonrigid surface dynamics of the captured performance and provide a powerful tool to extend current SurfCap databases by incorporating new motions from MoCap sequences.

  18. Facial motion parameter estimation and error criteria in model-based image coding

    NASA Astrophysics Data System (ADS)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there is no proper error criterion for quality assessment that is consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and the error function of contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  19. Feasibility of Using Low-Cost Motion Capture for Automated Screening of Shoulder Motion Limitation after Breast Cancer Surgery.

    PubMed

    Gritsenko, Valeriya; Dailey, Eric; Kyle, Nicholas; Taylor, Matt; Whittacre, Sean; Swisher, Anne K

    2015-01-01

    Objective: To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Design: Descriptive study of motion measured via 2 methods. Setting: Academic cancer center oncology clinic. Participants: 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Methods: Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by the Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Main outcome measures: Correlation of motion capture with goniometry and detection of motion limitation. Results: Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Conclusions: Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.
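
    Turning captured 3D keypoints into joint angles, as both measurement methods above require, reduces to a vector-angle computation; the keypoint names and coordinates here are illustrative, not the study's processing code:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c, e.g. the
    hip-shoulder-elbow elevation angle from skeleton keypoints."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny numerical overshoot outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Arm raised perpendicular to the torso: expect about 90 degrees
angle = joint_angle(a=(0, -1, 0), b=(0, 0, 0), c=(1, 0, 0))
```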

  20. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    PubMed

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

    Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity for examining perceptual and cognitive processing of facial expressions. Higher-order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding of the motion-based intensity of facial expressions. Comparing the emotion task with the gender discrimination task revealed increased activation of the inferior parietal lobule, which highlights the involvement of parietal areas in processing high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.

  21. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) need reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is rather fast, as in vehicle motion, sport biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras to acquire video sequences of object motion. All cameras work in synchronization mode at frame rates up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system was used in a range of application fields and demonstrated high accuracy and a high level of automation.
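
    The core photogrammetric step, triangulating a 3D point from two synchronized calibrated cameras, can be sketched with linear (DLT) triangulation; the camera matrices and point below are synthetic:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: image coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector of A: homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Two synthetic cameras one unit apart along x, both looking down +z
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noiseless projections the estimate matches the true point to numerical precision; real systems solve the same equations in a least-squares sense.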

  22. Validation of the Leap Motion Controller using markered motion capture technology.

    PubMed

    Smeragliuolo, Anna H; Hill, N Jeremy; Disla, Luis; Putrino, David

    2016-06-14

    The Leap Motion Controller (LMC) is a low-cost, markerless motion capture device that tracks hand, wrist and forearm position. Integration of this technology into healthcare applications has begun to occur rapidly, making validation of the LMC's data output an important research goal. Here, we perform a detailed evaluation of the kinematic data output from the LMC, and validate this output against gold-standard, markered motion capture technology. We instructed subjects to perform three clinically-relevant wrist (flexion/extension, radial/ulnar deviation) and forearm (pronation/supination) movements. The movements were simultaneously tracked using both the LMC and a marker-based motion capture system from Motion Analysis Corporation (MAC). Adjusting for known inconsistencies in the LMC sampling frequency, we compared simultaneously acquired LMC and MAC data by computing Pearson's correlation (r) and root mean square error (RMSE). Wrist flexion/extension and radial/ulnar deviation showed good overall agreement (r=0.95; RMSE=11.6°, and r=0.92; RMSE=12.4°, respectively) with the MAC system. However, when tracking forearm pronation/supination, there were serious inconsistencies in reported joint angles (r=0.79; RMSE=38.4°). Hand posture significantly influenced the quality of wrist deviation (P<0.005) and forearm supination/pronation (P<0.001), but not wrist flexion/extension (P=0.29). We conclude that the LMC is capable of providing data that are clinically meaningful for wrist flexion/extension, and perhaps wrist deviation. It cannot yet return clinically meaningful data for measuring forearm pronation/supination. Future studies should continue to validate the LMC as updated versions of its software are developed. Copyright © 2016 Elsevier Ltd. All rights reserved.
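
    The two agreement metrics used above, Pearson's r and RMSE, are straightforward to compute; the joint-angle traces below are synthetic stand-ins for the LMC and MAC recordings:

```python
import numpy as np

def agreement(a, b):
    """Pearson correlation and RMSE between two joint-angle traces,
    the two metrics used to compare device and reference recordings."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    r = np.corrcoef(a, b)[0, 1]
    rmse = np.sqrt(np.mean((a - b) ** 2))
    return r, rmse

# Hypothetical wrist flexion/extension traces in degrees
t = np.linspace(0, 2 * np.pi, 100)
mac = 40 * np.sin(t)                       # marker-based reference
lmc = mac + np.random.default_rng(2).normal(scale=5, size=100)
r, rmse = agreement(lmc, mac)
```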

  3. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    PubMed

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

    We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields, including medicine, sports and animation. Key tasks in working with motion data include the analysis of motion states and transitions, and the synthesis of motion vectors by interpolation and combination. In the research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in the presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states, which serves as a basis for data navigation, exploration, and search. Starting from an overview-first visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and to explore search results via details on demand. We developed MotionExplorer in close collaboration with its target users, researchers working on human motion synthesis and analysis, and evaluated it in a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  4. Accurate visible speech synthesis based on concatenating variable length motion capture data.

    PubMed

    Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara

    2006-01-01

    We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on this approach. The system is currently used in more than 60 kindergarten-through-third-grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations were conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.
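    At its core, the search for optimal variable-length unit sequences is a dynamic-programming problem over candidate units with per-slot target costs and pairwise concatenation costs. The sketch below is a generic Viterbi-style unit selection, not the paper's specific algorithm; the cost functions are placeholders:

```python
import numpy as np

def select_units(target_costs, join_cost):
    """Viterbi search for the cheapest sequence of concatenated units.

    target_costs : list of 1D arrays; target_costs[t][i] is the cost of
                   using candidate unit i at slot t.
    join_cost    : function (t, i, j) -> cost of concatenating candidate i
                   at slot t-1 with candidate j at slot t.
    Returns the index of the chosen candidate for each slot.
    """
    T = len(target_costs)
    best = [np.asarray(target_costs[0], dtype=float)]
    back = []
    for t in range(1, T):
        cur = np.asarray(target_costs[t], dtype=float)
        prev = best[-1]
        cost = np.empty_like(cur)
        ptr = np.empty(len(cur), dtype=int)
        for j in range(len(cur)):
            totals = [prev[i] + join_cost(t, i, j) for i in range(len(prev))]
            ptr[j] = int(np.argmin(totals))
            cost[j] = totals[ptr[j]] + cur[j]
        best.append(cost)
        back.append(ptr)
    # Trace back the optimal path
    path = [int(np.argmin(best[-1]))]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]
```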

  5. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

    Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film and sport coaching industries. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures or elbow and knee flexion and extension, large numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymers (EAPs) that are soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20 Hz.

  6. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    PubMed Central

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-01-01

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618

  8. Validation of enhanced kinect sensor based motion capturing for gait assessment

    PubMed Central

    Müller, Björn; Ilg, Winfried; Giese, Martin A.

    2017-01-01

    Optical motion capture systems are expensive and require substantial dedicated space to be set up. On the other hand, they provide unsurpassed accuracy and reliability. In many situations, however, flexibility is required and the motion capture system can only be set up temporarily. The Microsoft Kinect v2 sensor is comparatively cheap, and promising results have been published with respect to gait analysis. We here present a motion capture system that is easy to set up, flexible with respect to sensor locations, and delivers accuracy in gait parameters comparable to a gold-standard motion capture system (VICON). Further, we demonstrate that sensor setups which track the person from one side only are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports. PMID:28410413
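    Agreement between a Kinect-based system and a gold standard such as VICON is commonly summarized with Bland-Altman statistics. A minimal sketch (the paper's exact agreement measures may differ):

```python
import numpy as np

def bland_altman(kinect, vicon):
    """Bland-Altman agreement statistics between two gait-parameter series.

    Returns the mean bias (kinect - vicon) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences).
    """
    d = np.asarray(kinect, dtype=float) - np.asarray(vicon, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```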

  9. Inertial Motion Capture Costume Design Study

    PubMed Central

    Szczęsna, Agnieszka; Skurowski, Przemysław; Lach, Ewa; Pruszowski, Przemysław; Pęszor, Damian; Paszkuta, Marcin; Słupik, Janusz; Lebek, Kamil; Janiak, Mateusz; Polański, Andrzej; Wojciechowski, Konrad

    2017-01-01

    The paper describes a scalable, wearable multi-sensor system for motion capture based on inertial measurement units (IMUs), each composed of an accelerometer, a gyroscope and a magnetometer. The final quality of the obtained motion arises from all the individual parts of the described system. The proposed system is a sequence of the following stages: sensor data acquisition, sensor orientation estimation, system calibration, pose estimation and data visualisation. Building the system's architecture with the dataflow programming paradigm makes it easy to add, remove and replace data processing steps, and the modular architecture allows effortless introduction of new sensor orientation estimation algorithms. The original contribution of the paper is the design study of the individual components used in the motion capture system. Two key steps of the system design are explored in this paper: the evaluation of sensors and of algorithms for orientation estimation. The three chosen algorithms were implemented and investigated as part of the experiment. Because the choice of sensor has a significant impact on the final result, the sensor evaluation process is also explained and tested. The experimental results confirmed that the choice of sensor and orientation estimation algorithm affects the quality of the final results. PMID:28304337

  10. Low-cost human motion capture system for postural analysis onboard ships

    NASA Astrophysics Data System (ADS)

    Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore

    2011-07-01

    The study of human equilibrium, also known as postural stability, spans several research sectors (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed using motion analysis techniques for recording human movements and posture. A wide range of techniques and methodologies has been developed, but the choice of instruments and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest for the maritime community, since ship motions can make maintaining an upright stance difficult and demanding, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service rules out the optical systems commonly used for human motion analysis: such sensors are not designed to operate in adverse environmental conditions (water, humidity, salt) or under suboptimal lighting. The solution proposed in this study is a motion acquisition system that can easily be used onboard ships. It combines two methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an Inertial Measurement Unit (IMU). The developed image-based motion capture system, made up of three low-cost, light and compact video cameras, was validated against a commercial optical system and then used for testing the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from the laboratory tests and preliminary campaigns in the field are presented.

  11. Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.

    PubMed

    Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah

    2016-01-01

    An initial assessment method is proposed that classifies and grades the severity of paralysis into one of six levels according to the House-Brackmann (HB) system, based on facial landmark motion measured with an Optical Flow (OF) algorithm. The desired landmarks were obtained from video recordings of 5 normal and 3 Bell's Palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on motion analysis using area measurement is proposed. This scoring system uses the individual scores from the facial exercises and grades the paralysis according to the HB system. The proposed method has obtained promising results and may play a pivotal role in improved rehabilitation programs for patients.
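    The area-based scoring idea can be sketched as follows: compute the area enclosed by tracked landmarks on each side of the face during an exercise and compare the two sides. The functions below are illustrative, not the authors' implementation; `symmetry_score` is a hypothetical name:

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a closed 2D landmark polygon (N x 2 array)."""
    x, y = np.asarray(pts, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def symmetry_score(left_pts, right_pts):
    """Ratio of affected-to-healthy landmark area; 1.0 = symmetric motion."""
    a_l, a_r = polygon_area(left_pts), polygon_area(right_pts)
    return min(a_l, a_r) / max(a_l, a_r)
```

    In a grading scheme of this kind, lower scores on an exercise would map to higher (worse) House-Brackmann grades.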

  12. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance, machine learning methods for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" of the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, the statistical geometry of invariants of XP for a sample of a population could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encoding motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  13. Real-time marker-free motion capture system using blob feature analysis

    NASA Astrophysics Data System (ADS)

    Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho

    2005-02-01

    This paper presents a real-time marker-free motion capture system which can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motions using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. The 3-dimensional positions of the end-effectors are then reconstructed and tracked using a Kalman filter. Finally, the positions of the intermediate joints are recovered using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct, in real time, the motions of many people wearing various kinds of clothing.
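    Tracking reconstructed 3D end-effector positions with a Kalman filter typically uses a constant-velocity motion model. A minimal sketch (the process and measurement noise levels `q` and `r` are assumptions, not the paper's values):

```python
import numpy as np

def kalman_track(measurements, dt=1 / 30, q=1e-2, r=1e-2):
    """Constant-velocity Kalman filter for a 3D end-effector position.

    measurements : (T, 3) array of noisy triangulated positions.
    Returns the (T, 3) filtered positions.
    """
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)          # position += velocity * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])
    Q = q * np.eye(6)
    R = r * np.eye(3)
    x = np.zeros(6)
    x[:3] = measurements[0]
    P = np.eye(6)
    out = []
    for z in measurements:
        # Predict with the constant-velocity model
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        out.append(x[:3].copy())
    return np.array(out)
```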

  14. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues, weighting each in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
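    Optimal (maximum-likelihood) cue integration weights each cue by its reliability, the inverse of its variance. A minimal sketch of the standard model this kind of study tests against:

```python
import numpy as np

def integrate_cues(mu_form, sigma_form, mu_motion, sigma_motion):
    """Reliability-weighted (maximum-likelihood) integration of two cues.

    Each cue is a Gaussian estimate (mean, sd); the optimal combined
    estimate weights each cue by its relative reliability 1/sigma^2.
    """
    w_f = sigma_form ** -2
    w_m = sigma_motion ** -2
    mu = (w_f * mu_form + w_m * mu_motion) / (w_f + w_m)
    sigma = (w_f + w_m) ** -0.5   # combined sd is never worse than either cue
    return mu, sigma
```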

  15. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms either feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
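    The key property of a ratio-style feature is that, under a Lambertian image model (intensity = albedo x shading), dividing an expression frame by a neutral reference frame cancels the per-pixel albedo. A minimal sketch of this idea, not the paper's exact feature definition:

```python
import numpy as np

def ratio_feature(img_expr, img_neutral, eps=1e-6):
    """Per-pixel ratio of an expression frame to a neutral reference frame.

    Under a Lambertian model I = albedo * shading, the albedo cancels in
    the ratio, leaving only the shading change caused by skin deformation.
    eps guards against division by zero in dark pixels.
    """
    return img_expr / (img_neutral + eps)
```

    The test below checks the albedo-cancellation property: two faces with different skin textures but identical shading changes yield (nearly) identical ratio features.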

  16. Health Problems Discovery from Motion-Capture Data of Elderly

    NASA Astrophysics Data System (ADS)

    Pogorelc, B.; Gams, M.

    Rapid aging of the population of the developed countries could exceed society's capacity to care for them. To help address this problem, we propose a system for automatic discovery of health problems from motion-capture data of the gait of the elderly. The gait of the user is captured with a motion capture system consisting of tags attached to the body and sensors situated in the apartment. The positions of the tags are acquired by the sensors, and the resulting time series of position coordinates are analyzed with machine learning algorithms in order to identify the specific health problem. We propose novel features for training a machine learning classifier that classifies the user's gait as: i) normal, ii) with hemiplegia, iii) with Parkinson's disease, iv) with pain in the back, and v) with pain in the leg. Results show that naive Bayes requires more tags and less noise to reach a classification accuracy of 98%, whereas support vector machines reach 99%.
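    The naive Bayes side of such a comparison can be illustrated with a hand-rolled Gaussian classifier over gait features. The sketch below assumes uniform class priors and synthetic features, not the paper's data:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: each gait feature is modelled as an
    independent normal distribution per class (uniform class priors)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class, summed over features
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll, axis=1)]
```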

  17. Derivation of capture probabilities for the corotation eccentric mean motion resonances

    NASA Astrophysics Data System (ADS)

    El Moutamid, Maryame; Sicardy, Bruno; Renner, Stéfan

    2017-08-01

    We study in this paper the capture of a massless particle into an isolated, first-order corotation eccentric resonance (CER), in the framework of the planar, eccentric and restricted three-body problem near a m + 1: m mean motion commensurability (m integer). While capture into Lindblad eccentric resonances (where the perturber's orbit is circular) was investigated years ago, capture into a CER (where the perturber's orbit is elliptic) has not yet been investigated in detail. Here, we derive the generic equations of motion near a CER in the general case where both the perturber and the test particle migrate. We derive the probability of capture in that context, and we examine more closely two particular cases: (I) if only the perturber is migrating, capture is possible only if the migration is outward from the primary. Notably, the probability of capture is independent of the way the perturber migrates outward; (II) if only the test particle is migrating, then capture is possible only if the algebraic value of its migration rate is a decreasing function of orbital radius. In this case, the probability of capture is proportional to the radial gradient of migration. These results differ from capture into a Lindblad eccentric resonance (LER), where the orbits of the perturber and the test particle must converge for capture to be possible.
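    For reference, the corotation critical angle near the m + 1 : m commensurability and its pendulum-like dynamics are conventionally written as below; this is a sketch, and sign and priming conventions vary between papers:

```latex
% Corotation critical angle near the m+1 : m commensurability, with
% \lambda, \lambda' the mean longitudes of the test particle and of the
% (eccentric) perturber, and \varpi' the perturber's pericentre longitude:
\phi_c = (m+1)\,\lambda' - m\,\lambda - \varpi'
% Near the resonance the angle obeys pendulum-like dynamics,
% with a libration frequency \omega_0 set by the perturber's mass
% and eccentricity:
\ddot{\phi}_c = -\,\omega_0^{2} \sin\phi_c
```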

  18. An automated time and hand motion analysis based on planar motion capture extended to a virtual environment

    NASA Astrophysics Data System (ADS)

    Tinoco, Hector A.; Ovalle, Alex M.; Vargas, Carlos A.; Cardona, María J.

    2015-09-01

    In the context of industrial engineering, predetermined time systems (PTS) play an important role in workplaces because inefficiencies are found in assembly processes that require manual manipulation. In this study, an approach is proposed to analyze time and motion in a manual process using a motion capture system embedded in a virtual environment. The motion capture system tracks passive IR markers located on the hands to obtain the position of each one. For our purpose, a real workplace is represented virtually by domains built from basic geometries. Motion capture data are combined with the virtual workplace to simulate operations carried out in it, and a time and motion analysis is completed by means of an algorithm. To test the methodology, a case study was intentionally designed to violate the principles of motion economy. In the results, it was possible to observe where the hands never crossed as well as where both hands passed through the same place. In addition, the activities done in each zone were observed, and known deficiencies in the layout of the workplace were identified by computational analysis. A frequency analysis of hand velocities revealed errors in the chosen assembly method, showing differences between the hand velocities. An opportunity is seen to quantify aspects that are not easily identified in a traditional time and motion analysis. The automated analysis is considered the main contribution of this study. In the industrial context, a great application is perceived in terms of monitoring the workplace to analyze repeatability, PTS, and the redistribution of the workplace and labor activities using the proposed methodology.
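    The zone-based part of such an analysis reduces to classifying marker positions into workplace regions and accumulating dwell times. A minimal sketch with rectangular zones (an assumption for illustration; real workplaces may need arbitrary polygons):

```python
import numpy as np

def zone_times(track, zones, dt):
    """Time spent by a tracked hand marker in each rectangular workplace zone.

    track : (T, 2) array of planar marker positions, one row per frame.
    zones : dict name -> (xmin, xmax, ymin, ymax).
    dt    : seconds per frame.
    """
    track = np.asarray(track, dtype=float)
    times = {}
    for name, (x0, x1, y0, y1) in zones.items():
        inside = ((track[:, 0] >= x0) & (track[:, 0] <= x1)
                  & (track[:, 1] >= y0) & (track[:, 1] <= y1))
        times[name] = inside.sum() * dt
    return times
```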

  19. An error-based micro-sensor capture system for real-time motion estimation

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li

    2017-10-01

    A wearable micro-sensor motion capture system with 16 IMUs and an error-compensatory complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been a challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to accommodate accurate displacement estimation in different types of activities. The performance of this system is benchmarked against a VICON optical capture system. The experimental results have demonstrated the effectiveness of the system in tracking daily activities, with estimation errors of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
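    The error-compensatory idea behind a complementary filter can be illustrated on a single axis: integrate the gyroscope at high frequency and correct slowly toward the accelerometer-derived angle, which bounds the drift caused by gyro bias. A much-simplified sketch (the paper's filter also estimates magnetic disturbance, omitted here):

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis complementary filter: trust the integrated gyro at high
    frequency and the accelerometer-derived angle at low frequency.

    gyro_rate   : (T,) angular rate in rad/s
    accel_angle : (T,) angle inferred from gravity in rad
    """
    angle = accel_angle[0]
    out = []
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return np.array(out)
```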

  20. Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation

    NASA Astrophysics Data System (ADS)

    Nakata, Robert

    Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue, and counter-terrorism operations. Remote sensor systems typically use visible-band image, infrared or radar sensors. Camera-based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high-resolution motion measurements even when obscured by weather, clouds and smoke, and can penetrate walls and collapsed structures built of non-metallic materials to depths of 1 m to 2 m, depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that does not require external fixed-location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform, resulting in an average improvement of 5 dB in Signal to Interference Ratio (SIR). We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.
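    Phase-based motion compensation works by removing the two-way path-length phase contributed by the platform's own range history R(t). A minimal narrowband sketch, assuming R(t) is known from the secondary motion sensors:

```python
import numpy as np

def motion_compensate(signal, platform_range, wavelength):
    """Remove platform-motion phase from a narrowband CW radar return.

    The platform's range history R(t) contributes a two-way phase of
    4*pi*R(t)/lambda; multiplying by the conjugate phase removes it,
    leaving only target-induced modulation such as respiratory motion.
    """
    phase = 4 * np.pi * platform_range / wavelength
    return signal * np.exp(1j * phase)
```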

  1. Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion

    PubMed Central

    Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.

    2011-01-01

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an

  2. Inertial motion capture system for biomechanical analysis in pressure suits

    NASA Astrophysics Data System (ADS)

    Di Capua, Massimiliano

    A non-invasive system has been developed at the University of Maryland Space Systems Laboratory with the goal of providing a new capability for quantifying the motion of the human inside a space suit. Based on an array of six microprocessors and eighteen microelectromechanical (MEMS) inertial measurement units (IMUs), the Body Pose Measurement System (BPMS) allows the monitoring of the kinematics of the suit occupant in an unobtrusive, self-contained, lightweight and compact fashion, without requiring any external equipment such as that necessary with modern optical motion capture systems. BPMS measures and stores the accelerations, angular rates and magnetic fields acting upon each IMU, which are mounted on the head, torso, and each segment of each limb. In order to convert the raw data into a more useful form, such as a set of body segment angles quantifying pose and motion, a series of geometrical models and a non-linear complementary filter were implemented. The first portion of this work focuses on assessing system performance, which was measured by comparing the BPMS filtered data against rigid-body angles measured through an external VICON optical motion capture system. This type of system is the industry standard and is used here for independent measurement of body pose angles. By comparing the two sets of data, performance metrics such as BPMS system operational conditions, accuracy, and drift were evaluated and correlated against VICON data. After the system and models were verified and their capabilities and limitations assessed, a series of pressure suit evaluations were conducted. Three different pressure suits were used to identify the relationship between usable range of motion and internal suit pressure. In addition to addressing range of motion, a series of exploration tasks were also performed, recorded, and analysed in order to identify different motion patterns and trajectories as suit pressure is increased and overall suit mobility is reduced.

  3. Accuracy of human motion capture systems for sport applications; state-of-the-art review.

    PubMed

    van der Kruk, Eline; Reijne, Marco M

    2018-05-09

    Sport research often requires human motion capture of an athlete. It can, however, be labour-intensive and difficult to select the right system, while manufacturers report specifications determined in set-ups that differ largely from sport research in terms of volume, environment and motion. The aim of this review is to assist researchers in the selection of a suitable motion capture system for their experimental set-up for sport applications. An open online platform is initiated to support (sport) researchers in the selection of a system and to enable them to contribute to and update the overview. Design: systematic review; Method: electronic searches in Scopus, Web of Science and Google Scholar were performed, and the reference lists of the screened articles were scrutinised to determine human motion capture systems used in academically published studies on sport analysis. An overview of 17 human motion capture systems is provided, reporting the general specifications given by the manufacturer (weight and size of the sensors, maximum capture volume, environmental feasibilities) and calibration specifications as determined in peer-reviewed studies. The accuracy of each system is plotted against the measurement range. The overview and chart can assist researchers in the selection of a suitable measurement system. To increase the robustness of the database and to keep up with technological developments, we encourage researchers to perform an accuracy test prior to their experiment and to add to the chart and the system overview (online, open access).

  4. Effective motion planning strategy for space robot capturing targets under consideration of the berth position

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Jinguo

    2018-07-01

    Although many motion planning strategies for missions in which space robots capture floating targets can be found in the literature, relatively little attention has been paid to how to select the berth position where the spacecraft base hovers. In fact, the berth position is a flexible and controllable factor, and selecting a suitable berth position has a great impact on the efficiency of motion planning in the capture mission. Therefore, to make full use of the manoeuvrability of the space robot, this paper proposes a new viewpoint that treats the base berth position as an optimizable parameter, yielding a more comprehensive and effective motion planning strategy. Considering the dynamic coupling, the dynamic singularities, and the physical limitations of space robots, a unified motion planning framework based on forward kinematics and parameter optimization is developed to convert the planning problem into a parameter optimization problem. To relax the strict grasping-position constraints in the capture mission, a new concept of a grasping area is proposed, which greatly simplifies the motion planning. Furthermore, by utilizing the penalty function method, a new concise objective function is constructed. An intelligent algorithm, Particle Swarm Optimization (PSO), is employed as the solver to determine the free parameters. Two capture cases, i.e., capturing a two-dimensional (2D) planar target and capturing a three-dimensional (3D) spatial target, are studied under this framework. The corresponding simulation results demonstrate that the proposed method is efficient and effective for planning capture missions.
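The PSO-plus-penalty idea above can be sketched generically. The snippet below is a minimal particle swarm minimizing a penalty-augmented objective; the toy quadratic objective, the single inequality constraint, and all swarm parameters are illustrative assumptions, not the paper's actual planner, which optimizes berth-position and trajectory parameters.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iters=200, seed=0):
    """Minimal particle swarm optimizer (illustrative, not the paper's exact solver)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    gbest_f = pbest_f.min()
    w, c1, c2 = 0.7, 1.5, 1.5                             # inertia, cognitive, social
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if f.min() < gbest_f:
            gbest, gbest_f = x[f.argmin()].copy(), f.min()
    return gbest, gbest_f

# Penalty-function construction: minimize ||q||^2 subject to q[0] + q[1] >= 1,
# with the constraint folded into the objective as a quadratic penalty term.
def penalized(q, mu=100.0):
    violation = max(0.0, 1.0 - (q[0] + q[1]))
    return float(q @ q + mu * violation**2)

best, best_f = pso_minimize(penalized, np.array([[-2.0, 2.0]] * 2))
```

The penalty weight trades constraint satisfaction against objective value; here the constrained optimum lies near q = (0.5, 0.5).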

  5. A computer analysis of reflex eyelid motion in normal subjects and in facial neuropathy.

    PubMed

    Somia, N N; Rash, G S; Epstein, E E; Wachowiak, M; Sundine, M J; Stremel, R W; Barker, J H; Gossman, D

    2000-12-01

    To demonstrate how computerized eyelid motion analysis can quantify the human reflex blink. Seventeen normal subjects and 10 patients with unilateral facial nerve paralysis were analyzed. Eyelid closure is currently evaluated by systems primarily designed to assess lower/midfacial movements. The methods are subjective, difficult to reproduce, and measure only volitional closure. Reflex closure is responsible for eye hydration, and its evaluation demands dynamic analysis. A 60 Hz video camera incorporated into a helmet was used to analyze blinking. Reflective markers on the forehead and eyelids allowed for the dynamic measurement of the reflex blink. Eyelid displacement, velocity and acceleration were calculated. The degree of synchrony between bilateral blinks was also determined. This study demonstrates that video motion analysis can describe normal and altered eyelid motions in a quantifiable manner. To our knowledge, this is the first study to measure dynamic reflex blinks. Eyelid closure may now be evaluated in kinematic terms. This technique could increase understanding of eyelid motion and permit more accurate evaluation of eyelid function. Dynamic eyelid evaluation has immediate applications in the treatment of facial palsy affecting the reflex blink. Relevance: No method has been developed that objectively quantifies dynamic eyelid closure. Methods currently in use evaluate only volitional eyelid closure and are based on direct and indirect observer assessments. These methods are subjective and are incapable of analyzing dynamic eyelid movements, which are critical to the maintenance of corneal hydration and comfort. A system that quantifies eyelid kinematics can provide a functional analysis of blink disorders and an objective evaluation of their treatment(s).
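Deriving displacement, velocity, and acceleration from 60 Hz marker positions can be sketched with central differences. The synthetic blink profile and its 10 mm closure amplitude below are illustrative assumptions, not the study's data.

```python
import numpy as np

FS = 60.0          # camera frame rate (Hz), as in the study
DT = 1.0 / FS

def blink_kinematics(y):
    """Displacement, velocity, and acceleration of an eyelid marker
    from its sampled vertical position, via central differences."""
    y = np.asarray(y, dtype=float)
    disp = y - y[0]
    vel = np.gradient(y, DT)      # first derivative (central differences)
    acc = np.gradient(vel, DT)    # second derivative
    return disp, vel, acc

# Synthetic blink: eyelid drops 10 mm and reopens over ~0.3 s.
t = np.arange(0, 0.3, DT)
y = -10.0 * np.sin(np.pi * t / 0.3) ** 2   # mm, smooth close/open profile
disp, vel, acc = blink_kinematics(y)
peak_speed = np.abs(vel).max()             # mm/s
```

The same three arrays per eye, time-shifted against each other, would also give a simple estimate of the bilateral blink synchrony the study reports.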

  6. A unified probabilistic framework for spontaneous facial action modeling and understanding.

    PubMed

    Tong, Yan; Chen, Jixu; Ji, Qiang

    2010-02-01

    Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.

  7. Miniature low-power inertial sensors: promising technology for implantable motion capture systems.

    PubMed

    Lambrecht, Joris M; Kirsch, Robert F

    2014-11-01

    Inertial and magnetic sensors are valuable for untethered, self-contained human movement analysis. Very recently, complete integration of inertial sensors, magnetic sensors, and processing into single packages has resulted in miniature, low-power devices that could feasibly be employed in an implantable motion capture system. We developed a wearable sensor system based on a commercially available system-in-package inertial and magnetic sensor. We characterized the accuracy of the system in measuring 3-D orientation (with and without magnetometer-based heading compensation) relative to a research-grade optical motion capture system. The root mean square error was less than 4° in dynamic and static conditions about all axes. Using four sensors, recording from seven degrees of freedom of the upper limb (shoulder, elbow, wrist) was demonstrated in one subject during reaching motions. Very high correlation and low error were found across all joints relative to the optical motion capture system. Findings were similar to previous publications using inertial sensors, but at a fraction of the power consumption and size. Such ultra-small, low-power sensors provide exciting new avenues for movement monitoring in various movement disorders, movement-based command interfaces for assistive devices, and implementation of kinematic feedback systems for assistive interventions such as functional electrical stimulation.

  8. Motion capture based identification of the human body inertial parameters.

    PubMed

    Venture, Gentiane; Ayusawa, Ko; Nakamura, Yoshihiko

    2008-01-01

    Identification of the body's inertias, masses, and centers of mass provides important data for simulating, monitoring, and understanding the dynamics of motion, and for personalizing rehabilitation programs. This paper proposes an original method to identify the inertial parameters of the human body, making use of motion capture data and contact force measurements. It allows in-vivo, painless estimation and monitoring of the inertial parameters. The method is described, and the experimental results obtained are then presented and discussed.
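Identification of this kind is typically posed as a regressor equation that is linear in the unknown inertial parameters and solved by least squares. The toy 1-DOF pendulum below illustrates the principle; the segment mass, length, trajectory, and noise level are hypothetical, and the paper's actual formulation uses full-body motion capture plus measured contact forces.

```python
import numpy as np

# For a 1-DOF pendulum the joint torque is linear in grouped inertial parameters:
#   tau = (m*l^2) * qdd + (m*l) * g * sin(q)
# so stacking samples gives Y @ phi = tau, solvable by linear least squares.
g = 9.81
m_true, l_true = 70.0, 0.9           # hypothetical segment mass (kg), length (m)
rng = np.random.default_rng(1)

t = np.linspace(0, 5, 500)
w1, w2 = 2 * np.pi * 0.5, 2 * np.pi * 1.3      # two frequencies for excitation
q = 0.8 * np.sin(w1 * t) + 0.3 * np.sin(w2 * t)
qdd = -0.8 * w1**2 * np.sin(w1 * t) - 0.3 * w2**2 * np.sin(w2 * t)
tau = m_true * l_true**2 * qdd + m_true * l_true * g * np.sin(q)
tau += rng.normal(0, 0.5, tau.shape)           # simulated measurement noise

Y = np.column_stack([qdd, g * np.sin(q)])      # regressor matrix
phi, *_ = np.linalg.lstsq(Y, tau, rcond=None)  # estimates of [m*l^2, m*l]

l_est = phi[0] / phi[1]
m_est = phi[1] / l_est
```

A sufficiently rich ("exciting") trajectory is essential: with a single pure sine, qdd and sin(q) are nearly collinear and the estimates degrade badly.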

  9. Learning the moves: the effect of familiarity and facial motion on person recognition across large changes in viewing format.

    PubMed

    Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E

    2006-01-01

    Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

  10. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    PubMed

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P<0.05). Facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of the various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
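The alignment-and-distance step can be sketched as a rigid partial Procrustes fit (Kabsch rotation plus translation) followed by the root mean square vertex distance. The random point cloud below is a stand-in for the conformed facial mesh; the study's meshes, vertex counts, and scaling conventions may differ.

```python
import numpy as np

def procrustes_rms(A, B):
    """Align B to A with a rigid (rotation + translation) partial Procrustes
    fit, then return the root mean square vertex distance (input units)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(0), B.mean(0)
    A0, B0 = A - ca, B - cb
    U, _, Vt = np.linalg.svd(B0.T @ A0)        # optimal rotation (Kabsch)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    B_aligned = B0 @ R + ca
    return float(np.sqrt(((A - B_aligned) ** 2).sum(1).mean()))

# Sanity check: identical meshes in different poses align to ~zero RMS distance.
rng = np.random.default_rng(0)
mesh = rng.normal(size=(500, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = mesh @ Rz.T + np.array([5.0, -2.0, 1.0])
rms = procrustes_rms(mesh, moved)
```

With real repeat captures the residual RMS is nonzero, and it is that residual which the paired t-test in the study compares across expressions.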

  11. Assessment of congruence and impingement of the hip joint in professional ballet dancers: a motion capture study.

    PubMed

    Charbonnier, Caecilia; Kolo, Frank C; Duthon, Victoria B; Magnenat-Thalmann, Nadia; Becker, Christoph D; Hoffmeyer, Pierre; Menetrey, Jacques

    2011-03-01

    Early hip osteoarthritis in dancers could be explained by femoroacetabular impingements. However, there is a lack of validated noninvasive methods and dynamic studies to ascertain impingement during motion. Moreover, it is unknown whether the femoral head and acetabulum are congruent in typical dancing positions. The practice of some dancing movements could cause a loss of hip joint congruence and recurrent impingements, which could lead to early osteoarthritis. Descriptive laboratory study. Eleven pairs of female dancers' hips were motion captured with an optical tracking system while performing 6 different dancing movements. The resulting computed motions were applied to patient-specific hip joint 3-dimensional models based on magnetic resonance images. While visualizing the dancer's hip in motion, the authors detected impingements using computer-assisted techniques. The range of motion and congruence of the hip joint were also quantified in those 6 recorded dancing movements. The frequency of impingement and subluxation varied with the type of movement. Four dancing movements (développé à la seconde, grand écart facial, grand écart latéral, and grand plié) seem to induce significant stress in the hip joint, according to the observed high frequency of impingement and amount of subluxation. The femoroacetabular translations were high (range, 0.93 to 6.35 mm). For almost all movements, the computed zones of impingement were mainly located in the superior or posterosuperior quadrant of the acetabulum, consistent with radiologically diagnosed zones of damage in the labrum. All dancers' hips were morphologically normal. Impingements and subluxations are frequently observed in typical ballet movements, causing cartilage hypercompression. These movements should be limited in frequency. The present study indicates that some dancing movements could damage the hip joint, which could lead to early osteoarthritis.

  12. Projectile Motion on an Inclined Misty Surface: I. Capturing and Analysing the Trajectory

    ERIC Educational Resources Information Center

    Ho, S. Y.; Foong, S. K.; Lim, C. H.; Lim, C. C.; Lin, K.; Kuppan, L.

    2009-01-01

    Projectile motion is usually the first non-uniform two-dimensional motion that students will encounter in a pre-university physics course. In this article, we introduce a novel technique for capturing the trajectory of projectile motion on an inclined Perspex plane. This is achieved by coating the Perspex with a thin layer of fine water droplets…

  13. Realistic facial animation generation based on facial expression mapping

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect a character's internal emotional states and responses to social communication. Though much effort has been devoted to generating realistic facial expressions, this remains a challenging topic owing to human sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation that reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  14. Automatic three-dimensional quantitative analysis for evaluation of facial movement.

    PubMed

    Hontanilla, B; Aubá, C

    2008-01-01

    The aim of this study is to present a new 3D capture system of facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflecting dots on the subject's face and video recording the subject with three infrared-light cameras while performing several facial movements such as smiling, mouth puckering, eye closure and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study has been performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities have been evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that this system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for evaluation of facial movements is demonstrated, as are its high intrarater and interrater reliability. It has advantages with respect to other systems that have been developed for evaluation of facial movements, such as short calibration time, short measuring time, and ease of use, and it provides not only distances but also velocities and areas. The FACIAL CLIMA system could therefore be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery. In this way, patients with facial paralysis could be compared between surgical centres, such that the effectiveness of facial reanimation operations could be evaluated.

  15. Kinematic differences between optical motion capture and biplanar videoradiography during a jump-cut maneuver

    PubMed Central

    Miranda, Daniel L; Rainbow, Michael J; Crisco, Joseph J; Fleming, Braden C

    2012-01-01

    Jumping and cutting activities are investigated in many laboratories attempting to better understand the biomechanics associated with non-contact ACL injury. Optical motion capture is widely used; however, it is subject to soft tissue artifact (STA). Biplanar videoradiography offers a unique approach to collecting skeletal motion without STA. The goal of this study was to compare how STA affects the six-degree-of-freedom motion of the femur and tibia during a jump-cut maneuver associated with non-contact ACL injury. Ten volunteers performed a jump-cut maneuver while their landing leg was imaged using optical motion capture (OMC) and biplanar videoradiography. The within-bone motion differences were compared using anatomical coordinate systems for the femur and tibia, respectively. The knee joint kinematic measurements were compared during two periods: before and after ground contact. Over the entire activity, the within-bone motion differences between the two motion capture techniques were significantly lower for the tibia than the femur for two of the rotational axes (flexion/extension, internal/external) and the origin. The OMC and biplanar videoradiography knee joint kinematics were in best agreement before landing. Kinematic deviations between the two techniques increased significantly after contact. This study provides information on the kinematic discrepancies between OMC and biplanar videoradiography that can be used to optimize methods employing both technologies for studying dynamic in vivo knee kinematics and kinetics during a jump-cut maneuver. PMID:23084785

  16. Evaluating Perceived Naturalness of Facial Expression After Fillers to the Nasolabial Folds and Lower Face With Standardized Video and Photography.

    PubMed

    Philipp-Dormston, Wolfgang G; Wong, Cindy; Schuster, Bernd; Larsson, Markus K; Podda, Maurizio

    2018-06-01

    Hyaluronic acid (HA) fillers are commonly used in treating facial wrinkles and folds but have not been studied with standardized methodology to include assessment of standard facial expressions. To assess perceived naturalness of facial expression after treatment with 2 HA fillers manufactured with XpresHAn Technology (also known as Optimal Balance Technology). Treatment was directed to the nasolabial folds (NLFs) and at least 1 additional lower face wrinkle or fold. Maintenance of naturalness, attractiveness, and age at 1 month after optimal treatment were assessed using video recordings and photographs capturing different facial animations. Global aesthetic improvement, subjects' satisfaction, and safety were also evaluated. The treatment was well tolerated. Naturalness of facial expression in motion was determined to be at least maintained in 95% of subjects. Attractiveness was enhanced in 89% of subjects and 79% of subjects were considered to look younger. Most subjects assessed their aesthetic appearance as improved and were satisfied with their treatment. Naturalness and attractiveness can be assessed using video recordings and photographs capturing different facial animations. XpresHAn Technology HA filler treatments create natural-looking results with high subject satisfaction.

  17. Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System.

    PubMed

    Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W

    2017-11-01

      The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle.   To determine the reliability of an automated markerless motion-capture system for scoring the LESS.   Cross-sectional study.   United States Military Academy.   A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg).   Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score.   We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons.   A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use
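The agreement statistics in this study follow directly from their definitions: Cohen's kappa corrects observed agreement for chance agreement estimated from the raters' marginals, while PABAK is simply 2*po - 1. The binary item scores below are hypothetical examples, not the study's data.

```python
from collections import Counter

def kappa_pabak(r1, r2):
    """Cohen's kappa and prevalence- and bias-adjusted kappa (PABAK)
    for two raters scoring the same trials (1 = movement error present)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))  # chance agreement
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    pabak = 2 * po - 1                                    # PABAK = 2*po - 1
    return kappa, pabak, po

# Hypothetical scores for one LESS item across 10 participants.
expert = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
system = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
k, pb, po = kappa_pabak(expert, system)
```

The gap between kappa and PABAK in the study's tables (0.45 vs 0.67 for the experts) is exactly what this adjustment produces when an item's error prevalence is far from 50%.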

  18. Reference equations of motion for automatic rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Henderson, David M.

    1992-01-01

    The analysis presented in this paper defines the reference coordinate frames, equations of motion, and control parameters necessary to model the relative motion and attitude of spacecraft in close proximity with another space system during the Automatic Rendezvous and Capture phase of an on-orbit operation. The relative docking port target position vector and the attitude control matrix are defined based upon an arbitrary spacecraft design. These translation and rotation control parameters could be used to drive the error signal input to the vehicle flight control system. Measurements for these control parameters would become the bases for an autopilot or feedback control system (FCS) design for a specific spacecraft.
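The paper's relative equations of motion are derived for an arbitrary spacecraft design, but a common linearized model for close-proximity relative motion about a circular-orbit target is the Clohessy-Wiltshire (Hill) equations. The sketch below integrates them and checks the classic drift-free initial condition; the mean motion and offsets are hypothetical, not taken from the paper.

```python
import numpy as np

# Hill/Clohessy-Wiltshire linearized relative motion about a circular-orbit
# target (illustrative stand-in for the paper's spacecraft-specific equations).
N = 0.0011          # mean motion of the target orbit (rad/s), ~LEO (hypothetical)

def cw_deriv(state):
    x, y, z, vx, vy, vz = state
    ax = 3 * N**2 * x + 2 * N * vy
    ay = -2 * N * vx
    az = -N**2 * z
    return np.array([vx, vy, vz, ax, ay, az])

def rk4_step(state, dt):
    k1 = cw_deriv(state)
    k2 = cw_deriv(state + 0.5 * dt * k1)
    k3 = cw_deriv(state + 0.5 * dt * k2)
    k4 = cw_deriv(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Drift-free initial condition (vy0 = -2*N*x0) gives a closed relative ellipse:
# after one orbital period the chaser returns to its starting offset.
period = 2 * np.pi / N
s = np.array([100.0, 0.0, 0.0, 0.0, -2 * N * 100.0, 0.0])   # m, m/s
dt = 1.0
for _ in range(int(period / dt)):
    s = rk4_step(s, dt)
```

A feedback control system of the kind the abstract describes would drive the difference between such a predicted relative state and the commanded docking-port state to zero.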

  19. A common framework for the analysis of complex motion? Standstill and capture illusions

    PubMed Central

    Dürsteler, Max R.

    2014-01-01

    A series of illusions was created by presenting stimuli, which consisted of two overlapping surfaces each defined by textures of independent visual features (i.e., modulation of luminance, color, depth, etc.). When presented concurrently with a stationary 2-D luminance texture, observers often fail to perceive the motion of an overlapping stereoscopically defined depth-texture. This illusory motion standstill arises due to a failure to represent two independent surfaces (one for luminance and one for depth textures) and motion transparency (the ability to perceive motion of both surfaces simultaneously). Instead the stimulus is represented as a single non-transparent surface taking on the stationary nature of the luminance-defined texture. By contrast, if it is the 2D-luminance-defined texture that is in motion, observers often perceive the stationary depth texture as also moving. In this latter case, the failure to represent the motion transparency of the two textures gives rise to illusory motion capture. Our past work demonstrated that the illusions of motion standstill and motion capture can occur for depth-textures that are rotating, expanding/contracting, or spiraling. Here I extend these findings to include stereo-shearing. More importantly, it is the motion (or lack thereof) of the luminance texture that determines how the motion of the depth texture will be perceived. This observation is strongly in favor of a single pathway for complex motion that operates on luminance-defined texture motion signals only. In addition, these complex motion illusions arise with chromatically-defined textures with smooth transitions between their colors. This suggests that with respect to color motion perception the complex-motion pathway is only able to accurately process signals from isoluminant colored textures with sharp transitions between colors, and/or moving at high speeds, which is conceivable if it relies on inputs from a hypothetical dual opponent color

  20. Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology

    PubMed Central

    Baskwill, Amanda J.; Belli, Patricia; Kelleher, Leila

    2017-01-01

    Background Gait analysis is the study of human locomotion. In massage therapy, this observation is part of an assessment process that informs treatment planning. Massage therapy students must apply the theory of gait assessment to simulated patients. At Humber College, the gait assessment module traditionally consists of a textbook reading and a three-hour, in-class session in which students perform gait assessment on each other. In 2015, Humber College acquired a three-dimensional motion capture system. Purpose The purpose was to evaluate the use of 3D motion capture in a gait assessment module compared to the traditional gait assessment module. Participants Semester 2 massage therapy students who were enrolled in Massage Theory 2 (n = 38). Research Design Quasi-experimental, wait-list comparison study. Intervention The intervention group participated in an in-class session with a Qualisys motion capture system. Main Outcome Measure(s) The outcomes included knowledge and application of gait assessment theory as measured by quizzes, and students’ satisfaction as measured through a questionnaire. Results There were no statistically significant differences in baseline and post-module knowledge between both groups (pre-module: p = .46; post-module: p = .63). There was also no difference between groups on the final application question (p = .13). The intervention group enjoyed the in-class session because they could visualize the content, whereas the comparison group enjoyed the interactivity of the session. The intervention group recommended adding the assessment of gait on their classmates to their experience. Both groups noted more time was needed for the gait assessment module. Conclusions Based on the results of this study, it is recommended that the gait assessment module combine both the traditional in-class session and the 3D motion capture system. PMID:28293329

  1. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693

  2. Biomechanics Analysis of Combat Sport (Silat) By Using Motion Capture System

    NASA Astrophysics Data System (ADS)

    Zulhilmi Kaharuddin, Muhammad; Badriah Khairu Razak, Siti; Ikram Kushairi, Muhammad; Syawal Abd. Rahman, Mohamed; An, Wee Chang; Ngali, Z.; Siswanto, W. A.; Salleh, S. M.; Yusup, E. M.

    2017-01-01

    ‘Silat’ is a Malay traditional martial art practiced at both amateur and professional levels. The intensity of its motions has spurred scientific research in biomechanics. The main purpose of this paper is to present the biomechanics methods used in the study of ‘silat’. Using a 3D Depth Camera motion capture system, two subjects each performed ‘Jurus Satu’ three times, with one subject set as the benchmark for the research. The videos are captured and processed with the 3D Depth Camera server system into 16 3D body-joint coordinates, which are then transformed into displacement, velocity and acceleration components using Microsoft Excel for data calculation and MATLAB for simulation of the body. The translated data serve as input to differentiate the two subjects’ execution of the ‘Jurus Satu’. Nine primary movements, with the addition of five secondary movements, are observed visually frame by frame in the simulation to identify the exact frame in which each movement takes place. Further analysis differentiates the two subjects’ execution by referring to the mean and standard deviation of the joints for each parameter. The findings provide useful data on joint kinematic parameters, help to improve the execution of ‘Jurus Satu’, and exhibit the process of learning a relatively unknown movement through the use of a motion capture system.

  3. A Novel Method to Compute Breathing Volumes via Motion Capture Systems: Design and Experimental Trials.

    PubMed

    Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio

    2017-10-01

    Respiratory assessment can be carried out using motion capture systems. A geometrical model is required in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on tetrahedral decomposition of the chest wall, integrated in a commercial motion capture system. Eight healthy volunteers were enrolled, and 30 seconds of quiet breathing data were collected from each. Results show better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = .94) than between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = .92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for further use of the new method in breathing assessment via motion capture systems.
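The prism idea can be illustrated on a triangulated marker surface over a flat reference plane: each triangle contributes a prism whose volume is its projected area times its mean height. This is a simplified stand-in for the paper's 82-prism chest model; the real marker layout and closed chest geometry are omitted.

```python
import numpy as np

def surface_volume(points, triangles, z0=0.0):
    """Volume enclosed between a triangulated marker surface and the flat
    reference plane z = z0, summed over per-triangle prisms."""
    vol = 0.0
    for i, j, k in triangles:
        p, q, r = points[i], points[j], points[k]
        # projected (x, y) area of the triangle, via the shoelace formula
        area = 0.5 * ((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))
        mean_h = (p[2] + q[2] + r[2]) / 3.0 - z0   # mean height above the plane
        vol += abs(area) * mean_h
    return vol

# Sanity check: a flat 1 x 1 "chest" surface at height 0.5 above the
# reference plane encloses a volume of 0.5.
pts = np.array([[0, 0, 0.5], [1, 0, 0.5], [1, 1, 0.5], [0, 1, 0.5]], float)
tris = [(0, 1, 2), (0, 2, 3)]
v = surface_volume(pts, tris)
```

Tracking this volume frame by frame over the marker trajectories yields the breathing volume signal from which tidal volume and respiratory rate can be read off.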

  4. A Virtual Reality Dance Training System Using Motion Capture Technology

    ERIC Educational Resources Information Center

    Chan, J. C. P.; Leung, H.; Tang, J. K. T.; Komura, T.

    2011-01-01

    In this paper, a new dance training system based on motion capture and virtual reality (VR) technologies is proposed. Our system is inspired by the traditional way of learning new movements: imitating the teacher's movements and listening to the teacher's feedback. A prototype of our proposed system is implemented, in which a student can imitate…

  5. Satellite attitude motion models for capture and retrieval investigations

    NASA Technical Reports Server (NTRS)

    Cochran, John E., Jr.; Lahr, Brian S.

    1986-01-01

    The primary purpose of this research is to provide mathematical models which may be used in the investigation of various aspects of the remote capture and retrieval of uncontrolled satellites. Emphasis has been placed on analytical models; however, to verify analytical solutions, numerical integration must be used. Also, for satellites of certain types, numerical integration may be the only practical or perhaps the only possible method of solution. First, to provide a basis for analytical and numerical work, uncontrolled satellites were categorized using criteria based on: (1) orbital motions, (2) external angular momenta, (3) internal angular momenta, (4) physical characteristics, and (5) the stability of their equilibrium states. Several analytical solutions for the attitude motions of satellite models were compiled, checked, corrected in some minor respects and their short-term prediction capabilities were investigated. Single-rigid-body, dual-spin and multi-rotor configurations are treated. To verify the analytical models and to see how the true motion of a satellite which is acted upon by environmental torques differs from its corresponding torque-free motion, a numerical simulation code was developed. This code contains a relatively general satellite model and models for gravity-gradient and aerodynamic torques. The spacecraft physical model for the code and the equations of motion are given. The two environmental torque models are described.

  6. Automatic facial animation parameters extraction in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulders object against a complex background. This paper presents an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate 3D head motion from point correspondences. The proposed algorithm extracts the human facial region by color segmentation and by intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract part of the FAPs. A special data structure is proposed to describe the deformable templates, reducing the time consumed in computing energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.

  7. Emotional facial activation induced by unconsciously perceived dynamic facial expressions.

    PubMed

    Kaiser, Jakob; Davey, Graham C L; Parkhouse, Thomas; Meeres, Jennifer; Scott, Ryan B

    2016-12-01

    Do facial expressions of emotion influence us when not consciously perceived? Methods to investigate this question have typically relied on brief presentation of static images. In contrast, real facial expressions are dynamic and unfold over several seconds. Recent studies demonstrate that gaze contingent crowding (GCC) can block awareness of dynamic expressions while still inducing behavioural priming effects. The current experiment tested for the first time whether dynamic facial expressions presented using this method can induce unconscious facial activation. Videos of dynamic happy and angry expressions were presented outside participants' conscious awareness while EMG measurements captured activation of the zygomaticus major (active when smiling) and the corrugator supercilii (active when frowning). Forced-choice classification of expressions confirmed they were not consciously perceived, while EMG revealed significant differential activation of facial muscles consistent with the expressions presented. This successful demonstration opens new avenues for research examining the unconscious emotional influences of facial expressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. The Perception of Facial Expressions and Stimulus Motion by Two- and Five-Month-Old Infants Using Holographic Stimuli.

    ERIC Educational Resources Information Center

    Nelson, Charles A.; Horowitz, Frances Degen

    1983-01-01

    Holograms of faces were used to study two- and five-month-old infants' discriminations of changes in facial expression and pose when the stimulus was seen to move or to remain stationary. While no evidence was found suggesting that infants preferred the moving face, evidence indicated that motion contrasts facilitate face recognition. (Author/RH)

  9. An effective attentional set for a specific colour does not prevent capture by infrequently presented motion distractors.

    PubMed

    Retell, James D; Becker, Stefanie I; Remington, Roger W

    2016-01-01

    An organism's survival depends on the ability to rapidly orient attention to unanticipated events in the world. Yet, the conditions needed to elicit such involuntary capture remain in doubt. Especially puzzling are spatial cueing experiments, which have consistently shown that involuntary shifts of attention to highly salient distractors are not determined by stimulus properties, but instead are contingent on attentional control settings induced by task demands. Do we always need to be set for an event to be captured by it, or is there a class of events that draw attention involuntarily even when unconnected to task goals? Recent results suggest that a task-irrelevant event will capture attention on first presentation, suggesting that salient stimuli that violate contextual expectations might automatically capture attention. Here, we investigated the role of contextual expectation by examining whether an irrelevant motion cue that was presented only rarely (∼3-6% of trials) would capture attention when observers had an active set for a specific target colour. The motion cue had no effect when presented frequently, but when rare produced a pattern of interference consistent with attentional capture. The critical dependence on the frequency with which the irrelevant motion singleton was presented is consistent with early theories of involuntary orienting to novel stimuli. We suggest that attention will be captured by salient stimuli that violate expectations, whereas top-down goals appear to modulate capture by stimuli that broadly conform to contextual expectations.

  10. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on the analysis of the human shape: we decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. The algorithm has been applied in a context where the user stands in front of a stereo camera pair. The process is completed once the user assumes a predefined initial posture, so that the main joints can be identified and the human model constructed. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
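    The curvature-based decomposition can be illustrated with SciPy's B-spline routines; a sketch (the authors' actual parameterization is not given in the abstract), verified on a circle, whose curvature is the reciprocal of its radius:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_curvature(points, smoothing=0.0, n_samples=200):
    """Curvature along a closed 2D contour via B-spline parameterization.

    points: (n, 2) array of ordered silhouette points. Returns curvature at
    n_samples parameter values; curvature extrema mark candidate
    body-part boundaries.
    """
    pts = np.vstack([points, points[:1]])          # close the contour
    tck, _ = splprep(pts.T, s=smoothing, per=True)  # periodic cubic B-spline
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    # signed-curvature magnitude of a parametric planar curve
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
```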

  11. Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data

    NASA Astrophysics Data System (ADS)

    Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia

    Today, computer models and simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic model based on data derived from anatomical and biomechanical studies of cadaverous specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors, because it does not always account for subject-specific anatomical differences. To address this, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate model of the anatomy can be produced while avoiding the scaling operation. This anatomical model, coupled with motion capture data, joint kinematics information, and muscle-tendon actuators, is finally used to create a subject-specific musculoskeletal model.

  12. Octopus: A Design Methodology for Motion Capture Wearables

    PubMed Central

    2017-01-01

    Human motion capture (MoCap) is widely recognised for its usefulness and application in different fields, such as health, sports, and leisure; therefore, its inclusion in current wearables (MoCap-wearables) is increasing, and it may be very useful in a context of intelligent objects interconnected with each other and to the cloud in the Internet of Things (IoT). However, capturing human movement adequately requires addressing difficult-to-satisfy requirements, which means that the applications that are possible with this technology are held back by a series of accessibility barriers, some technological and some regarding usability. To overcome these barriers and generate products with greater wearability that are more efficient and accessible, factors are compiled through a review of publications and market research. The result of this analysis is a design methodology called Octopus, which ranks these factors and schematises them. Octopus provides a tool that can help define design requirements for multidisciplinary teams, generating a common framework and offering a new method of communication between them. PMID:28809786

  13. Octopus: A Design Methodology for Motion Capture Wearables.

    PubMed

    Marin, Javier; Blanco, Teresa; Marin, Jose J

    2017-08-15

    Human motion capture (MoCap) is widely recognised for its usefulness and application in different fields, such as health, sports, and leisure; therefore, its inclusion in current wearables (MoCap-wearables) is increasing, and it may be very useful in a context of intelligent objects interconnected with each other and to the cloud in the Internet of Things (IoT). However, capturing human movement adequately requires addressing difficult-to-satisfy requirements, which means that the applications that are possible with this technology are held back by a series of accessibility barriers, some technological and some regarding usability. To overcome these barriers and generate products with greater wearability that are more efficient and accessible, factors are compiled through a review of publications and market research. The result of this analysis is a design methodology called Octopus, which ranks these factors and schematises them. Octopus provides a tool that can help define design requirements for multidisciplinary teams, generating a common framework and offering a new method of communication between them.

  14. Estimation of Ground Reaction Forces and Moments During Gait Using Only Inertial Motion Capture

    PubMed Central

    Karatsidis, Angelos; Bellusci, Giovanni; Schepers, H. Martin; de Zee, Mark; Andersen, Michael S.; Veltink, Peter H.

    2016-01-01

    Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which often are used to infer information for many musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully-ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized over normal walking speed as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%) and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse GRF&M (ρ = 0.826, rRMSE = 18.2%). Sensitivity analysis was performed on the effect of the cut-off frequency used in the filtering of the input kinematics, as well as the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing comparable accuracy with optical motion capture prediction. This approach enables applications that require estimation of the kinetics during walking outside the gait laboratory. PMID:28042857
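    The first step, deriving the total external force from the equations of motion, together with a smooth-transition split during double stance, can be sketched as follows (the cosine transition is an assumption; the paper's exact transition function and the moment computation are not reproduced):

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, z up

def total_external_force(mass, com_acceleration):
    """Newton's second law: during stance the total GRF must satisfy
    F_grf = m * (a_com - g)."""
    return mass * (np.asarray(com_acceleration) - GRAVITY)

def smooth_transition_weight(t, t_start, t_end):
    """Weight of the trailing foot during double stance, decaying smoothly
    from 1 at leading-foot heel strike (t_start) to 0 at trailing-foot
    toe-off (t_end). A cosine transition stands in for the paper's
    transition function, whose shape is not given in the abstract."""
    s = np.clip((t - t_start) / (t_end - t_start), 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * s))

def distribute_grf(F_total, t, t_start, t_end):
    """Split the total GRF between the trailing and leading foot."""
    w = smooth_transition_weight(t, t_start, t_end)
    return w * F_total, (1.0 - w) * F_total
```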

  15. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
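    One of the best-performing representations above, the Gabor wavelet family, can be sketched as a small filter bank (a generic illustration with assumed parameters, not the paper's exact filters):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor filter: a sinusoid windowed by a Gaussian,
    selective for orientation theta and spatial frequency 1/wavelength."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_features(image, wavelengths=(4, 8), n_orient=4, size=15, sigma=4.0):
    """Stack mean absolute filter responses as a simple feature vector."""
    feats = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(size, lam, np.pi * k / n_orient, sigma)
            resp = fftconvolve(image, kern, mode='same')
            feats.append(np.mean(np.abs(resp)))
    return np.array(feats)
```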

  16. Development of esMOCA Biomechanic, Motion Capture Instrumentation for Biomechanics Analysis

    NASA Astrophysics Data System (ADS)

    Arendra, A.; Akhmad, S.

    2018-01-01

    This study aims to build a motion capture instrument using inertial measurement unit sensors to assist in biomechanics analysis. The sensors used are accelerometers and gyroscopes. Sensor orientation is estimated by digital motion processing at each sensor node. Nine sensor nodes are attached to the upper limbs and connected to a PC via a wireless sensor network. Kinematic and inverse dynamic models of the upper limb were developed in Simulink SimMechanics. The kinematic model receives streaming data from the sensor nodes mounted on the limbs and outputs the pose of each limb, visualized on a display. The inverse dynamic model outputs the reaction force and reaction moment of each joint based on the limb motion input. Validation of the Simulink model against a mathematical model from mechanical analysis showed results that did not differ significantly.

  17. Exercise Sensing and Pose Recovery Inference Tool (ESPRIT) - A Compact Stereo-based Motion Capture Solution For Exercise Monitoring

    NASA Technical Reports Server (NTRS)

    Lee, Mun Wai

    2015-01-01

    Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.

  18. A low cost PSD-based monocular motion capture system

    NASA Astrophysics Data System (ADS)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's Xbox and Sony's PlayStation 2. The system is compact, low-cost, and requires only a one-time calibration at the factory. It includes a PSD (position sensitive detector) and active infrared (IR) LED markers placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. A micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system offers compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.
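    The idea of recovering range from intensity plus a pinhole ray from the 2D PSD position can be sketched as follows (the inverse-square model and the constant `k_led` are assumptions for illustration; the paper's calibration model is not given in the abstract):

```python
import numpy as np

def marker_3d_position(x_psd, y_psd, intensity, focal_length, k_led):
    """Recover a marker's 3D position from one PSD measurement.

    (x_psd, y_psd): marker image position on the detector (same units as
    focal_length); intensity: measured IR intensity; k_led: hypothetical
    radiometric constant calibrated once (intensity = k_led / range**2).
    """
    rng = np.sqrt(k_led / intensity)                    # inverse-square law
    direction = np.array([x_psd, y_psd, focal_length])  # pinhole ray
    return rng * direction / np.linalg.norm(direction)
```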

  19. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, their reduced size and weight and their wireless connectivity meet the requirement of minimal obtrusiveness and allow scientists to analyze children's motion in daily-life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that maps measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates representative of real physiological motions, referred to as functional frames (FFs). We also present a novel cost function for the Levenberg-Marquardt algorithm to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Results reported for a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
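    A common way to build a sensor-to-segment rotation from two calibration axes is Gram-Schmidt orthogonalization; this sketch illustrates what the calibration must deliver, though the paper instead estimates the rotation with a Levenberg-Marquardt cost function, which is not reproduced here:

```python
import numpy as np

def rotation_from_axes(primary, secondary):
    """Build a sensor-to-segment rotation from two axes measured in the
    sensor frame during calibration (e.g. gravity in a static pose and the
    mean angular-velocity axis of a flexion movement, both hypothetical
    choices here)."""
    z = primary / np.linalg.norm(primary)
    x = secondary - np.dot(secondary, z) * z   # remove the z component
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                         # right-handed frame
    return np.column_stack([x, y, z])          # columns = segment axes in sensor frame
```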

  20. A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour

    PubMed Central

    Lin, Yi; Lin, Han; Lin, Qiuping; Zhang, Jinxin; Zhu, Ping; Lu, Yao; Zhao, Zhi; Lv, Jiahong; Lee, Min Kyeong; Xu, Yue

    2016-01-01

    The influence of three-dimensional facial contour and of dynamic evaluation on the factors of smile esthetics is essential for facial beauty improvement. However, the kinematic features of the facial smile contour and the respective contributions of the soft tissue and underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and the nasolabial fold were combined into a “smile contour” delineating the overall facial topography that emerges prominently in smiling. We screened out the stable and unstable points on the smile contour using facial motion capture and curve fitting, before analyzing the correlation between the soft tissue coordinates and the hard tissue counterparts of the screened points. Our findings suggest that the mouth corner region was the most mobile area characterizing the smile expression, while the other areas remained relatively stable. Therefore, the perioral area should be evaluated dynamically, while the static assessment of the other parts of the smile contour contributes only partially to their dynamic esthetics. Moreover, unlike the end piece, the morphologies of the zygomatic area and the superior part of the nasolabial crease were determined largely by the skeleton at rest, implying that the latter can be altered by orthopedic or orthodontic correction and the former is better improved by cosmetic procedures to enhance the beauty of the smile. PMID:26911450

  1. A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour

    NASA Astrophysics Data System (ADS)

    Lin, Yi; Lin, Han; Lin, Qiuping; Zhang, Jinxin; Zhu, Ping; Lu, Yao; Zhao, Zhi; Lv, Jiahong; Lee, Min Kyeong; Xu, Yue

    2016-02-01

    The influence of three-dimensional facial contour and of dynamic evaluation on the factors of smile esthetics is essential for facial beauty improvement. However, the kinematic features of the facial smile contour and the respective contributions of the soft tissue and underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and the nasolabial fold were combined into a “smile contour” delineating the overall facial topography that emerges prominently in smiling. We screened out the stable and unstable points on the smile contour using facial motion capture and curve fitting, before analyzing the correlation between the soft tissue coordinates and the hard tissue counterparts of the screened points. Our findings suggest that the mouth corner region was the most mobile area characterizing the smile expression, while the other areas remained relatively stable. Therefore, the perioral area should be evaluated dynamically, while the static assessment of the other parts of the smile contour contributes only partially to their dynamic esthetics. Moreover, unlike the end piece, the morphologies of the zygomatic area and the superior part of the nasolabial crease were determined largely by the skeleton at rest, implying that the latter can be altered by orthopedic or orthodontic correction and the former is better improved by cosmetic procedures to enhance the beauty of the smile.

  2. A novel method to measure conspicuous facial pores using computer analysis of digital-camera-captured images: the effect of glycolic acid chemical peeling.

    PubMed

    Kakudo, Natsuko; Kushida, Satoshi; Tanaka, Nobuko; Minakata, Tatsuya; Suzuki, Kenji; Kusumoto, Kenji

    2011-11-01

    Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological esthetic surgery. Conspicuous facial pores are one of the most frequently encountered skin problems in women of all ages. This study was performed to analyze the effectiveness of reducing conspicuous facial pores using glycolic acid chemical peeling (GACP) based on a novel computer analysis of digital-camera-captured images. GACP was performed a total of five times at 2-week intervals in 22 healthy women. Computerized image analysis of conspicuous, open, and darkened facial pores was performed using the Robo Skin Analyzer CS 50. The number of conspicuous facial pores decreased significantly in 19 (86%) of the 22 subjects, with a mean improvement rate of 34.6%. The number of open pores decreased significantly in 16 (72%) of the subjects, with a mean improvement rate of 11.0%. The number of darkened pores decreased significantly in 18 (81%) of the subjects, with a mean improvement rate of 34.3%. GACP significantly reduces the number of conspicuous facial pores. The Robo Skin Analyzer CS 50 is useful for the quantification and analysis of 'pore enlargement', a subtle finding in dermatological esthetic surgery. © 2011 John Wiley & Sons A/S.

  3. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education, and industrial applications with the development of 3D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks, and MIDI sliders, which cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optic sensors and link its data to a 3D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  4. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, due to scaling error, was reduced to 0.77 mm, while the correlation of the errors with their distance from the origin dropped from 0.855 to 0.209. A simpler but less accurate absolute-accuracy compensation method using a tape measure over large distances was also tested; it produced a scaling compensation similar to that of the surveying method or of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type which has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
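    The RMSE metric and a least-squares correction of a uniform scaling error can be sketched as follows (the scale fit about the origin is an illustrative assumption, not the authors' exact compensation procedure):

```python
import numpy as np

def rmse(measured, reference):
    """Root mean square error between corresponding 3D marker coordinates."""
    d = np.asarray(measured) - np.asarray(reference)
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))

def best_scale(measured, reference):
    """Least-squares scale factor about the origin mapping the measured
    coordinates onto the reference (compensates a uniform scaling error)."""
    m = np.asarray(measured).ravel()
    r = np.asarray(reference).ravel()
    return np.dot(m, r) / np.dot(m, m)
```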

  5. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings make extensive use of emotions to convey messages. Emotion detection and face recognition can provide an interface between individuals and technologies. The most successful application of recognition analysis is the recognition of faces. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking face points and distances. It can automatically identify an observer's face movements and facial expressions in an image, capturing different aspects of emotion and facial expression.

  6. FuryExplorer: visual-interactive exploration of horse motion capture data

    NASA Astrophysics Data System (ADS)

    Wilhelm, Nils; Vögele, Anna; Zsoldos, Rebeka; Licka, Theresia; Krüger, Björn; Bernard, Jürgen

    2015-01-01

    The analysis of equine motion has a long tradition in the history of mankind. Equine biomechanics aims at detecting characteristics of horses indicative of good performance. In veterinary medicine especially, gait analysis plays an important role in diagnostics and in the emerging research on the long-term effects of athletic exercise. More recently, the incorporation of motion capture technology has contributed to an easier and faster analysis, with a trend from the mere observation of horses towards the analysis of multivariate time-oriented data. However, because this topic has only recently been raised in an interdisciplinary context, there is as yet a lack of visual-interactive interfaces to facilitate time series analysis and information discourse for the veterinary and biomechanics communities. In this design study, we bring visual analytics technology into these domains, which, to the best of our knowledge, has never been attempted before. Based on requirements developed in the domain characterization phase, we present a visual-interactive system for the exploration of horse motion data. The system provides multiple views which enable domain experts to explore frequent poses and motions, but also to drill down to interesting subsets, possibly containing unexpected patterns. We show the applicability of the system in two exploratory use cases: one on the comparison of different gait motions and one on the analysis of lameness recovery. Finally, we present the results of a summative user study conducted in the domain experts' environment. The overall outcome was a significant improvement in the effectiveness and efficiency of the domain experts' analytical workflow.

  7. Effects of glycolic acid chemical peeling on facial pigment deposition: evaluation using novel computer analysis of digital-camera-captured images.

    PubMed

    Kakudo, Natsuko; Kushida, Satoshi; Suzuki, Kenji; Kusumoto, Kenji

    2013-12-01

    Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological cosmetic medicine. However, the improvements seen with chemical peeling are often very minor, and it is difficult to conduct a quantitative assessment of pre- and post-treatment appearance. We report the pre- and post-peeling effects on facial pigment deposition using a novel computer analysis method for digital-camera-captured images. Glycolic acid chemical peeling was performed a total of 5 times at 2-week intervals in 23 healthy women. We conducted a computer image analysis utilizing the Robo Skin Analyzer CS 50 and Clinical Suite 2.1 and then reviewed each parameter for the area of facial pigment deposition pre- and post-treatment. The parameters were pigmentation size and four pigmentation categories: little pigmentation and three levels of marked pigmentation (Lv1, 2, and 3) based on detection threshold. Each parameter was measured, and the total area of facial pigmentation was calculated. The total area of little pigmentation and of marked pigmentation (Lv1) was significantly reduced. On the other hand, no significant difference was observed for the total area of marked pigmentation Lv2 and Lv3. This suggests that glycolic acid chemical peeling has an effect on small or light facial pigment deposition. As the Robo Skin Analyzer is useful for objectively quantifying and analyzing minor changes in facial skin, it is considered an effective tool for accumulating treatment evidence in the cosmetic and esthetic skin field. © 2013 Wiley Periodicals, Inc.

  8. A motion capture library for the study of identity, gender, and emotion perception from biological motion.

    PubMed

    Ma, Yingliang; Paterson, Helena M; Pollick, Frank E

    2006-02-01

    We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements of a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination, in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained using techniques based on Character Studio (plug-ins for 3D Studio MAX, AutoDesk, Inc.), MATLAB (The MathWorks, Inc.), or a combination of the two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect, and for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, for human character animation, and for studying how gender, emotion, and identity are encoded and decoded from human movement.

  9. Automated facial acne assessment from smartphone images

    NASA Astrophysics Data System (ADS)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that analyzes the health of facial skin from a smartphone image using cloud-based image processing techniques. The application uses the phone's camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment and lifestyle factors, automated facial acne assessment allows the app to be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  10. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.
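The final reconstruction stage above recovers 3D curves from a pair of calibrated views. As an illustrative sketch (not the authors' implementation), a single point can be triangulated from two calibrated cameras by linear DLT; the projection matrices and geometry below are invented for the example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image coordinates
    of the same point in each view.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applied along the two 2D skeleton curves, frame by frame, this kind of triangulation yields the time-varying 3D curves the abstract describes.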

  11. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics of motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used four cameras, but all were aligned along major axes (two along x, one each along y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.

  12. A small-world network model of facial emotion recognition.

    PubMed

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the framework of complex networks can effectively capture those patterns. We generated 81 facial emotion images (6 prototypes and 75 morphs) and then asked participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we constructed three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that the small-world connectivity of the facial emotion network is clearly different from that of these simulated networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
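The small-world properties the abstract reports (short average network distance, close connectivity) are the two standard small-world measures. A minimal pure-Python sketch, assuming an undirected graph stored as a dict mapping each node to its set of neighbours:

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path distance over all connected node pairs, via BFS."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """Mean local clustering coefficient: fraction of a node's neighbour
    pairs that are themselves connected (closed triangles)."""
    coeffs = []
    for u in adj:
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)
```

A small-world network combines a short average path length (like a random graph) with high clustering (unlike one); the paper's comparison against a random network tests exactly this combination.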

  13. Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

    PubMed Central

    Calderita, Luis Vicente; Bandera, Juan Pedro; Bustos, Pablo; Skiadopoulos, Andreas

    2013-01-01

    Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematics constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost. PMID:23845933

  14. Evaluation of a portable markerless finger position capture device: accuracy of the Leap Motion controller in healthy adults.

    PubMed

    Tung, James Y; Lulic, Tea; Gonzalez, Dave A; Tran, Johnathan; Dickerson, Clark R; Roy, Eric A

    2015-05-01

    Although motion analysis is frequently employed in upper limb motor assessment (e.g. visually-guided reaching), it is resource-intensive and limited to laboratory settings. This study evaluated the reliability and accuracy of a new markerless motion capture device, the Leap Motion controller, in measuring finger position. Testing conditions that influence reliability and agreement between the Leap and a research-grade motion capture system were examined. Nine healthy young adults pointed to 15 targets on a computer screen under two conditions: (1) touching the target (touch) and (2) 4 cm away from the target (no-touch). Leap data were compared to an Optotrak marker attached to the index finger. Across all trials, the root mean square (RMS) error of the Leap system was 17.30 ± 9.56 mm (mean ± SD), sampled at 65.47 ± 21.53 Hz. The percentage of viable trials and the mean sampling rate were significantly lower in the touch condition (44% versus 64%, p < 0.001; 52.02 ± 2.93 versus 73.98 ± 4.48 Hz, p = 0.003). While linear correlations were high (horizontal: r(2) = 0.995, vertical: r(2) = 0.945), the limits of agreement were large (horizontal: -22.02 to +26.80 mm, vertical: -29.41 to +30.14 mm). While not as precise as more sophisticated optical motion capture systems, the Leap Motion controller is sufficiently reliable for measuring motor performance in pointing tasks that do not require high positional accuracy (e.g. reaction time, Fitts', trails, bimanual coordination).
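The agreement statistics used in the abstract, RMS error and Bland-Altman 95% limits of agreement, are straightforward to reproduce. A small sketch, assuming paired samples from the two devices as NumPy arrays:

```python
import numpy as np

def rms_error(a, b):
    """Root mean square Euclidean distance between paired position
    samples; a, b are (n, d) arrays of n measurements in d dimensions."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=-1))))

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired 1D measurements:
    mean difference +/- 1.96 * SD of the differences."""
    d = a - b
    m, s = d.mean(), d.std(ddof=1)
    return float(m - 1.96 * s), float(m + 1.96 * s)
```

High correlation with wide limits of agreement, the pattern reported above, is exactly why both statistics are needed: correlation measures linear association, while the limits bound the expected disagreement of individual measurements.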

  15. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. Results are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
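The second-stage objective described above combines a wand-distance error with a pose error for a static calibration object. A sketch of just the wand term; the 500 mm marker separation is an assumed value (in the paper it would come from the CMM calibration):

```python
import numpy as np

WAND_LENGTH = 500.0  # CMM-calibrated marker separation in mm (assumed value)

def wand_residuals(pts_a, pts_b):
    """Per-frame error between the reconstructed marker separation and the
    calibrated wand length. pts_a, pts_b: (n, 3) arrays holding the 3D
    positions of the two wand markers over n frames."""
    dist = np.linalg.norm(pts_a - pts_b, axis=1)
    return dist - WAND_LENGTH

# A full calibration would minimise the sum of squared residuals (plus the
# static-object pose term) over all camera parameters, e.g. with a
# nonlinear least-squares solver such as scipy.optimize.least_squares.
```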

  16. Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing

    NASA Astrophysics Data System (ADS)

    Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel

    The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games and films to sports and medicine. However, captured motions normally address specific needs. As an effort toward adapting and reusing captured human motions in new tasks and environments and improving the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage that it provides faster deformations and more natural-looking motion results compared to goal-directed constraint-based methods found in the literature.

  17. Development of esMOCA RULA, Motion Capture Instrumentation for RULA Assessment

    NASA Astrophysics Data System (ADS)

    Akhmad, S.; Arendra, A.

    2018-01-01

    The purpose of this research is to build motion capture instrumentation using fused accelerometer and gyroscope sensors to assist in RULA assessment. Sensor orientation data are processed at every sensor node by a digital motion processor. Nine sensors are placed on the upper limb of the operator. A kinematic model was developed with SimMechanics in Simulink; it receives streaming data from the sensors via a wireless sensor network. The output of the kinematic model is the set of relative angles between upper-limb segments, visualized on the monitor. These angles are compared against the look-up table of the RULA worksheet to give the RULA score. The instrument's assessments were compared with assessments by human RULA assessors. In summary, there is no significant difference between the assessment by the instrument and the assessment by an assessor.
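The look-up step described above maps measured joint angles to worksheet scores. As a sketch of one such table entry, here is the standard RULA upper-arm base score by shoulder flexion/extension angle (the worksheet's adjustments for shoulder raise, abduction and arm support are omitted):

```python
def rula_upper_arm_score(flexion_deg):
    """RULA upper-arm base score from the shoulder flexion(+) /
    extension(-) angle in degrees, following the standard RULA worksheet:
    20 deg extension to 20 deg flexion -> 1; >20 deg extension or
    20-45 deg flexion -> 2; 45-90 deg flexion -> 3; >90 deg flexion -> 4."""
    if -20 <= flexion_deg <= 20:
        return 1
    if flexion_deg < -20 or flexion_deg <= 45:
        return 2
    if flexion_deg <= 90:
        return 3
    return 4
```

The instrument described in the record would evaluate lookups like this for each body segment and combine them into the final RULA grand score.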

  18. [An Introduction to A Newly-developed "Acupuncture Needle Manipulation Training-evaluation System" Based on Optical Motion Capture Technique].

    PubMed

    Zhang, Ao; Yan, Xing-Ke; Liu, An-Guo

    2016-12-25

    In the present paper, the authors introduce a newly-developed "Acupuncture Needle Manipulation Training-evaluation System" based on optical motion capture technique. It is composed of two parts, sensor and software, and overcomes some shortcomings of mechanical motion capture techniques. The device is able to analyze data on the operations of the pressing hand and the needle-insertion hand during acupuncture performance, and its software is available in personal computer (PC), Android, and Apple Internetwork Operating System (iOS) versions. It is capable of recording and analyzing information on any operator's needling manipulations, and is quite helpful for teachers in teaching, training and examining students in clinical practice.

  19. Quantitative facial asymmetry: using three-dimensional photogrammetry to measure baseline facial surface symmetry.

    PubMed

    Taylor, Helena O; Morrison, Clinton S; Linden, Olivia; Phillips, Benjamin; Chang, Johnny; Byrne, Margaret E; Sullivan, Stephen R; Forrest, Christopher R

    2014-01-01

    Although symmetry is hailed as a fundamental goal of aesthetic and reconstructive surgery, our tools for measuring this outcome have been limited and subjective. With the advent of three-dimensional photogrammetry, surface geometry can be captured, manipulated, and measured quantitatively. Until now, few normative data existed with regard to facial surface symmetry. Here, we present a method for reproducibly calculating overall facial symmetry and present normative data on 100 subjects. We enrolled 100 volunteers who underwent three-dimensional photogrammetry of their faces in repose. We collected demographic data on age, sex, and race and subjectively scored facial symmetry. We calculated the root mean square deviation (RMSD) between the native and reflected faces, reflecting about a plane of maximum symmetry. We analyzed the interobserver reliability of the subjective assessment of facial asymmetry and the quantitative measurements and compared the subjective and objective values. We also classified areas of greatest asymmetry as localized to the upper, middle, or lower facial thirds. This cluster of normative data was compared with a group of patients with subtle but increasing amounts of facial asymmetry. We imaged 100 subjects by three-dimensional photogrammetry. There was a poor interobserver correlation between subjective assessments of asymmetry (r = 0.56). There was a high interobserver reliability for quantitative measurements of facial symmetry RMSD calculations (r = 0.91-0.95). The mean RMSD for this normative population was found to be 0.80 ± 0.24 mm. Areas of greatest asymmetry were distributed as follows: 10% upper facial third, 49% central facial third, and 41% lower facial third. Precise measurement permitted discrimination of subtle facial asymmetry within this normative group and distinguished norms from patients with subtle facial asymmetry, with placement of RMSDs along an asymmetry ruler. Facial surface symmetry, which is poorly assessed
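The measure above is the RMSD between the native face and its reflection about the plane of maximum symmetry. A simplified landmark-based sketch, assuming the face has already been aligned so that x = 0 is that plane and that left/right landmark correspondences are known (the full method operates on dense surface geometry):

```python
import numpy as np

def symmetry_rmsd(landmarks, pairs):
    """RMSD between a landmark set and its mirror image about the x = 0
    midline. `landmarks` is (n, 3); `pairs` lists (i, j) index pairs where
    j is the contralateral counterpart of i (midline points pair with
    themselves)."""
    mirrored = landmarks.copy()
    mirrored[:, 0] *= -1  # reflect about the y-z plane
    diffs = landmarks[[a for a, _ in pairs]] - mirrored[[b for _, b in pairs]]
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))
```

A perfectly symmetric face scores 0; the normative mean of 0.80 mm reported above corresponds to the small residual asymmetry present even in typical faces.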

  20. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    NASA Technical Reports Server (NTRS)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed using open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than laboratory setups, it provides a means to produce quantitative comparison motion capture kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined.
METHODS: OpenCV is an open source computer vision library that provides the
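The methods section is cut off above. As an illustration of the kind of low-cost marker extraction such an OpenCV pipeline typically performs (thresholding followed by blob moments, cv2.inRange and cv2.moments in OpenCV terms), here is a NumPy stand-in that computes the intensity-weighted centroid of a bright marker in a grayscale frame:

```python
import numpy as np

def marker_centroid(frame, threshold=200):
    """Intensity-weighted centroid (row, col) of pixels at or above
    `threshold` in a grayscale frame -- the same quantity image moments
    give for a thresholded marker blob. Returns None if no pixel passes."""
    mask = frame >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    w = frame[rows, cols].astype(float)
    return float((rows * w).sum() / w.sum()), float((cols * w).sum() / w.sum())
```

Tracking each marker's centroid per frame across two or more COTS cameras yields the 2D trajectories from which kinematics can be reconstructed.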

  1. Capture by colour: evidence for dimension-specific singleton capture.

    PubMed

    Harris, Anthony M; Becker, Stefanie I; Remington, Roger W

    2015-10-01

    Previous work on attentional capture has shown the attentional system to be quite flexible in the stimulus properties it can be set to respond to. Several different attentional "modes" have been identified. Feature search mode allows attention to be set for specific features of a target (e.g., red). Singleton detection mode sets attention to respond to any discrepant item ("singleton") in the display. Relational search sets attention for the relative properties of the target in relation to the distractors (e.g., redder, larger). Recently, a new attentional mode was proposed that sets attention to respond to any singleton within a particular feature dimension (e.g., colour; Folk & Anderson, 2010). We tested this proposal against the predictions of previously established attentional modes. In a spatial cueing paradigm, participants searched for a colour target that was randomly either red or green. The nature of the attentional control setting was probed by presenting an irrelevant singleton cue prior to the target display and assessing whether it attracted attention. In all experiments, the cues were red, green, blue, or a white stimulus rapidly rotated (motion cue). The results of three experiments support the existence of a "colour singleton set," finding that all colour cues captured attention strongly, while motion cues captured attention only weakly or not at all. Notably, we also found that capture by motion cues in search for colour targets was moderated by their frequency; rare motion cues captured attention (weakly), while frequent motion cues did not.

  2. Relationships of a Circular Singer Arm Gesture to Acoustical and Perceptual Measures of Singing: A Motion Capture Study

    ERIC Educational Resources Information Center

    Brunkan, Melissa C.

    2016-01-01

    The purpose of this study was to validate previous research that suggests using movement in conjunction with singing tasks can affect intonation and perception of the task. Singers (N = 49) were video and audio recorded, using a motion capture system, while singing a phrase from a familiar song, first with no motion, and then while doing a low,…

  3. Effects of Objective 3-Dimensional Measures of Facial Shape and Symmetry on Perceptions of Facial Attractiveness.

    PubMed

    Hatch, Cory D; Wehby, George L; Nidey, Nichole L; Moreno Uribe, Lina M

    2017-09-01

    Meeting patient desires for enhanced facial esthetics requires that providers have standardized and objective methods to measure esthetics. The authors evaluated the effects of objective 3-dimensional (3D) facial shape and asymmetry measurements derived from 3D facial images on perceptions of facial attractiveness. The 3D facial images of 313 adults in Iowa were digitized with 32 landmarks, and objective 3D facial measurements capturing symmetric and asymmetric components of shape variation, centroid size, and fluctuating asymmetry were obtained from the 3D coordinate data using geo-morphometric analyses. Frontal and profile images of study participants were rated for facial attractiveness by 10 volunteers (5 women and 5 men) on a 5-point Likert scale and a visual analog scale. Multivariate regression was used to identify the effects of the objective 3D facial measurements on attractiveness ratings. Several objective 3D facial measurements had marked effects on attractiveness ratings. Shorter facial heights with protrusive chins, midface retrusion, faces with protrusive noses and thin lips, flat mandibular planes with deep labiomental folds, any cants of the lip commissures and floor of the nose, larger faces overall, and increased fluctuating asymmetry were rated as significantly (P < .001) less attractive. Perceptions of facial attractiveness can be explained by specific 3D measurements of facial shapes and fluctuating asymmetry, which have important implications for clinical practice and research. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Facial Expression Influences Face Identity Recognition During the Attentional Blink

    PubMed Central

    2014-01-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry—suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another. PMID:25286076

  5. Facial expression influences face identity recognition during the attentional blink.

    PubMed

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry-suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  6. Facial expression system on video using widrow hoff

    NASA Astrophysics Data System (ADS)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area. It applies the reading of human feeling to computer applications such as human-computer interaction, data compression, facial animation and facial detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method for training and testing images with the Adaptive Linear Neuron (ADALINE) approach. System performance is evaluated by two parameters: detection rate and false positive rate. The system's accuracy depends on good technique and on the face positions used in training and testing.
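The Widrow-Hoff rule referenced above is the least-mean-squares (LMS) update used to train ADALINE: the weight vector is nudged along each input, scaled by the prediction error. A minimal sketch on a toy linearly separable problem (the data below are invented for illustration, not the paper's image features):

```python
import numpy as np

def train_adaline(X, y, lr=0.01, epochs=100):
    """ADALINE trained with the Widrow-Hoff (LMS) rule, applied per sample:
    w <- w + lr * (target - w.x) * x."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            w += lr * (ti - w @ xi) * xi
    return w

def predict(w, X):
    """Threshold the linear activation to a +1 / -1 class label."""
    return np.where(X @ w >= 0.0, 1, -1)
```

In the paper's setting the inputs would be image-derived feature vectors rather than the toy 2D points here; the update rule is the same.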

  7. Contralateral botulinum toxin injection to improve facial asymmetry after acute facial paralysis.

    PubMed

    Kim, Jin

    2013-02-01

    The application of botulinum toxin to the healthy side of the face in patients with long-standing facial paralysis has been shown to be a minimally invasive technique that improves facial symmetry at rest and during facial motion, but our experience using botulinum toxin therapy for facial sequelae prompted the idea that botulinum toxin might also be useful in acute cases of facial paralysis to improve facial asymmetry. In cases in which medical or surgical treatment options are limited because of existing medical problems or advanced age, most patients with acute facial palsy are advised to await spontaneous recovery or are informed that no effective intervention exists. The purpose of this study was to evaluate the effect of botulinum toxin treatment for facial asymmetry in 18 patients with acute facial palsy who could not be optimally treated by medical or surgical management because of severe medical or other problems. From 2009 to 2011, nine patients with Bell's palsy, 5 with herpes zoster oticus and 4 with traumatic facial palsy (10 men and 8 women; age range, 22-82 yr; mean, 50.8 yr) participated in this study. Botulinum toxin A (Botox; Allergan Incorporated, Irvine, CA, USA) was injected using a tuberculin syringe with a 27-gauge needle. The amount injected per site varied from 2.5 to 3 U, and the total dose used per patient was 32 to 68 U (mean, 47.5 +/- 8.4 U). After administration of a single dose of botulinum toxin A on the nonparalyzed side of 18 patients with acute facial paralysis, marked relief of facial asymmetry was observed in 8 patients within 1 month of injection. Decreased facial asymmetry and strengthened facial function on the paralyzed side led to improved House-Brackmann (HB) and Sunnybrook (SB) grades within 6 months after injection. Use of botulinum toxin in acute facial palsy cases is of great value. Such therapy decreases the relative hyperkinesis contralateral to the paralysis, leading to more symmetric function. Especially in patients with medical

  8. Capturing Physiology of Emotion along Facial Muscles: A Method of Distinguishing Feigned from Involuntary Expressions

    NASA Astrophysics Data System (ADS)

    Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael

    The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variations along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variations into neutral, and voluntary and involuntary expressions of positive and negative emotive states. Initial results suggest (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variances along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.

  9. Applied research of embedded WiFi technology in the motion capture system

    NASA Astrophysics Data System (ADS)

    Gui, Haixia

    2012-04-01

    Embedded wireless WiFi technology is one of the current hot spots in wireless network applications. This paper first introduces the definition and characteristics of WiFi. Given WiFi's advantages, such as the absence of wiring, simple operation and stable transmission, the paper then presents a system design for applying embedded wireless WiFi technology in a motion capture system, and verifies the effectiveness of the design with WiFi-based wireless sensor hardware and software.

  10. Exploiting Motion Capture to Enhance Avoidance Behaviour in Games

    NASA Astrophysics Data System (ADS)

    van Basten, Ben J. H.; Jansen, Sander E. M.; Karamouzas, Ioannis

    Realistic simulation of interacting virtual characters is essential in computer games, training and simulation applications. The problem is very challenging since people are accustomed to real-world situations and thus can easily detect inconsistencies and artifacts in the simulations. Over the past twenty years several models have been proposed for simulating individuals, groups and crowds of characters. However, little effort has been made to actually understand how humans resolve interactions and avoid collisions in real life. In this paper, we exploit motion capture data to gain more insight into human-human interactions. We propose four measures to describe collision-avoidance behavior. Based on these measures, we extract simple rules that can be applied on top of existing agent- and force-based approaches, increasing the realism of the resulting simulations.

  11. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms as well as with state-of-the-art motion capture data processing software such as Vicon Blade.
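    The filter-bases idea above lends itself to a compact sketch. The following is a minimal numpy illustration, not the authors' formulation: the PCA-learned bases, the Huber weighting, and the iteratively reweighted least-squares (IRLS) solver are all illustrative stand-ins for their filter bases and robust-statistics machinery:

```python
import numpy as np

def learn_filter_bases(clean_windows, k):
    """PCA filter bases from pre-captured motion windows (n_windows x d)."""
    mu = clean_windows.mean(axis=0)
    _, _, vt = np.linalg.svd(clean_windows - mu, full_matrices=False)
    return mu, vt[:k]                        # mean (d,), bases (k, d)

def robust_denoise(noisy, mu, B, iters=10, c=1.345):
    """Project a noisy window onto the learned bases with Huber-weighted
    IRLS, so outlier samples are down-weighted instead of dragging the fit."""
    r = noisy - mu
    w = np.ones_like(r)
    a = np.zeros(B.shape[0])
    for _ in range(iters):
        BW = B * w                                     # weight each sample
        a = np.linalg.solve(BW @ B.T, BW @ r)          # weighted LS coefficients
        resid = r - B.T @ a
        s = np.median(np.abs(resid)) / 0.6745 + 1e-12  # robust scale estimate
        u = np.abs(resid) / (s * c)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)           # Huber weights
    return mu + B.T @ a
```

    In this toy setting, projecting onto bases learned from clean windows removes both Gaussian noise and a large spike, because the robust weights keep the outlier from biasing the fit.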

  12. Capturing Motion and Depth Before Cinematography.

    PubMed

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth.

  13. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.

    PubMed

    Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam

    2014-07-01

    Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used, accurate systems, but are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus for sagittal plane gait kinematics. Twenty healthy adults (9 male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect™ measurements were representative of normal gait, the Kinect™ generally under-estimated joint flexion and over-estimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was large. Kinect™ knee measurements were somewhat better than hip, but were not consistent enough for clinical assessment. Correlation between Kinect™ and Vicon stride timing was high and error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities and, with some minor adjustments, will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve Kinect™ sensitivity before it can be implemented for clinical use.

  14. 4D computed tomography scans for conformal thoracic treatment planning: is a single scan sufficient to capture thoracic tumor motion?

    NASA Astrophysics Data System (ADS)

    Tseng, Yolanda D.; Wootton, Landon; Nyflot, Matthew; Apisarnthanarax, Smith; Rengan, Ramesh; Bloch, Charles; Sandison, George; St. James, Sara

    2018-01-01

    Four-dimensional computed tomography (4DCT) scans are routinely used in radiation therapy to determine the internal treatment volume for targets that are moving (e.g., lung tumors). The use of these studies has allowed clinicians to create target volumes based upon the motion of the tumor during the imaging study. The purpose of this work is to determine if a target volume based on a single 4DCT scan at simulation is sufficient to capture thoracic motion. Phantom studies were performed to determine expected differences between volumes contoured on 4DCT scans and those on the evaluation CT scans (slow scans). Evaluation CT scans acquired during treatment of 11 patients were compared to the 4DCT scans used for treatment planning. The images were assessed to determine if the target remained within the target volume determined during the first 4DCT scan. A total of 55 slow scans were compared to the 11 planning 4DCT scans. Small differences were observed in phantom between the 4DCT volumes and the slow scan volumes, with a maximum of 2.9%, which can be attributed to minor differences in contouring and the ability of the 4DCT scan to adequately capture motion at the apex and base of the motion trajectory. Larger differences were observed in the patients studied, up to a maximum volume difference of 33.4%. These results demonstrate that a single 4DCT scan is not adequate to capture all thoracic motion throughout treatment.

  15. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
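    The pipeline described above (clip segmentation, data-dependent orthogonal bases, quantization, entropy coding) can be sketched for a single clip; this is a hedged illustration that uses an SVD of the clip matrix as the data-dependent transform and omits the entropy-coding stage:

```python
import numpy as np

def encode_clip(X, r, step=0.02):
    """Transform-code one mocap clip X (frames x DOFs): the SVD yields
    orthogonal bases adapted to the data (unlike a fixed DCT); keep r
    components and uniformly quantize the coefficients. Entropy coding of
    `q` and the bases, the final stage in the paper, is omitted here."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    coeff = U[:, :r] * s[:r]                       # per-frame coefficients
    q = np.round(coeff / step).astype(np.int32)    # uniform quantization
    return mu, Vt[:r], q, step

def decode_clip(mu, basis, q, step):
    """Dequantize the coefficients and invert the transform."""
    return mu + (q * step) @ basis
```

    For motion data that is well approximated by a few bases per clip, the integer coefficient matrix `q` is small and highly compressible.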

  16. Validation of Attitude and Heading Reference System and Microsoft Kinect for Continuous Measurement of Cervical Range of Motion Compared to the Optical Motion Capture System.

    PubMed

    Song, Young Seop; Yang, Kyung Yong; Youn, Kibum; Yoon, Chiyul; Yeom, Jiwoon; Hwang, Hyeoncheol; Lee, Jehee; Kim, Keewon

    2016-08-01

    To compare optical motion capture system (MoCap), attitude and heading reference system (AHRS) sensor, and Microsoft Kinect for the continuous measurement of cervical range of motion (ROM). Fifteen healthy adult subjects were asked to sit in front of the Kinect camera with optical markers and AHRS sensors attached to the body in a room equipped with an optical motion capture camera. Subjects were instructed to independently perform axial rotation followed by flexion/extension and lateral bending. Each movement was repeated 5 times while being measured simultaneously with the 3 devices. Using the MoCap system as the gold standard, the validity of AHRS and Kinect for measurement of cervical ROM was assessed by calculating correlation coefficient and Bland-Altman plot with 95% limits of agreement (LoA). MoCap and AHRS showed fair agreement (95% LoA<10°), while MoCap and Kinect showed less favorable agreement (95% LoA>10°) for measuring ROM in all directions. Intraclass correlation coefficient (ICC) values between MoCap and AHRS in the -40° to 40° range were excellent for flexion/extension and lateral bending (ICC>0.9). ICC values were also fair for axial rotation (ICC>0.8). ICC values between MoCap and Kinect system in the -40° to 40° range were fair for all motions. Our study showed the feasibility of using AHRS to measure cervical ROM during continuous motion with an acceptable range of error. The AHRS and Kinect systems can also be used for continuous monitoring of flexion/extension and lateral bending in the ordinary range.
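    The Bland-Altman 95% limits of agreement used as the validity criterion above can be computed in a few lines; this is the standard bias ± 1.96 SD construction, not code from the study:

```python
import numpy as np

def bland_altman_loa(a, b):
    """Bias and 95% limits of agreement (bias +/- 1.96 SD of the paired
    differences) between two measurement series of the same motion."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    With angle series sampled simultaneously from two devices, a LoA interval narrower than 10° corresponds to the paper's "fair agreement" criterion.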

  17. Understanding and Visualizing Multitasking and Task Switching Activities: A Time Motion Study to Capture Nursing Workflow

    PubMed Central

    Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L.; Migliore, Elaina M.; Chipps, Esther M.; Buck, Jacalyn

    2016-01-01

    A fundamental understanding of multitasking within nursing workflow is important in today’s dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (having communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all times; the total multitasking duration ranged from 14.6 to 109 minutes, averaging 44.98 minutes (18.63%). We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives. PMID:28269924

  18. On the correlation between motion data captured from low-cost gaming controllers and high precision encoders.

    PubMed

    Purkayastha, Sagar N; Byrne, Michael D; O'Malley, Marcia K

    2012-01-01

    Gaming controllers are attractive devices for research due to their onboard sensing capabilities and low-cost. However, a proper quantitative analysis regarding their suitability for use in motion capture, rehabilitation and as input devices for teleoperation and gesture recognition has yet to be conducted. In this paper, a detailed analysis of the sensors of two of these controllers, the Nintendo Wiimote and the Sony Playstation 3 Sixaxis, is presented. The acceleration and angular velocity data from the sensors of these controllers were compared and correlated with computed acceleration and angular velocity data derived from a high resolution encoder. The results show high correlation between the sensor data from the controllers and the computed data derived from the position data of the encoder. From these results, it can be inferred that the Wiimote is more consistent and better suited for motion capture applications and as an input device than the Sixaxis. The applications of the findings are discussed with respect to potential research ventures.
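    The comparison methodology above (correlating controller sensor data with velocities derived from encoder positions) can be sketched as follows; the central-difference differentiation is an assumption, since the abstract does not specify the numerical scheme used:

```python
import numpy as np

def velocity_correlation(gyro, encoder_angle, dt):
    """Pearson correlation between a controller's angular-rate samples and
    the angular velocity derived from encoder positions by numerical
    (central-difference) differentiation."""
    vel = np.gradient(encoder_angle, dt)   # derivative of encoder positions
    return np.corrcoef(gyro, vel)[0, 1]
```

    A correlation near 1 indicates the low-cost sensor tracks the encoder-derived ground truth closely, the criterion the paper uses to judge suitability for motion capture.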

  19. Understanding and Visualizing Multitasking and Task Switching Activities: A Time Motion Study to Capture Nursing Workflow.

    PubMed

    Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L; Migliore, Elaina M; Chipps, Esther M; Buck, Jacalyn

    2016-01-01

    A fundamental understanding of multitasking within nursing workflow is important in today's dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (having communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all times; the total multitasking duration ranged from 14.6 to 109 minutes, averaging 44.98 minutes (18.63%). We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives.

  20. A Facial Control Method Using Emotional Parameters in Sensibility Robot

    NASA Astrophysics Data System (ADS)

    Shibata, Hiroshi; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori

    The “Ifbot” robot communicates with people by considering its own “emotions”. Ifbot has many facial expressions that make communication enjoyable. These are used to express its internal emotions, purposes, and reactions to external stimuli, and for entertainment such as singing songs. All of these facial expressions have been developed manually by designers. Under this approach, every facial motion Ifbot should express must be designed by hand, which is not practical. We have therefore developed a system that converts Ifbot's emotions into facial expressions automatically. In this paper, we propose a method for creating Ifbot's facial expressions from emotional parameters, which represent its internal emotions computationally.

  1. Identification of pre-impact conditions of a cyclist involved in a vehicle-bicycle accident using an optimized MADYMO reconstruction combined with motion capture.

    PubMed

    Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu

    2018-05-01

    The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective function values for the walking configuration were significantly lower than those for the cycling configuration; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
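    The nondominated sorting at the core of NSGA-II, used here to rank candidate pre-impact configurations, can be illustrated in a few lines; this is the textbook Pareto-dominance test, not the study's MADYMO pipeline:

```python
def nondominated_front(points):
    """Indices of the first Pareto front under minimization: a point
    survives if no other point is at least as good in every objective and
    strictly better in at least one (the core ranking step of NSGA-II)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front
```

    In the accident reconstruction, each point would be a vector of objective values (e.g., contact-point and injury mismatches) for one simulated pre-impact configuration, and the first front holds the best trade-offs.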

  2. Three-dimensional quantification of cardiac surface motion: a newly developed three-dimensional digital motion-capture and reconstruction system for beating heart surgery.

    PubMed

    Watanabe, Toshiki; Omata, Sadao; Odamura, Motoki; Okada, Masahumi; Nakamura, Yoshihiko; Yokoyama, Hitoshi

    2006-11-01

    This study aimed to evaluate our newly developed 3-dimensional digital motion-capture and reconstruction system in an animal experiment setting and to characterize quantitatively the three regional cardiac surface motions, in the left anterior descending artery, right coronary artery, and left circumflex artery, before and after stabilization using a stabilizer. Six pigs underwent a full sternotomy. Three tiny metallic markers (diameter 2 mm) coated with a reflective material were attached on three regional cardiac surfaces (left anterior descending, right coronary, and left circumflex coronary artery regions). These markers were captured by two high-speed digital video cameras (955 frames per second) as 2-dimensional coordinates and reconstructed to 3-dimensional data points (about 480 xyz-position data per second) by a newly developed computer program. The remaining motion after stabilization ranged from 0.4 to 1.01 mm at the left anterior descending, 0.91 to 1.52 mm at the right coronary artery, and 0.53 to 1.14 mm at the left circumflex regions. Significant differences before and after stabilization were evaluated in maximum moving velocity (left anterior descending 456.7 +/- 178.7 vs 306.5 +/- 207.4 mm/s; right coronary artery 574.9 +/- 161.7 vs 446.9 +/- 170.7 mm/s; left circumflex 578.7 +/- 226.7 vs 398.9 +/- 192.6 mm/s; P < .0001) and maximum acceleration (left anterior descending 238.8 +/- 137.4 vs 169.4 +/- 132.7 m/s2; right coronary artery 315.0 +/- 123.9 vs 242.9 +/- 120.6 m/s2; left circumflex 307.9 +/- 151.0 vs 217.2 +/- 132.3 m/s2; P < .0001). This system is useful for a precise quantification of the heart surface movement. This helps us better understand the complexity of the heart, its motion, and the need for developing a better stabilizer for beating heart surgery.
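    Reconstructing 3D marker positions from two synchronized camera views, as this system does, is classically done by linear triangulation; the sketch below uses the standard direct linear transform (DLT), which may differ from the authors' reconstruction program:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover one 3D marker position from its
    2D image coordinates in two calibrated cameras, each described by a
    3x4 projection matrix. The answer is the null vector of the stacked
    cross-product constraints, found via SVD."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # homogeneous solution
    return X[:3] / X[3]
```

    Running this per marker, per frame, at the paper's ~480 reconstructions per second, yields the xyz trajectories from which velocity and acceleration are differentiated.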

  3. Eigen-disfigurement model for simulating plausible facial disfigurement after reconstructive surgery.

    PubMed

    Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K

    2015-03-27

    Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating the human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched on a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings of experienced medical professionals on the plausibility of simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements, and its simulation represents plausible outcomes of reconstructive surgery.
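    The Poisson guided-interpolation step (stitching a learned disfigurement onto a healthy face using its gradients as the guidance field) can be illustrated in one dimension; this is a didactic stand-in for the authors' 3D solver:

```python
import numpy as np

def poisson_stitch_1d(base, patch, i0):
    """Gradient-domain stitch: keep the patch's Laplacian (its shape
    detail) as the guidance field while pinning the two boundary samples
    to the base signal, then solve the tridiagonal Poisson system."""
    n = len(patch)
    lap = patch[:-2] - 2 * patch[1:-1] + patch[2:]   # guidance Laplacian
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1) + np.diag(np.ones(n - 3), -1))
    b = lap.astype(float).copy()
    b[0] -= base[i0]               # Dirichlet boundary: f[0] = base value
    b[-1] -= base[i0 + n - 1]      # Dirichlet boundary: f[n-1] = base value
    f = np.linalg.solve(A, b)
    out = base.astype(float).copy()
    out[i0 + 1:i0 + n - 1] = f
    return out
```

    The interior of the stitched region reproduces the patch's second differences exactly while the seam values match the base signal, which is what makes the blend seamless.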

  4. Using motion capture technology to measure the effects of magnification loupes on dental operator posture: A pilot study.

    PubMed

    Branson, B G; Abnos, R M; Simmer-Beck, M L; King, G W; Siddicky, S F

    2018-01-01

    Motion analysis has great potential for quantitatively evaluating dental operator posture and the impact of interventions such as magnification loupes on posture and subsequent development of musculoskeletal disorders. This study sought to determine the feasibility of motion capture technology for measurement of dental operator posture and examine the impact that different styles of magnification loupes had on dental operator posture. Forward and lateral head flexion were measured for two different operators while completing a periodontal probing procedure. Each was measured while wearing magnification loupes (flip-up [FL] and through-the-lens [TTL]) and basic safety lenses. Both operators exhibited reduced forward flexion range of motion (ROM) when using loupes (TTL or FL) compared to a baseline lens (BL). In contrast to forward flexion, no consistent trends were observed for lateral flexion between subjects. The researchers can report that it is possible to measure dental operator posture using motion capture technology. More study is needed to determine which type of magnification loupes (FL or TTL) is superior in improving dental operator posture. Some evidence suggested that the quality of operator posture is more likely related to the use of magnification loupes per se than to the specific type of lens worn.

  5. Virtual Character Animation Based on Affordable Motion Capture and Reconfigurable Tangible Interfaces.

    PubMed

    Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo

    2018-05-01

    Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with the animator's motion in the 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking the animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse-and-keyboard interface for both expert and non-expert users.

  6. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
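    Synthesizing a face from the bilinear model described above amounts to contracting the rank-3 core tensor with an identity weight vector and an expression weight vector; a minimal sketch, with dimensions chosen arbitrarily for illustration:

```python
import numpy as np

def bilinear_face(core, w_id, w_exp):
    """Evaluate a bilinear face model: contract the rank-3 core tensor
    (stacked vertex coordinates x identities x expressions) with identity
    weights and expression weights to synthesize one face mesh."""
    return np.einsum('vie,i,e->v', core, w_id, w_exp)
```

    One-hot weight vectors recover the fitted mesh of a specific person in a specific expression; interpolated weights blend identities and expressions, which is what enables applications such as expression transfer and retargeting.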

  7. A Single Camera Motion Capture System for Human-Computer Interaction

    NASA Astrophysics Data System (ADS)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body-parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
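    The tree of poses built by hierarchical pairwise clustering allows coarse-to-fine likelihood evaluation. The sketch below is a simplified stand-in: it splits poses along the direction of largest spread and greedily descends toward the child whose representative scores best, rather than propagating a full probability distribution as the paper does:

```python
import numpy as np

def build_tree(poses, leaf_size=1):
    """Hierarchical clustering stand-in: recursively split the pose set
    along its direction of largest spread, storing a representative
    (the mean pose) at every node."""
    def split(ids):
        node = {'ids': ids, 'center': poses[ids].mean(axis=0)}
        if len(ids) <= leaf_size:
            return node
        d = poses[ids] - node['center']
        axis = np.linalg.svd(d, full_matrices=False)[2][0]
        proj = d @ axis
        left = ids[proj <= np.median(proj)]
        right = ids[proj > np.median(proj)]
        if len(left) and len(right):
            node['children'] = [split(left), split(right)]
        return node
    return split(np.arange(len(poses)))

def best_pose(tree, poses, likelihood):
    """Coarse-to-fine search: descend into the child whose representative
    scores higher, then pick the best member of the reached leaf."""
    node = tree
    while 'children' in node:
        node = max(node['children'], key=lambda ch: likelihood(ch['center']))
    scores = [likelihood(poses[i]) for i in node['ids']]
    return node['ids'][int(np.argmax(scores))]
```

    Evaluating the likelihood only along one root-to-leaf path touches O(log n) cluster representatives instead of all n pose hypotheses, which is the efficiency argument behind the tree.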

  8. Objectifying Facial Expressivity Assessment of Parkinson's Patients: Preliminary Study

    PubMed Central

    Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity in PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Disgust-induced emotion ratings were significantly higher than those for the other emotions, so we focused our analysis on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD patients at different stages of disease progression were also observed. PMID:25478003

  9. Objectifying facial expressivity assessment of Parkinson's patients: preliminary study.

    PubMed

    Wu, Peng; Gonzalez, Isabel; Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity in PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Disgust-induced emotion ratings were significantly higher than those for the other emotions, so we focused our analysis on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD patients at different stages of disease progression were also observed.

  10. The identification of unfolding facial expressions.

    PubMed

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s⁻¹) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS; Ekman et al., 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.

  11. Motion-artifact-robust, polarization-resolved second-harmonic-generation microscopy based on rapid polarization switching with electro-optic Pockels cell and its application to in vivo visualization of collagen fiber orientation in human facial skin

    PubMed Central

    Tanaka, Yuji; Hase, Eiji; Fukushima, Shuichiro; Ogura, Yuki; Yamashita, Toyonobu; Hirao, Tetsuji; Araki, Tsutomu; Yasui, Takeshi

    2014-01-01

    Polarization-resolved second-harmonic-generation (PR-SHG) microscopy is a powerful tool for investigating collagen fiber orientation quantitatively with low invasiveness. However, the waiting time for the mechanical polarization rotation makes it too sensitive to motion artifacts and hence has hampered its use in various applications in vivo. In the work described in this article, we constructed a motion-artifact-robust, PR-SHG microscope based on rapid polarization switching at every pixel with an electro-optic Pockels cell (PC) in synchronization with step-wise raster scanning of the focus spot and alternate data acquisition of a vertical-polarization-resolved SHG signal and a horizontal-polarization-resolved one. The constructed PC-based PR-SHG microscope enabled us to visualize orientation mapping of dermal collagen fibers in human facial skin in vivo without the influence of motion artifacts. Furthermore, it suggested a location and/or age dependence of the collagen fiber orientation in human facial skin. The robustness to motion artifacts in the collagen orientation measurement will expand the application scope of SHG microscopy in dermatology and collagen-related fields. PMID:24761292

  12. A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests

    PubMed Central

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-01-01

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements. PMID:24064600
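The abstract notes that the MCS outputs 3D coordinates from two-dimensional image coordinates. For rectified, parallel cameras this reduces to classic disparity-based triangulation; the sketch below assumes that simplified geometry (a real multi-camera mocap system solves a more general calibration problem):

```python
def triangulate(x_left, y_left, x_right, focal, baseline):
    """Recover a 3D point from rectified stereo image coordinates (pixels).
    Assumes two identical, parallel cameras separated by `baseline` along X:
    disparity d = x_left - x_right, depth Z = f * B / d."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = focal * baseline / d
    x = z * x_left / focal
    y = z * y_left / focal
    return (x, y, z)

# A marker seen at (100, 50) px in the left image and (80, 50) px in the
# right, with a 1000 px focal length and 0.2 m baseline, sits 10 m away.
pt = triangulate(100.0, 50.0, 80.0, focal=1000.0, baseline=0.2)
```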

  13. A new position measurement system using a motion-capture camera for wind tunnel tests.

    PubMed

    Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok

    2013-09-13

    Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.
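The modal-identification step, frequency domain decomposition, peak-picks the largest singular value of the cross-spectral matrix at each frequency. A minimal two-channel sketch in pure Python — using the closed-form largest eigenvalue of a 2×2 Hermitian matrix in place of a general SVD, on synthetic data:

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fdd_peak(channels, fs):
    """Frequency-domain-decomposition sketch for two sensors: form the
    per-frequency cross-spectral matrix and peak-pick its largest eigenvalue;
    the peak frequency estimates the dominant natural frequency."""
    xa, xb = (dft(c) for c in channels)
    n = len(channels[0])
    best_k, best_val = 0, -1.0
    for k in range(1, n // 2):            # skip DC, keep positive frequencies
        a = abs(xa[k]) ** 2               # auto-spectra
        b = abs(xb[k]) ** 2
        c = xa[k] * xb[k].conjugate()     # cross-spectrum
        # Largest eigenvalue of the 2x2 Hermitian matrix [[a, c], [c*, b]].
        lam1 = (a + b) / 2 + math.sqrt(((a - b) / 2) ** 2 + abs(c) ** 2)
        if lam1 > best_val:
            best_k, best_val = k, lam1
    return best_k * fs / n

# Two sensors measuring the same noise-free 5 Hz mode at different amplitudes.
fs, n = 64, 64
t = [i / fs for i in range(n)]
ch1 = [math.sin(2 * math.pi * 5 * ti) for ti in t]
ch2 = [0.5 * math.sin(2 * math.pi * 5 * ti + 0.3) for ti in t]
f_hat = fdd_peak([ch1, ch2], fs)
```

At the peak frequency, the eigenvector of the same matrix gives the (unscaled) mode shape, which is how the study compared MCS and LDS mode shapes.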

  14. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.
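One common way to map tracked body points to simultaneous position, scale, and orientation control is a two-handed manipulation scheme. The sketch below is an assumed example of such a mapping, not NASA's actual implementation:

```python
import math

def two_hand_transform(left, right, ref_dist):
    """Map two tracked hand positions (x, y) to an object transform:
    midpoint -> position, hand separation relative to `ref_dist` -> uniform
    scale, inter-hand angle -> rotation. A hypothetical gesture mapping."""
    (lx, ly), (rx, ry) = left, right
    position = ((lx + rx) / 2, (ly + ry) / 2)
    dist = math.hypot(rx - lx, ry - ly)
    scale = dist / ref_dist
    rotation = math.degrees(math.atan2(ry - ly, rx - lx))
    return position, scale, rotation

# Hands at opposite corners of a square: object centered between them,
# enlarged (hands farther apart than the reference) and rotated 45 degrees.
pos, scale, rot = two_hand_transform((0.0, 0.0), (2.0, 2.0), ref_dist=2.0)
```

Because every hand degree of freedom maps to a visible object degree of freedom, the user gets the "natural mapping" the abstract describes without a special-purpose controller.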

  15. Automated and objective action coding of facial expressions in patients with acute facial palsy.

    PubMed

    Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando

    2015-05-01

    Aim of the present observational single center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11 % and there was no significant difference to patients (p = 0.128). At initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
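The paper's exact asymmetry formula is not given in the abstract; the sketch below shows one plausible per-AU percentage asymmetry averaged over action units, with illustrative intensities:

```python
def asymmetry_score(left_aus, right_aus):
    """Illustrative bilateral asymmetry score (an assumption, not the paper's
    published formula): per AU, |left - right| / max(left, right) in percent,
    averaged over AUs active on at least one side."""
    scores = []
    for l, r in zip(left_aus, right_aus):
        if max(l, r) > 0:
            scores.append(100.0 * abs(l - r) / max(l, r))
    return sum(scores) / len(scores) if scores else 0.0

# AU activation intensities (0 = absent) on the healthy vs paralyzed side.
healthy = [4, 3, 5, 2, 0]
paralyzed = [1, 0, 5, 2, 0]
score = asymmetry_score(healthy, paralyzed)
```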

  16. Three-dimensional analysis of facial morphology.

    PubMed

    Liu, Yun; Kau, Chung How; Talbert, Leslie; Pan, Feng

    2014-09-01

    The objectives of this study were to evaluate sexual dimorphism for facial features within Chinese and African American populations and to compare the facial morphology by sex between these 2 populations. Three-dimensional facial images were acquired by using the portable 3dMDface System, which captured 189 subjects from 2 population groups of Chinese (n = 72) and African American (n = 117). Each population was categorized into male and female groups for evaluation. All subjects in the groups were aged between 18 and 30 years and had no apparent facial anomalies. A total of 23 anthropometric landmarks were identified on the three-dimensional faces of each subject. Twenty-one measurements in 4 regions, including 19 distances and 2 angles, were not only calculated but also compared within and between the Chinese and African American populations. The Student's t-test was used to analyze each data set obtained within each subgroup. Distinct facial differences were presented between the examined subgroups. When comparing the sex differences of facial morphology in the Chinese population, significant differences were noted in 71.43% of the parameters calculated, and the same proportion was found in the African American group. The facial morphologic differences between the Chinese and African American populations were evaluated by sex. The proportion of significant differences in the parameters calculated was 90.48% for females and 95.24% for males between the 2 populations. The African American population had a more convex profile and greater face width than those of the Chinese population. Sexual dimorphism for facial features was presented in both the Chinese and African American populations. In addition, there were significant differences in facial morphology between these 2 populations.
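The measurement pipeline reduces to inter-landmark distances compared between groups. A self-contained sketch with toy coordinates — Welch's variant of the t statistic is used here for simplicity, whereas the paper reports Student's t-test:

```python
import math

def dist3(p, q):
    """Euclidean distance between two 3D landmarks."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def welch_t(xs, ys):
    """Welch's two-sample t statistic for comparing a facial measurement
    between two groups (a common substitute for Student's t)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Bizygomatic breadth (zy-zy): distance between left/right zygion, per subject.
group_a = [dist3((65, 0, 0), (-65, 0, 0)), dist3((66, 1, 0), (-66, 1, 0))]
group_b = [dist3((70, 0, 0), (-70, 0, 0)), dist3((71, 0, 1), (-71, 0, 1))]
t_stat = welch_t(group_a, group_b)
```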

  17. Fixation not required: characterizing oculomotor attention capture for looming stimuli.

    PubMed

    Lewis, Joanna E; Neider, Mark B

    2015-10-01

    A stimulus moving toward us, such as a ball being thrown in our direction or a vehicle braking suddenly in front of ours, often represents a stimulus that requires a rapid response. Using a visual search task in which target and distractor items were systematically associated with a looming object, we explored whether this sort of looming motion captures attention, the nature of such capture using eye movement measures (overt/covert), and the extent to which such capture effects are more closely tied to motion onset or the motion itself. We replicated previous findings indicating that looming motion induces response time benefits and costs during visual search Lin, Franconeri, & Enns(Psychological Science 19(7): 686-693, 2008). These differences in response times were independent of fixation, indicating that these capture effects did not necessitate overt attentional shifts to a looming object for search benefits or costs to occur. Interestingly, we found no differences in capture benefits and costs associated with differences in looming motion type. Combined, our results suggest that capture effects associated with looming motion are more likely subserved by covert attentional mechanisms rather than overt mechanisms, and attention capture for looming motion is likely related to motion itself rather than the onset of motion.

  18. Photo anthropometric variations in Japanese facial features: Establishment of large-sample standard reference data for personal identification using a three-dimensional capture system.

    PubMed

    Ogawa, Y; Wada, B; Taniguchi, K; Miyasaka, S; Imaizumi, K

    2015-12-01

    This study clarifies the anthropometric variations of the Japanese face by presenting large-sample population data of photo anthropometric measurements. The measurements can be used as standard reference data for the personal identification of facial images in forensic practices. To this end, three-dimensional (3D) facial images of 1126 Japanese individuals (865 male and 261 female Japanese individuals, aged 19-60 years) were acquired as samples using an already validated 3D capture system, and normative anthropometric analysis was carried out. In this anthropometric analysis, first, anthropological landmarks (22 items, i.e., entocanthion (en), alare (al), cheilion (ch), zygion (zy), gonion (go), sellion (se), gnathion (gn), labrale superius (ls), stomion (sto), labrale inferius (li)) were positioned on each 3D facial image (the direction of which had been adjusted to the Frankfort horizontal plane as the standard position for appropriate anthropometry), and anthropometric absolute measurements (19 items, i.e., bientocanthion breadth (en-en), nose breadth (al-al), mouth breadth (ch-ch), bizygomatic breadth (zy-zy), bigonial breadth (go-go), morphologic face height (se-gn), upper-lip height (ls-sto), lower-lip height (sto-li)) were exported using computer software for the measurement of a 3D digital object. Second, anthropometric indices (21 items, i.e., (se-gn)/(zy-zy), (en-en)/(al-al), (ls-li)/(ch-ch), (ls-sto)/(sto-li)) were calculated from these exported measurements. As a result, basic statistics, such as the mean values, standard deviations, and quartiles, and details of the distributions of these anthropometric results were shown. All of the results except "upper/lower lip ratio (ls-sto)/(sto-li)" were normally distributed. They were acquired as carefully as possible employing a 3D capture system and 3D digital imaging technologies. The sample of images was much larger than any Japanese sample used before for the purpose of personal identification. The
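Each anthropometric index above is a ratio of inter-landmark distances computed from Frankfort-aligned 3D coordinates. A minimal sketch with hypothetical landmark positions (the dictionary keys and millimeter values are illustrative, not study data):

```python
import math

def distance(landmarks, a, b):
    """Inter-landmark distance, e.g. distance(lm, 'zy_r', 'zy_l') for zy-zy."""
    return math.dist(landmarks[a], landmarks[b])

def facial_index(landmarks):
    """Facial index (se-gn)/(zy-zy): morphologic face height over bizygomatic
    breadth, one of the index items computed in the study."""
    return distance(landmarks, "se", "gn") / distance(landmarks, "zy_r", "zy_l")

# Toy landmark coordinates in mm, assumed already Frankfort-aligned.
lm = {
    "se": (0.0, 60.0, 0.0),    # sellion
    "gn": (0.0, -60.0, 0.0),   # gnathion
    "zy_r": (70.0, 0.0, 0.0),  # right zygion
    "zy_l": (-70.0, 0.0, 0.0), # left zygion
}
idx = facial_index(lm)
```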

  19. Effects of damping head movement and facial expression in dyadic conversation using real–time facial expression tracking and synthesized avatars

    PubMed Central

    Boker, Steven M.; Cohn, Jeffrey F.; Theobald, Barry-John; Matthews, Iain; Brick, Timothy R.; Spies, Jeffrey R.

    2009-01-01

    When people speak with one another, they tend to adapt their head movements and facial expressions in response to each other's head movements and facial expressions. We present an experiment in which confederates' head movements and facial expressions were motion tracked during videoconference conversations, an avatar face was reconstructed in real time, and naive participants spoke with the avatar face. No naive participant guessed that the computer-generated face was not video. Confederates' facial expressions, vocal inflections and head movements were attenuated at 1 min intervals in a fully crossed experimental design. Attenuated head movements led to increased head nods and lateral head turns, and attenuated facial expressions led to increased head nodding in both naive participants and confederates. Together, these results are consistent with a hypothesis that the dynamics of head movements in dyadic conversation include a shared equilibrium. Although both conversational partners were blind to the manipulation, when apparent head movement of one conversant was attenuated, both partners responded by increasing the velocity of their head movements. PMID:19884143

  20. Nerve crush but not displacement-induced stretch of the intra-arachnoidal facial nerve promotes facial palsy after cerebellopontine angle surgery.

    PubMed

    Bendella, Habib; Brackmann, Derald E; Goldbrunner, Roland; Angelov, Doychin N

    2016-10-01

    Little is known about the reasons for occurrence of facial nerve palsy after removal of cerebellopontine angle tumors. Since the intra-arachnoidal portion of the facial nerve is considered to be so vulnerable that even the slightest tension or pinch may result in ruptured axons, we tested whether a graded stretch or controlled crush would affect the postoperative motor performance of the facial (vibrissal) muscle in rats. Thirty Wistar rats, divided into five groups (one with intact controls and four with facial nerve lesions), were used. Under inhalation anesthesia, the occipital squama was opened, the cerebellum gently retracted to the left, and the intra-arachnoidal segment of the right facial nerve exposed. A mechanical displacement of the brainstem with 1 or 3 mm toward the midline or an electromagnet-controlled crush of the facial nerve with a tweezers at a closure velocity of 50 and 100 mm/s was applied. On the next day, whisking motor performance was determined by video-based motion analysis. Even the larger (with 3 mm) mechanical displacement of the brainstem had no harmful effect: The amplitude of the vibrissal whisks was in the normal range of 50°-60°. On the other hand, even the light nerve crush (50 mm/s) injured the facial nerve and resulted in paralyzed vibrissal muscles (amplitude of 10°-15°). We conclude that, contrary to the generally acknowledged assumptions, it is the nerve crush but not the displacement-induced stretching of the intra-arachnoidal facial trunk that promotes facial palsy after cerebellopontine angle surgery in rats.

  1. Design and development of an upper extremity motion capture system for a rehabilitation robot.

    PubMed

    Nanda, Pooja; Smith, Alan; Gebregiorgis, Adey; Brown, Edward E

    2009-01-01

    Human robot interaction is a new and rapidly growing field and its application in the realm of rehabilitation and physical care is a major focus area of research worldwide. This paper discusses the development and implementation of a wireless motion capture system for the human arm which can be used for physical therapy or real-time control of a robotic arm, among many other potential applications. The system comprises a mechanical brace with rotary potentiometers inserted at the different joints to capture position data. It also contains surface electrodes which acquire electromyographic signals through the CleveMed BioRadio device. The brace interfaces with a software subsystem which displays real-time data signals. The software includes a 3D arm model which imitates the actual movement of a subject's arm under testing. This project began as part of the Rochester Institute of Technology's Undergraduate Multidisciplinary Senior Design curriculum and has been integrated into the overall research objectives of the Biomechatronic Learning Laboratory.
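Reading joint angles from rotary potentiometers amounts to a linear map from ADC counts to degrees. The calibration constants below (10-bit ADC, 300° pot, zero-centered) are assumptions for illustration, since the brace's actual calibration is not given:

```python
def pot_to_angle(adc_value, adc_max=1023, angle_range=300.0, offset=-150.0):
    """Convert a rotary potentiometer's ADC reading to a joint angle in
    degrees, assuming a linear 300-degree pot on a 10-bit ADC, centered at 0.
    These constants are hypothetical, not the brace's real calibration."""
    return offset + (adc_value / adc_max) * angle_range

def arm_angles(shoulder_adc, elbow_adc):
    """Joint angles for a two-joint arm segment from two pot readings."""
    return pot_to_angle(shoulder_adc), pot_to_angle(elbow_adc)

# Mid-scale reading -> neutral shoulder; three-quarter scale -> flexed elbow.
shoulder, elbow = arm_angles(511.5, 767.25)
```

A 3D arm model like the one described would then drive its joint rotations directly from these streamed angles.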

  2. Markerless motion capture systems as training device in neurological rehabilitation: a systematic review of their use, application, target population and efficacy.

    PubMed

    Knippenberg, Els; Verbrugghe, Jonas; Lamers, Ilse; Palmaers, Steven; Timmermans, Annick; Spooren, Annemie

    2017-06-24

    Client-centred task-oriented training is important in neurological rehabilitation but is time consuming and costly in clinical practice. The use of technology, especially motion capture systems (MCS) which are low cost and easy to apply in clinical practice, may be used to support this kind of training, but knowledge and evidence of their use for training is scarce. The present review aims to investigate 1) which motion capture systems are used as training devices in neurological rehabilitation, 2) how they are applied, 3) in which target population, 4) what the content of the training is, and 5) what the efficacy of training with MCS is. A computerised systematic literature review was conducted in four databases (PubMed, Cinahl, Cochrane Database and IEEE). The following MeSH terms and key words were used: Motion, Movement, Detection, Capture, Kinect, Rehabilitation, Nervous System Diseases, Multiple Sclerosis, Stroke, Spinal Cord, Parkinson Disease, Cerebral Palsy and Traumatic Brain Injury. Van Tulder's Quality assessment was used to score the methodological quality of the selected studies. The descriptive analysis is reported by MCS, target population, training parameters and training efficacy. Eighteen studies were selected (mean Van Tulder score = 8.06 ± 3.67). Based on methodological quality, six studies were selected for analysis of training efficacy. The most commonly used MCS was the Microsoft Kinect, and training was mostly conducted in upper limb stroke rehabilitation. Training programs varied in intensity, frequency and content. None of the studies reported an individualised training program based on a client-centred approach. Motion capture systems are training devices with potential in neurological rehabilitation to increase motivation during training and may assist improvement on one or more International Classification of Functioning, Disability and Health (ICF) levels. Although client-centred task-oriented training is important in neurological rehabilitation

  3. Involvement of the ventral premotor cortex in controlling image motion of the hand during performance of a target-capturing task.

    PubMed

    Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun

    2005-07-01

    The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.

  4. Influence of gravity upon some facial signs.

    PubMed

    Flament, F; Bazin, R; Piot, B

    2015-06-01

    Facial clinical signs and their integration form the basis of the perception others have of us, notably the age they imagine we are. Facial modifications in motion and their objective measurements before and after application of a skin regimen are essential to extend our capacity to describe efficacy in facial dynamics. Quantification of facial modifications vis à vis gravity will allow us to address the 'control' of facial shape in daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were successively taken in upright and supine positions within a short time interval. All these pictures were then reframed - any bias due to facial features was avoided when evaluating one single sign - for clinical quotation by trained experts of several facial signs regarding published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared to an upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modifies signs of the lower half of the face whereas those of the upper half appear unchanged or slightly accentuated. These changes appear much more marked in the older groups, where some deep labial folds almost vanish. These alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the facial skin vis à vis gravity induces rapid rearrangements among which changes in tensional forces within and across the face, motility of interstitial free water in underlying skin tissue and/or alterations of facial Langer lines likely play a significant role. © 2015 Society of Cosmetic Scientists and the Société Fran

  5. Anthropometric Study of Three-Dimensional Facial Morphology in Malay Adults

    PubMed Central

    Majawit, Lynnora Patrick; Mohd Razi, Roziana

    2016-01-01

    Objectives To establish the three-dimensional (3D) facial soft tissue morphology of adult Malaysian subjects of the Malay ethnic group; and to determine the morphological differences between the genders, using a non-invasive stereo-photogrammetry 3D camera. Material and Methods One hundred and nine subjects participated in this research, 54 Malay men and 55 Malay women, aged 20–30 years with healthy BMI and with no adverse skeletal deviation. Twenty-three facial landmarks were identified on 3D facial images captured using a VECTRA M5-360 Head System (Canfield Scientific Inc, USA). Two angular, 3 ratio and 17 linear measurements were identified using Canfield Mirror imaging software. Intra- and inter-examiner reliability tests were carried out using 10 randomly selected images, analyzed using the intra-class correlation coefficient (ICC). Multivariate analysis of variance (MANOVA) was carried out to investigate morphologic differences between genders. Results ICC scores were generally good for both intra-examiner (range 0.827–0.987) and inter-examiner reliability (range 0.700–0.983) tests. Generally, all facial measurements were larger in men than women, except the facial profile angle which was larger in women. Clinically significant gender dimorphisms existed in biocular width, nose height, nasal bridge length, face height and lower face height values (mean difference > 3 mm). Clinical significance was set at 3 mm. Conclusion Facial soft tissue morphological values can be gathered efficiently and measured effectively from images captured by a non-invasive stereo-photogrammetry 3D camera. Adult men in Malaysia when compared to women had a wider distance between the eyes, a longer and more prominent nose and a longer face. PMID:27706220
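The reliability analysis rests on the intra-class correlation coefficient. Below, a self-contained ICC(2,1) computation (two-way random effects, absolute agreement, single rater) built from the standard ANOVA sums of squares; the abstract does not state which ICC form the study used, so this is one common choice:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of subjects, each a list of the k raters' scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    sst = sum((x - grand) ** 2 for row in ratings for x in row)
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters measuring the same facial distance on four subjects; the second
# rater reads consistently 1 mm higher, which absolute agreement penalizes.
icc = icc_2_1([[1, 2], [2, 3], [3, 4], [4, 5]])
```

With perfectly identical raters the score is exactly 1.0; the systematic 1 mm offset above pulls it down to 10/13 ≈ 0.77, inside the "generally good" band reported.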

  6. The efficacy of interactive, motion capture-based rehabilitation on functional outcomes in an inpatient stroke population: a randomized controlled trial.

    PubMed

    Cannell, John; Jovic, Emelyn; Rathjen, Amy; Lane, Kylie; Tyson, Anna M; Callisaya, Michele L; Smith, Stuart T; Ahuja, Kiran Dk; Bird, Marie-Louise

    2018-02-01

    To compare the efficacy of novel interactive, motion capture-rehabilitation software to usual care stroke rehabilitation on physical function. Randomized controlled clinical trial. Two subacute hospital rehabilitation units in Australia. In all, 73 people less than six months after stroke with reduced mobility and clinician determined capacity to improve. Both groups received functional retraining and individualized programs for up to an hour, on weekdays for 8-40 sessions (dose matched). For the intervention group, this individualized program used motivating virtual reality rehabilitation and novel gesture controlled interactive motion capture software. For usual care, the individualized program was delivered in a group class on one unit and by rehabilitation assistant 1:1 on the other. Primary outcome was standing balance (functional reach). Secondary outcomes were lateral reach, step test, sitting balance, arm function, and walking. Participants (mean 22 days post-stroke) attended mean 14 sessions. Both groups improved (mean (95% confidence interval)) on primary outcome functional reach (usual care 3.3 (0.6 to 5.9), intervention 4.1 (-3.0 to 5.0) cm) with no difference between groups (P = 0.69) on this or any secondary measures. No differences between the rehabilitation units were seen except in lateral reach (less affected side) (P = 0.04). No adverse events were recorded during therapy. Interactive, motion capture rehabilitation for inpatients post stroke produced functional improvements that were similar to those achieved by usual care stroke rehabilitation, safely delivered by either a physical therapist or a rehabilitation assistant.

  7. The efficacy of interactive, motion capture-based rehabilitation on functional outcomes in an inpatient stroke population: a randomized controlled trial

    PubMed Central

    Cannell, John; Jovic, Emelyn; Rathjen, Amy; Lane, Kylie; Tyson, Anna M; Callisaya, Michele L; Smith, Stuart T; Ahuja, Kiran DK; Bird, Marie-Louise

    2017-01-01

    Objective: To compare the efficacy of novel interactive, motion capture-rehabilitation software to usual care stroke rehabilitation on physical function. Design: Randomized controlled clinical trial. Setting: Two subacute hospital rehabilitation units in Australia. Participants: In all, 73 people less than six months after stroke with reduced mobility and clinician determined capacity to improve. Interventions: Both groups received functional retraining and individualized programs for up to an hour, on weekdays for 8–40 sessions (dose matched). For the intervention group, this individualized program used motivating virtual reality rehabilitation and novel gesture controlled interactive motion capture software. For usual care, the individualized program was delivered in a group class on one unit and by rehabilitation assistant 1:1 on the other. Main measures: Primary outcome was standing balance (functional reach). Secondary outcomes were lateral reach, step test, sitting balance, arm function, and walking. Results: Participants (mean 22 days post-stroke) attended mean 14 sessions. Both groups improved (mean (95% confidence interval)) on primary outcome functional reach (usual care 3.3 (0.6 to 5.9), intervention 4.1 (−3.0 to 5.0) cm) with no difference between groups (P = 0.69) on this or any secondary measures. No differences between the rehabilitation units were seen except in lateral reach (less affected side) (P = 0.04). No adverse events were recorded during therapy. Conclusion: Interactive, motion capture rehabilitation for inpatients post stroke produced functional improvements that were similar to those achieved by usual care stroke rehabilitation, safely delivered by either a physical therapist or a rehabilitation assistant. PMID:28719977

  8. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2015-04-01

    There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression in both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Monoscopic photogrammetry to obtain 3D models by a mobile device: a method for making facial prostheses.

    PubMed

    Salazar-Gamarra, Rodrigo; Seelaus, Rosemary; da Silva, Jorge Vicente Lopes; da Silva, Airton Moreira; Dib, Luciano Lauria

    2016-05-25

    The aim of this study is to present the development of a new technique to obtain 3D models using photogrammetry by a mobile device and free software, as a method for making digital facial impressions of patients with maxillofacial defects for the final purpose of 3D printing of facial prostheses. With the use of a mobile device, free software and a photo capture protocol, 2D captures of the anatomy of a patient with a facial defect were transformed into a 3D model. The resultant digital models were evaluated for visual and technical integrity. The technical process and resultant models were described and analyzed for technical and clinical usability. Generating 3D models to make digital face impressions was possible by the use of photogrammetry with photos taken by a mobile device. The facial anatomy of the patient was reproduced by a *.3dp and a *.stl file with no major irregularities. 3D printing was possible. An alternative method for capturing facial anatomy is possible using a mobile device for the purpose of obtaining and designing 3D models for facial rehabilitation. Further studies must be conducted to compare 3D modeling among different techniques and systems. Free software and low-cost equipment could be a feasible solution for obtaining 3D models for making digital face impressions for maxillofacial prostheses, improving access for clinical centers that do not have the high-cost technology that would otherwise be a prerequisite.
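The pipeline's end product is an *.stl mesh for 3D printing. A minimal writer for the binary STL layout (80-byte header, uint32 facet count, then 50 bytes per facet), shown with a single toy facet rather than a reconstructed face:

```python
import os
import struct

def write_binary_stl(path, triangles):
    """Write a minimal binary STL: 80-byte header, uint32 facet count, then
    per facet a normal (left as zeros; most slicers recompute it from vertex
    order), three float32 vertices, and a 2-byte attribute word."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                              # header
        f.write(struct.pack("<I", len(triangles)))       # facet count
        for tri in triangles:
            f.write(struct.pack("<3f", 0.0, 0.0, 0.0))   # normal
            for vx, vy, vz in tri:
                f.write(struct.pack("<3f", vx, vy, vz))  # vertex
            f.write(struct.pack("<H", 0))                # attribute byte count

# One toy facet; a photogrammetry pipeline would pass the reconstructed mesh.
write_binary_stl("facet.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
actual = os.path.getsize("facet.stl")   # 80 + 4 + 1 * 50 bytes
```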

  10. Filling gaps in visual motion for target capture

    PubMed Central

    Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637

  12. Using a motion capture system for spatial localization of EEG electrodes

    PubMed Central

    Reis, Pedro M. R.; Lochmann, Matthias

    2015-01-01

Electroencephalography (EEG) is often used in source analysis studies, in which the locations of the cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes on the scalp surface must be determined; otherwise, errors in the source estimation will occur. Several methods for acquiring these positions exist today, but they are often not satisfactorily accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires the 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computed tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm, with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was performed quickly and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time effort for study participants and investigators. PMID:25941468
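The accuracy figure reported above is the mean per-electrode Euclidean distance between corresponding 3D points from the two modalities. A minimal sketch of that metric (function name and coordinate values are hypothetical, not from the study):

```python
import numpy as np

def mean_euclidean_error(measured, reference):
    """Mean and standard deviation of per-point Euclidean distances
    between two (N, 3) arrays of corresponding 3D coordinates."""
    d = np.linalg.norm(np.asarray(measured, float) - np.asarray(reference, float), axis=1)
    return d.mean(), d.std()

# Hypothetical electrode coordinates (mm) from motion capture vs. CT
mocap = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]]
ct    = [[1.0, 0.0, 0.0], [10.0, 1.0, 0.0], [0.0, 10.0, 1.0]]
mean_err, std_err = mean_euclidean_error(mocap, ct)  # each point is off by 1 mm
```

Here every point is displaced by exactly 1 mm, so the mean error is 1.0 mm with zero spread.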

  13. Infant brain activity while viewing facial movement of point-light displays as measured by near-infrared spectroscopy (NIRS).

    PubMed

    Ichikawa, Hiroko; Kanazawa, So; Yamaguchi, Masami K; Kakigi, Ryusuke

    2010-09-27

Adult observers can quickly identify specific actions performed by an invisible actor from points of light attached to the actor's head and major joints. Infants are also sensitive to biological motion and prefer to see it depicted by a dynamic point-light display. In detecting biological motion such as whole-body and facial movements, neuroimaging studies have demonstrated the involvement of the occipitotemporal cortex, including the superior temporal sulcus (STS). In the present study, we used the point-light display technique and near-infrared spectroscopy (NIRS) to examine infant brain activity while viewing facial biological motion depicted in a point-light display. Dynamic facial point-light displays (PLD) were made from video recordings of three actors making a facial expression of surprise in a dark room. As in Bassili's study, about 80 luminous markers were scattered over the surface of the actors' faces. In the experiment, we measured infants' hemodynamic responses to these displays using NIRS. We hypothesized that infants would show different neural activity for upright and inverted PLD. The responses were compared to the baseline activation during the presentation of individual still images, which were frames extracted from the dynamic PLD. We found that the concentration of oxy-Hb increased in the right temporal area during the presentation of the upright PLD compared to that of the baseline period. This is the first study to demonstrate that infants' brain activity in face processing is induced only by the motion cue of facial movement depicted by dynamic PLD. (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Discrimination of gender using facial image with expression change

    NASA Astrophysics Data System (ADS)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

By carrying out marketing research, the managers of large department stores and small convenience stores obtain information such as the ratio of male to female visitors and their age groups, and use it to improve their management plans. However, this work is carried out manually and becomes a large burden for small stores. In this paper, the authors propose a method of discriminating gender by extracting differences in facial expression change from color facial images. Many methods already exist in the field of image processing for automatic recognition of individuals from moving or still facial images. However, it is very difficult to discriminate gender under the influence of hairstyle, clothing, etc. Therefore, we propose a method that is not affected by individual characteristics, such as the size and position of facial parts, by paying attention to changes in expression. This method requires two facial images: one with an expression and one expressionless. First, the facial region and facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system and enhanced edge information. Next, features are extracted by calculating the rate of change of each facial part caused by the expression change. In the last step, these feature values are compared between the input data and the database, and the gender is discriminated. Experiments on laughing and smiling expressions yielded good results for discriminating gender.

  15. A 3-dimensional anthropometric evaluation of facial morphology among Chinese and Greek population.

    PubMed

    Liu, Yun; Kau, Chung How; Pan, Feng; Zhou, Hong; Zhang, Qiang; Zacharopoulos, Georgios Vasileiou

    2013-07-01

The use of 3-dimensional (3D) facial imaging has taken on greater importance as orthodontists adopt the soft tissue paradigm in the evaluation of skeletal disproportion. Studies have shown that faces differ among populations. To date, no anthropometric evaluations have compared Chinese and Greek faces. The aim of this study was to compare the facial morphologies of Greeks and Chinese using 3D facial anthropometric landmarks. Three-dimensional facial images were acquired via a commercially available stereophotogrammetric camera capture system. The 3dMD face system captured 245 subjects from 2 population groups (Chinese [n = 72] and Greek [n = 173]), and each population was categorized into male and female groups for evaluation. All subjects were between 18 and 30 years old and had no apparent facial anomalies. Twenty-five anthropometric landmarks were identified on the 3D face of each subject. Soft tissue nasion was set as the "zeroed" reference landmark. Twenty landmark distances were constructed and evaluated within 3 dimensions of space. Six angles, 4 proportions, and 1 construct were also calculated. The Student t test was used to analyze each data set obtained within each subgroup. Distinct facial differences were noted between the subgroups evaluated. When comparing the sexes across the 2 populations (eg, male Greeks and male Chinese), significant differences were noted in more than 80% of the landmark distances calculated. One hundred percent of the angular measurements were significantly different, and the Chinese faces were broader in width-to-height proportions. In evaluating the lips relative to the esthetic line, the Chinese population had more protrusive lips. There are differences in the facial morphologies of subjects from a Chinese population versus those from a Greek population.
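Setting soft tissue nasion as the "zeroed" reference landmark amounts to translating every landmark by the nasion coordinates before measuring distances. A toy sketch of that step (landmark names and coordinates are hypothetical, not the study's data):

```python
import numpy as np

def zero_to_nasion(landmarks):
    """Translate named 3D landmarks so soft-tissue nasion ('n') is the origin."""
    origin = np.asarray(landmarks["n"], float)
    return {name: np.asarray(p, float) - origin for name, p in landmarks.items()}

# Hypothetical coordinates (mm): nasion and the two exocanthions
lm = zero_to_nasion({"n": [2, 5, 1], "ex_r": [-43, 0, -19], "ex_l": [47, 0, -19]})

# One landmark distance, e.g. biocular width (between the exocanthions)
biocular_width = float(np.linalg.norm(lm["ex_l"] - lm["ex_r"]))
```

After zeroing, all 20 landmark distances can be read directly off the translated coordinates.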

  16. Evaluation of a video-based head motion tracking system for dedicated brain PET

    NASA Astrophysics Data System (ADS)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used for capturing video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with accuracy close to one millimeter and can help to preserve the resolution of brain PET images in the presence of movement.

  17. Multimedia Content Development as a Facial Expression Datasets for Recognition of Human Emotions

    NASA Astrophysics Data System (ADS)

    Mamonto, N. E.; Maulana, H.; Liliana, D. Y.; Basaruddin, T.

    2018-02-01

Previously developed datasets contain facial expressions from foreign subjects. The development of this multimedia content aims to address the problems experienced by the research team and by other researchers who will conduct similar research. The method used in developing the multimedia content as a facial expression dataset for human emotion recognition is the Villamil-Molina version of the multimedia development method. The multimedia content was developed with 10 subjects (talents), each talent performing 3 shots and demonstrating 19 facial expressions in each shot. After the editing and rendering process, tests were carried out, with the conclusion that the multimedia content can be used as a facial expression dataset for the recognition of human emotions.

  18. EFFECTS OF TURBULENCE, ECCENTRICITY DAMPING, AND MIGRATION RATE ON THE CAPTURE OF PLANETS INTO MEAN MOTION RESONANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ketchum, Jacob A.; Adams, Fred C.; Bloch, Anthony M.

    2011-01-01

Pairs of migrating extrasolar planets often lock into mean motion resonance as they drift inward. This paper studies the convergent migration of giant planets (driven by a circumstellar disk) and determines the probability that they are captured into mean motion resonance. The probability that such planets enter resonance depends on the type of resonance, the migration rate, the eccentricity damping rate, and the amplitude of the turbulent fluctuations. This problem is studied both through direct integrations of the full three-body problem and via semi-analytic model equations. In general, the probability of resonance decreases with increasing migration rate, and with increasing levels of turbulence, but increases with eccentricity damping. Previous work has shown that the distributions of orbital elements (eccentricity and semimajor axis) for observed extrasolar planets can be reproduced by migration models with multiple planets. However, these results depend on resonance locking, and this study shows that entry into, and maintenance of, mean motion resonance depends sensitively on the migration rate, eccentricity damping, and turbulence.

  19. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

This paper presents a method for capturing the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking, and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of internal orientation, and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points
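The forward ray intersection step described above is, in the linear case, a least-squares triangulation: each calibrated camera contributes two equations in the unknown 3D point, and the stacked system is solved by SVD. A self-contained sketch under that assumption (the projection matrices below are toy values, not the paper's calibration):

```python
import numpy as np

def triangulate(proj_mats, image_pts):
    """Linear least-squares 'forward ray intersection': recover one 3D
    point from its projections in several calibrated cameras.
    proj_mats: list of 3x4 projection matrices; image_pts: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(proj_mats, image_pts):
        rows.append(u * P[2] - P[0])   # two linear equations per camera
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous solution: right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy triplet of cameras (identity intrinsics, translated along x and y)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
P3 = np.hstack([np.eye(3), np.array([[0.0], [-1.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
proj = [P @ np.append(X_true, 1.0) for P in (P1, P2, P3)]
pts = [(p[0] / p[2], p[1] / p[2]) for p in proj]
X_est = triangulate([P1, P2, P3], pts)
```

With noiseless toy data the recovered point matches `X_true` exactly; with real matched points the SVD gives the least-squares intersection of the three rays.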

  20. Facial Scar Revision: Understanding Facial Scar Treatment

    MedlinePlus

... Trust your face to a facial plastic surgeon Facial Scar Revision Understanding Facial Scar Treatment ... face like the eyes or lips. A facial plastic surgeon has many options for treating and improving ...

  1. High-emulation mask recognition with high-resolution hyperspectral video capture system

    NASA Astrophysics Data System (ADS)

    Feng, Jiao; Fang, Xiaojing; Li, Shoufeng; Wang, Yongjin

    2014-11-01

We present a method for distinguishing a human face from a high-emulation mask, which is increasingly used by criminals for activities such as stealing card numbers and passwords at ATMs. Traditional facial recognition techniques have difficulty detecting such camouflaged criminals. In this paper, we use a high-resolution hyperspectral video capture system to detect high-emulation masks. A RGB camera is used for traditional facial recognition. A prism and a grayscale camera are used to capture spectral information of the observed face. Experiments show that a mask made of silica gel has different spectral reflectance compared with human skin. As multispectral imaging offers additional information about physical characteristics, a high-emulation mask can be easily recognized.

  2. Synthesis of Speaker Facial Movement to Match Selected Speech Sequences

    NASA Technical Reports Server (NTRS)

    Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.

    1994-01-01

    A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.

  3. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Marcia L.; Erikson, Rebecca L.; Lombardo, Nicholas J.

The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e. not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, who may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  4. Using Xbox kinect motion capture technology to improve clinical rehabilitation outcomes for balance and cardiovascular health in an individual with chronic TBI.

    PubMed

    Chanpimol, Shane; Seamon, Bryant; Hernandez, Haniel; Harris-Love, Michael; Blackman, Marc R

    2017-01-01

Motion capture virtual reality-based rehabilitation has become more common. However, therapists face challenges to the implementation of virtual reality (VR) in clinical settings. Use of motion capture technology such as the Xbox Kinect may provide a useful rehabilitation tool for the treatment of postural instability and cardiovascular deconditioning in individuals with chronic severe traumatic brain injury (TBI). The primary purpose of this study was to evaluate the effects of a Kinect-based VR intervention using commercially available motion capture games on balance outcomes for an individual with chronic TBI. The secondary purpose was to assess the feasibility of this intervention for eliciting cardiovascular adaptations. A single-system experimental design (n = 1) was utilized, which included baseline, intervention, and retention phases. Repeated measures were used to evaluate the effects of an 8-week supervised exercise intervention using two Xbox One Kinect games. Balance was characterized using the dynamic gait index (DGI), functional reach test (FRT), and Limits of Stability (LOS) test on the NeuroCom Balance Master. The LOS assesses end-point excursion (EPE), maximal excursion (MXE), and directional control (DCL) during weight-shifting tasks. Cardiovascular and activity measures were characterized by heart rate at the end of exercise (HRe), total gameplay time (TAT), and time spent in a therapeutic heart rate (TTR) during the Kinect intervention. Chi-square and ANOVA testing were used to analyze the data. Dynamic balance, characterized by the DGI, increased during the intervention phase, χ2(1, N = 12) = 12, p = .001. Static balance, characterized by the FRT, showed no significant changes. The EPE increased during the intervention phase in the backward direction, χ2(1, N = 12) = 5.6, p = .02, and notable improvements in DCL were demonstrated in all directions. HRe (F(2,174) = 29.65, p < .001) and time in a TTR (F(2,12) = 4.19, p = .04) decreased.

  5. Clinically acceptable agreement between the ViMove wireless motion sensor system and the Vicon motion capture system when measuring lumbar region inclination motion in the sagittal and coronal planes.

    PubMed

    Mjøsund, Hanne Leirbekk; Boyle, Eleanor; Kjaer, Per; Mieritz, Rune Mygind; Skallgård, Tue; Kent, Peter

    2017-03-21

Wireless, wearable, inertial motion sensor technology introduces new possibilities for monitoring spinal motion and pain in people during their daily activities of work, rest and play. Many types of these wireless devices are currently available, but the precision of measurement and the magnitude of measurement error from such devices are often unknown. This study investigated the concurrent validity of one inertial motion sensor system (ViMove) for its ability to measure lumbar inclination motion, compared with the Vicon motion capture system. To mimic the variability of movement patterns in a clinical population, a sample of 34 people was included - 18 with low back pain and 16 without low back pain. ViMove sensors were attached to each participant's skin at spinal levels T12 and S2, and Vicon surface markers were attached to the ViMove sensors. Three repetitions of end-range flexion inclination, extension inclination and lateral flexion inclination to both sides while standing were measured by both systems concurrently, with short rest periods in between. Measurement agreement through the whole movement range was analysed using a multilevel mixed-effects regression model to calculate the root mean squared errors, and the limits of agreement were calculated using the Bland-Altman method. We calculated root mean squared errors (standard deviation) of 1.82° (±1.00°) in flexion inclination, 0.71° (±0.34°) in extension inclination, 0.77° (±0.24°) in right lateral flexion inclination and 0.98° (±0.69°) in left lateral flexion inclination. 95% limits of agreement ranged between -3.86° and 4.69° in flexion inclination, -2.15° and 1.91° in extension inclination, -2.37° and 2.05° in right lateral flexion inclination and -3.11° and 2.96° in left lateral flexion inclination. We found a clinically acceptable level of agreement between these two methods for measuring standing lumbar inclination motion in these two cardinal movement planes.
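The agreement statistics quoted above (root mean squared error and Bland-Altman 95% limits of agreement) can be sketched for simple paired readings; the values below are invented for illustration, and the study itself used a multilevel mixed-effects model rather than this naive pairing:

```python
import numpy as np

def agreement_stats(a, b):
    """Concurrent-validity summary for two systems measuring the same
    quantity: RMSE plus Bland-Altman 95% limits of agreement
    (mean difference +/- 1.96 * SD of the differences)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    rmse = np.sqrt(np.mean(diff ** 2))
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return rmse, bias, (bias - half_width, bias + half_width)

# Hypothetical paired inclination readings (degrees) from the two systems
vimove = [50.1, 30.2, 20.3, 45.0]
vicon  = [50.0, 30.0, 20.0, 45.4]
rmse, bias, (loa_low, loa_high) = agreement_stats(vimove, vicon)
```

The limits of agreement bracket the bias, so a small RMSE with narrow limits is what "clinically acceptable agreement" refers to above.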

  6. DNA Motion Capture Reveals the Mechanical Properties of DNA at the Mesoscale

    PubMed Central

    Price, Allen C.; Pilkiewicz, Kevin R.; Graham, Thomas G.W.; Song, Dan; Eaves, Joel D.; Loparo, Joseph J.

    2015-01-01

    Single-molecule studies probing the end-to-end extension of long DNAs have established that the mechanical properties of DNA are well described by a wormlike chain force law, a polymer model where persistence length is the only adjustable parameter. We present a DNA motion-capture technique in which DNA molecules are labeled with fluorescent quantum dots at specific sites along the DNA contour and their positions are imaged. Tracking these positions in time allows us to characterize how segments within a long DNA are extended by flow and how fluctuations within the molecule are correlated. Utilizing a linear response theory of small fluctuations, we extract elastic forces for the different, ∼2-μm-long segments along the DNA backbone. We find that the average force-extension behavior of the segments can be well described by a wormlike chain force law with an anomalously small persistence length. PMID:25992731
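The wormlike chain force law referenced above is commonly evaluated with the Marko-Siggia interpolation formula; a sketch with an illustrative persistence length (50 nm is the textbook value for bare DNA, not the anomalously small value this study reports):

```python
def wlc_force(x, L, P, kT=4.11):
    """Marko-Siggia wormlike-chain interpolation:
    F = (kT / P) * [ 1 / (4 * (1 - x/L)^2) - 1/4 + x/L ],
    with kT in pN*nm (4.11 at room temperature), lengths in nm, force in pN."""
    r = x / L
    return (kT / P) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# Force at half and near-full extension of a 1000 nm contour, P = 50 nm
f_mid = wlc_force(x=500.0, L=1000.0, P=50.0)
f_high = wlc_force(x=900.0, L=1000.0, P=50.0)
```

A smaller fitted persistence length, as found here, raises the force at every extension, since F scales as kT/P.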

  7. A wearable device for emotional recognition using facial expression and physiological response.

    PubMed

    Jangho Kwon; Da-Hye Kim; Wanjoo Park; Laehyun Kim

    2016-08-01

This paper introduces a glasses-type wearable system to detect users' emotions using facial expression and physiological responses. The system is designed to acquire facial expression through a built-in camera, and physiological responses such as photoplethysmogram (PPG) and electrodermal activity (EDA), in an unobtrusive way. We used video clips to induce emotions and test the system's suitability in the experiment. The results showed a few meaningful properties that associate emotions with the facial expressions and physiological responses captured by the developed wearable device. We expect that this wearable system with a built-in camera and physiological sensors may be a good solution for monitoring users' emotional state in daily life.

  8. [Experimental studies for the improvement of facial nerve regeneration].

    PubMed

    Guntinas-Lichius, O; Angelov, D N

    2008-02-01

    Using a combination of the following, it is possible to investigate procedures to improve the morphological and functional regeneration of the facial nerve in animal models: 1) retrograde fluorescence tracing to analyse collateral axonal sprouting and the selectivity of reinnervation of the mimic musculature, 2) immunohistochemistry to analyse both the terminal axonal sprouting in the muscles and the axon reaction within the nucleus of the facial nerve, the peripheral nerve, and its environment, and 3) digital motion analysis of the muscles. To obtain good functional facial nerve regeneration, a reduction of terminal sprouting in the mimic musculature seems to be more important than a reduction of collateral sprouting at the lesion site. Promising strategies include acceleration of nerve regeneration, forced induced use of the paralysed face, mechanical stimulation of the face, and transplantation of nerve-growth-promoting olfactory epithelium at the lesion site.

  9. Human Actions Analysis: Templates Generation, Matching and Visualization Applied to Motion Capture of Highly-Skilled Karate Athletes

    PubMed Central

    Piekarczyk, Marcin; Ogiela, Marek R.

    2017-01-01

The aim of this paper is to propose and evaluate a novel method of template generation, matching, comparison and visualization applied to motion capture (kinematic) analysis. To evaluate our approach, we have used motion capture recordings (MoCap) of two highly-skilled black belt karate athletes, consisting of 560 recordings of various karate techniques acquired with wearable sensors. We have evaluated the quality of generated templates; we have validated the matching algorithm that calculates similarities and differences between various MoCap data; and we have examined visualizations of important differences and similarities between MoCap data. We have concluded that our algorithm works best when dealing with relatively short (2–4 s) actions that can be averaged and aligned with the dynamic time warping framework. In practice, the methodology is designed to optimize the performance of full-body techniques performed in various sport disciplines, for example combat sports and martial arts. We can also use this approach to generate templates or to compare the correct performance of techniques between various top sportsmen in order to generate a knowledge base of reference MoCap videos. The motion template generated by our method can be used for action recognition purposes. We have used the DTW classifier with angle-based features to classify various karate kicks. We have performed leave-one-out action recognition for the Shorin-ryu and Oyama karate masters separately. In this case, 100% of actions were correctly classified. In another experiment, we used templates generated from Oyama master recordings to classify Shorin-ryu master recordings and vice versa. In this experiment, the overall recognition rate was 94.2%, which is a very good result for this type of complex action. PMID:29125560
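The DTW-based nearest-template classification described above can be sketched in a few lines; the angle-over-time sequences and template labels below are invented toy data, not the paper's MoCap recordings:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment ending at (i, j): insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """1-nearest-neighbour action recognition against labelled templates."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Toy joint-angle templates (degrees over time, values hypothetical)
templates = {"front_kick": [0, 30, 90, 30, 0], "roundhouse": [0, 45, 90, 120, 45]}
label = classify([0, 28, 92, 31, 0], templates)  # closest to "front_kick"
```

Averaging several aligned recordings of one technique yields the motion template; classification then compares a query against each template exactly as above.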

  10. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

Facial expressions have an important role in interpersonal communication and the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc. and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies that use partial face features, and comparable results to studies using whole-face information, only slightly lower by ~2.5% than the best whole-face system while using only ~1/3 of the facial region.
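Sequential forward selection, used above to refine the geometric feature set, greedily grows the selected subset one feature at a time, each time adding the feature that most improves a score. In the paper that score would be classifier accuracy; the additive scoring function and feature names below are a hypothetical stand-in:

```python
def sequential_forward_selection(features, score_fn, k):
    """Greedy SFS: repeatedly add the feature whose inclusion yields the
    highest score until k features are selected. score_fn maps a list of
    feature names to a quality score (e.g. cross-validated accuracy)."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy additive score: each feature contributes a fixed amount (hypothetical)
weights = {"eye_dist": 3.0, "brow_angle": 2.0, "brow_raise": 1.0, "noise": -1.0}
score = lambda subset: sum(weights[f] for f in subset)
chosen = sequential_forward_selection(weights, score, k=2)
```

With a real classifier in place of `score`, the same loop trades a small accuracy loss for a much smaller feature set, which is the design choice iFER exploits.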

  11. Asynchronous beating of cilia enhances particle capture rate

    NASA Astrophysics Data System (ADS)

    Ding, Yang; Kanso, Eva

    2014-11-01

    Many aquatic micro-organisms use beating cilia to generate feeding currents and capture particles in the surrounding fluid. One capture strategy is to "catch up" with particles when a cilium is beating towards the overall flow direction (effective stroke) and intercept particles on the downstream side of the cilium. Here, we developed a 3D computational model of a cilia band with prescribed motion in a viscous fluid and calculated the trajectories of particles of different sizes in the fluid. We found an optimal particle diameter that maximizes the capture rate. The flow field and particle motion indicate that the low capture rate of smaller particles is due to the laminar flow in the neighborhood of the cilia, whereas larger particles have to move above the cilia tips to be advected downstream, which decreases their capture rate. We then analyzed the effect of beating coordination between neighboring cilia on the capture rate. Interestingly, we found that asynchrony in the beating of the cilia can enhance the relative motion between a cilium and the particles near it and hence increase the capture rate.

  12. Anatomy of emotion: a 3D study of facial mimicry.

    PubMed

    Ferrario, V F; Sforza, C

    2007-01-01

    Alterations in facial motion severely impair patients' quality of life and social interaction, and an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right-side eye closure; left-side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of measurement (random error, TEM) was computed separately for each sex, movement, and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for single landmarks ranged between 0.3 and 9.42 mm (intrasession error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for the sex comparison, p=0.009 for the six movements, p<0.001 for the sex × movement interaction). Among the four independent sessions, the left eye closure had the worst repeatability and the right eye closure the best; the differences among movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated sufficient repeatability for future clinical application. Great care should be taken to ensure consistent marker positioning across subjects.
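    The technical error of measurement is conventionally computed with the Dahlberg formula, TEM = sqrt(Σd²/2n) over paired repeated measurements; the sketch below assumes that standard formula rather than anything specific to this study.

    ```python
    # Dahlberg technical error of measurement (TEM) for two repeated
    # measurement sessions of the same landmarks; a generic sketch.
    import math

    def tem(first, second):
        """TEM = sqrt(sum(d_i^2) / (2n)), where d_i is the difference
        between the paired repeated measurements of landmark i."""
        diffs = [a - b for a, b in zip(first, second)]
        return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
    ```

    Identical sessions give TEM = 0; larger paired differences inflate it in the measurement's own units (here, mm).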

  13. Combining EEG, MIDI, and motion capture techniques for investigating musical performance.

    PubMed

    Maidhof, Clemens; Kästner, Torsten; Makkonen, Tommi

    2014-03-01

    This article describes a setup for the simultaneous recording of electrophysiological data (EEG), musical data (MIDI), and three-dimensional movement data. Previously, each of these three different kinds of measurements, conducted sequentially, has been proven to provide important information about different aspects of music performance as an example of a demanding multisensory motor skill. With the method described here, it is possible to record brain-related activity and movement data simultaneously, with accurate timing resolution and at relatively low cost. EEG and MIDI data were synchronized with a modified version of the FTAP software, which sent synchronization signals to the EEG recording device simultaneously with keypress events. Similarly, a motion capture system sent synchronization signals simultaneously with each recorded frame. The setup can be used for studies investigating cognitive and motor processes during music performance and music-like tasks, for example in the domains of motor control, learning, music therapy, or musical emotions. Thus, this setup offers a promising possibility of a more behaviorally driven analysis of brain activity.

  14. Self-adaptive signals separation for non-contact heart rate estimation from facial video in realistic environments.

    PubMed

    Liu, Xuenan; Yang, Xuezhi; Jin, Jing; Li, Jiangshan

    2018-06-05

    Recent research indicates that facial epidermis color varies with the rhythm of heart beats. This variation can be captured by consumer-level cameras and, remarkably, used to estimate heart rate (HR). Although numerous methods have been proposed in the last few years, estimated HR remains less precise than required in practical environments where illumination interference, facial expressions, or motion artifacts are involved. A novel algorithm is proposed to make non-contact HR estimation more robust. First, the subject's face is detected and tracked to follow head movement. The facial region is then divided into several blocks, and the chrominance feature of each block is extracted to establish a raw HR sub-signal. Self-adaptive signals separation (SASS) is performed to separate the noiseless HR sub-signals from the raw sub-signals. On that basis, the noiseless sub-signals rich in HR information are selected using a weight-based scheme to establish the holistic HR signal, from which the average HR is computed using a wavelet transform and data filtering. Forty subjects took part in our experiments; their facial videos were recorded by a normal webcam at a frame rate of 30 fps under ambient lighting conditions. The average HR estimated by our method correlates strongly with ground-truth measurements, as indicated by the experimental results in the static scenario (Pearson's correlation r=0.980) and the dynamic scenario (Pearson's correlation r=0.897). Compared to the newest method, our method decreases the error rate by 38.63% and increases the Pearson's correlation by 15.59%, indicating that it evidently outperforms state-of-the-art non-contact HR estimation methods in realistic environments. © 2018 Institute of Physics and Engineering in Medicine.
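    One common choice for the per-block chrominance feature is the CHROM projection of de Haan and Jeanne (2013); whether this paper uses exactly this projection is an assumption, but it illustrates how RGB traces become a motion-robust pulse signal.

    ```python
    # CHROM-style chrominance signal from per-block mean RGB traces;
    # an illustrative sketch, not necessarily the paper's exact feature.
    import statistics

    def chrom_signal(r, g, b):
        """Project (normalized) RGB time series onto two chrominance
        axes, then combine them with a variance-balancing weight."""
        x = [3 * ri - 2 * gi for ri, gi in zip(r, g)]
        y = [1.5 * ri + gi - 1.5 * bi for ri, gi, bi in zip(r, g, b)]
        alpha = statistics.pstdev(x) / statistics.pstdev(y)  # balance the two bands
        return [xi - alpha * yi for xi, yi in zip(x, y)]
    ```

    Average HR would then be read off the dominant spectral peak of this signal within the physiological band (roughly 0.7 to 4 Hz).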

  15. Ubiquitous human upper-limb motion estimation using wearable sensors.

    PubMed

    Zhang, Zhi-Qiang; Wong, Wai-Choong; Wu, Jian-Kang

    2011-07-01

    Human motion capture technologies have been used in a wide spectrum of applications, including interactive gaming and learning, animation, film special effects, health care, and navigation. Existing human motion capture techniques, which use structured multiple high-resolution cameras in a dedicated studio, are complicated and expensive. With the rapid development of microsensors-on-chip, human motion capture using wearable microsensors has become an active research topic. Because of the agility of upper-limb movement, upper-limb motion estimation has been regarded as the most difficult problem in human motion capture. In this paper, we take the upper limb as our research subject and propose a novel ubiquitous upper-limb motion estimation algorithm, which concentrates on modeling the relationship between upper-arm movement and forearm movement. A link structure with 5 degrees of freedom (DOF) is proposed to model the human upper-limb skeleton. Parameters are defined according to the Denavit-Hartenberg convention, forward kinematics equations are derived, and an unscented Kalman filter is deployed to estimate the defined parameters. The experimental results show that the proposed upper-limb motion capture and analysis algorithm outperforms other fusion methods and provides accurate results in comparison to the BTS optical motion tracker.
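    The Denavit-Hartenberg convention mentioned above assigns each joint four parameters (θ, d, a, α) and a standard homogeneous transform; the following is a textbook sketch of that transform, not the paper's implementation.

    ```python
    # Classic Denavit-Hartenberg homogeneous transform for one joint.
    # Chaining one such matrix per joint yields the forward kinematics.
    import math

    def dh_transform(theta, d, a, alpha):
        """4x4 homogeneous transform for DH parameters
        (joint angle theta, link offset d, link length a, twist alpha)."""
        ct, st = math.cos(theta), math.sin(theta)
        ca, sa = math.cos(alpha), math.sin(alpha)
        return [
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ]
    ```

    For a 5-DOF upper-limb chain, the end-effector pose is the product of five such matrices, and the joint parameters become the state estimated by the filter.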

  16. [Effects of a Facial Muscle Exercise Program including Facial Massage for Patients with Facial Palsy].

    PubMed

    Choi, Hyoung Ju; Shin, Sung Hee

    2016-08-01

    The purpose of this study was to examine the effects of a facial muscle exercise program including facial massage on the facial muscle function, subjective symptoms related to paralysis and depression in patients with facial palsy. This study was a quasi-experimental research with a non-equivalent control group non-synchronized design. Participants were 70 patients with facial palsy (experimental group 35, control group 35). For the experimental group, the facial muscular exercise program including facial massage was performed 20 minutes a day, 3 times a week for two weeks. Data were analyzed using descriptive statistics, χ²-test, Fisher's exact test and independent sample t-test with the SPSS 18.0 program. Facial muscular function of the experimental group improved significantly compared to the control group. There was no significant difference in symptoms related to paralysis between the experimental group and control group. The level of depression in the experimental group was significantly lower than the control group. Results suggest that a facial muscle exercise program including facial massage is an effective nursing intervention to improve facial muscle function and decrease depression in patients with facial palsy.

  17. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  18. Attentional cueing: fearful body postures capture attention with saccades.

    PubMed

    Bannerman, Rachel L; Milders, Maarten; Sahraie, Arash

    2010-05-01

    According to theories of attention and emotion, threat-related stimuli (e.g., negative facial expressions) capture and hold attention. Despite these theories, previous examination of attentional cueing by threat showed no enhanced capture at brief durations. One explanation for the absence of attentional capture effects may be related to the sensitivity of the manual response measure employed. Here we extended beyond facial expressions and investigated the time course of orienting attention towards fearful body postures in the exogenous cueing task. Cue duration (20, 40, 60, or 100 ms), orientation (upright or inverted), and response mode (saccadic eye movement or manual keypress) were manipulated across three experiments. In the saccade mode, both enhanced attentional capture and impaired disengagement from fearful bodies were evident and limited to rapid cue durations (20 and 40 ms), suggesting that saccadic cueing effects emerge rapidly and are short lived. In the manual mode, fearful bodies impacted only upon the disengagement component of attention at 100 ms, suggesting that manual cueing effects emerge over longer periods of time. No cueing modulation was found for inverted presentation, suggesting that valence, not low-level image confounds, was responsible for the cueing effects. Importantly, saccades could reveal threat biases at brief cue durations consistent with current theories of emotion and attention.

  19. DNA motion capture reveals the mechanical properties of DNA at the mesoscale.

    PubMed

    Price, Allen C; Pilkiewicz, Kevin R; Graham, Thomas G W; Song, Dan; Eaves, Joel D; Loparo, Joseph J

    2015-05-19

    Single-molecule studies probing the end-to-end extension of long DNAs have established that the mechanical properties of DNA are well described by a wormlike chain force law, a polymer model where persistence length is the only adjustable parameter. We present a DNA motion-capture technique in which DNA molecules are labeled with fluorescent quantum dots at specific sites along the DNA contour and their positions are imaged. Tracking these positions in time allows us to characterize how segments within a long DNA are extended by flow and how fluctuations within the molecule are correlated. Utilizing a linear response theory of small fluctuations, we extract elastic forces for the different, ∼2-μm-long segments along the DNA backbone. We find that the average force-extension behavior of the segments can be well described by a wormlike chain force law with an anomalously small persistence length. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
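    The wormlike chain force law with persistence length as the only adjustable parameter is commonly written in the Marko-Siggia interpolation form (stated here for reference; that the paper fits exactly this form is an assumption):

    ```latex
    % Marko-Siggia wormlike chain interpolation: force F at fractional
    % extension x/L, with persistence length P the single fit parameter.
    F(x) = \frac{k_B T}{P}\left[\frac{1}{4\left(1 - x/L\right)^{2}} - \frac{1}{4} + \frac{x}{L}\right]
    ```

    An anomalously small fitted P, as reported above, means the segments extend more easily than the canonical ~50 nm persistence length of double-stranded DNA would predict.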

  20. Laptop Computer - Based Facial Recognition System Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. A. Cain; G. B. Singleton

    2001-03-01

    The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were then available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this assessment. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master

  1. Deficient functional recovery after facial nerve crush in rats is associated with restricted rearrangements of synaptic terminals in the facial nucleus.

    PubMed

    Hundeshagen, G; Szameit, K; Thieme, H; Finkensieper, M; Angelov, D N; Guntinas-Lichius, O; Irintchev, A

    2013-09-17

    Crush injuries of peripheral nerves typically lead to axonotmesis, axonal damage without disruption of connective tissue sheaths. Generally, human patients and experimental animals recover well after axonotmesis, and the favorable outcome has been attributed to precise axonal reinnervation of the original peripheral targets. Here we assessed functionally and morphologically the long-term consequences of facial nerve axonotmesis in rats. Expectedly, we found that 5 months after crush or cryogenic nerve lesion, the numbers of motoneurons with regenerated axons and their projection pattern into the main branches of the facial nerve were similar to those in control animals, suggesting precise target reinnervation. Unexpectedly, however, we found that functional recovery, estimated by vibrissal motion analysis, was incomplete at 2 months after injury and did not improve thereafter. The maximum amplitude of whisking remained substantially lower than control values, by more than 30%, even 5 months after axonotmesis. Morphological analyses showed that the facial motoneurons ipsilateral to injury were innervated by lower numbers of glutamatergic terminals (-15%) and cholinergic perisomatic boutons (-26%) compared with the contralateral non-injured motoneurons. The structural deficits were correlated with functional performance of individual animals and associated with microgliosis in the facial nucleus but not with polyinnervation of muscle fibers. These results support the idea that restricted CNS plasticity and insufficient afferent inputs to motoneurons may substantially contribute to functional deficits after facial nerve injuries, possibly including pathologic conditions in humans like axonotmesis in idiopathic facial nerve (Bell's) palsy. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. He Throws like a Girl (but Only when He's Sad): Emotion Affects Sex-Decoding of Biological Motion Displays

    ERIC Educational Resources Information Center

    Johnson, Kerri L.; McKay, Lawrie S.; Pollick, Frank E.

    2011-01-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming…

  3. [Facial palsy].

    PubMed

    Cavoy, R

    2013-09-01

    Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis; central nervous system lesions can cause facial palsy that may be readily differentiated from peripheral palsy. The next question is whether the peripheral facial paralysis is idiopathic or symptomatic. A good knowledge of the anatomy of the facial nerve is helpful, and a structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The main cause of peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved by a prompt tapering course of prednisone. In Ramsay Hunt syndrome, antiviral therapy is added along with prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.

  4. Sound-induced facial synkinesis following facial nerve paralysis.

    PubMed

    Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F

    2009-08-01

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.

  5. Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan

    2009-07-01

    Facial paralysis is the loss of voluntary muscle movement of one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on the local binary patterns (LBPs) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. Support vector machine is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
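    The local binary pattern underlying these features thresholds a pixel's neighbors against its center to form an 8-bit code; below is a minimal single-pixel sketch (the paper's multiresolution, spatio-temporal variant builds on this basic operator).

    ```python
    # Basic 3x3 local binary pattern (LBP) code for one pixel;
    # a generic illustration, not the paper's enhanced variant.

    def lbp_code(patch):
        """patch: 3x3 grid (list of lists) of intensities. Returns the
        8-bit code obtained by thresholding the 8 neighbors against the
        center pixel, enumerated clockwise from the top-left."""
        c = patch[1][1]
        neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                     patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
        return sum(1 << i for i, p in enumerate(neighbors) if p >= c)
    ```

    Histograms of such codes over facial regions (and over temporal planes, for motion) give the feature vectors whose left/right asymmetry is then scored.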

  6. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    PubMed Central

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. Traditional diagnostic methods for brain disease are time-consuming, inconvenient, and not patient friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, and four facial key blocks are then located automatically in the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated. The best result was achieved using the second facial key block, showing that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min for brain disease detection. PMID:29292716

  7. Facial dynamics and emotional expressions in facial aging treatments.

    PubMed

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the facial aging symptomatological analysis and the treatment plan must of necessity include knowledge of the facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Eventually, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.

  8. The effect of age and sex on facial mimicry: a three-dimensional study in healthy adults.

    PubMed

    Sforza, C; Mapelli, A; Galante, D; Moriconi, S; Ibba, T M; Ferraro, L; Ferrario, V F

    2010-10-01

    To assess sex- and age-related characteristics in standardized facial movements, 40 healthy adults (20 men, 20 women; aged 20-50 years) performed seven standardized facial movements (maximum smile; free smile; "surprise" with closed mouth; "surprise" with open mouth; eye closure; right- and left-side eye closures). The three-dimensional coordinates of 21 soft tissue facial landmarks were recorded by a motion analyser, their movements computed, and asymmetry indices calculated. Within each movement, total facial mobility was independent from sex and age (analysis of variance, p>0.05). Asymmetry indices of the eyes and mouth were similar in both sexes (p>0.05). Age significantly influenced eye and mouth asymmetries of the right-side eye closure, and eye asymmetry of the surprise movement. On average, the asymmetry indices of the symmetric movements were always lower than 8%, and most did not deviate from the expected value of 0 (Student's t). Larger asymmetries were found for the asymmetric eye closures (eyes, up to 50%, p<0.05; mouth, up to 30%, p<0.05 only in the 20-30-year-old subjects). In conclusion, sex and age had a limited influence on total facial motion and asymmetry in normal adult men and women. Copyright © 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. Research on facial expression simulation based on depth image

    NASA Astrophysics Data System (ADS)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. In this work, facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the 3D cartoon model's feature points are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are performed under the constraint of Bézier curves. In this way, the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method accurately simulates facial expressions. Finally, our method is compared with the previous method, and actual data show that implementation efficiency is greatly improved.

  10. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh, are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  11. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    PubMed

    Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui

    2015-01-01

    This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study with 286 subjects randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. In conclusion: 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) Facial index did not depend significantly on race; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between the golden ratio and facial evaluation score among the Malaysian population.

  13. Eyelid reanimation with gold weight implant and tendon sling suspension: evaluation of excursion and velocity using the FACIAL CLIMA system.

    PubMed

    Hontanilla, Bernardo; Marre, Diego

    2013-04-01

    This study aims to analyse the efficacy of static techniques, namely gold weight implant and tendon sling, in the reanimation of the paralytic eyelid. Upper eyelid rehabilitation in terms of excursion and blinking velocity is performed using the automatic motion capture system, FACIAL CLIMA. Seventy-four patients underwent a total of 101 procedures including 58 upper eyelid gold weight implants and 43 lower eyelid tendon suspension with 27 patients undergoing both procedures. The presence of lagophtalmos, eye dryness, corneal ulcer, epiphora and lower lid ptosis/ectropion was assessed preoperatively. The Wilcoxon signed-rank test was used to compare preoperative versus postoperative measurements of upper eyelid excursion and blinking velocity determined with FACIAL CLIMA. Significance was set at p <0.05. FACIAL CLIMA revealed significant improvement of eyelid excursion and velocity of blinking (p < 0.001). Eye dryness improved in 49 patients (90.7%) and corneal ulcer resolved without any further treatment in 12 (85.7%) of those with a gold weight inserted. Implant extrusion was observed in 8.6% of the cases. Of the patients with lower lid tendon suspension, correction of ptosis/ectropion and epiphora was achieved in 93.9% and 91.9% of cases, respectively. In eight patients (18.6%), further surgery was needed to adjust tendon tension. The paralytic upper and lower eyelid can be successfully managed with gold weight implant and tendon suspension. The FACIAL CLIMA system is a reliable method to quantify upper eyelid excursion and blinking velocity and to detect the exact position of the lower eyelid. Copyright © 2012 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Can You See Me Now? Visualizing Battlefield Facial Recognition Technology in 2035

    DTIC Science & Technology

    2010-04-01

    County Sheriff’s Department, use certain measurements such as the distance between eyes, the length of the nose, or the shape of the ears. 8 However...captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software...stream of data. High resolution video systems, such as those described below will be able to capture orders of magnitude more data in one video frame

  15. Objective grading of facial paralysis using Local Binary Patterns in video processing.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F

    2008-01-01

    This paper presents a novel framework for the objective measurement of facial paralysis in biomedical videos. Motion information in the horizontal and vertical directions and appearance features on the apex frames are extracted using Local Binary Patterns (LBP) in the temporal-spatial domain of each facial region. These features are temporally and spatially enhanced by the application of block schemes. A multi-resolution extension of uniform LBP is proposed to efficiently combine micro-patterns and large-scale patterns into a feature vector, which increases algorithmic robustness and reduces noise effects while retaining computational simplicity. The symmetry of facial movements is measured by the Resistor-Average Distance (RAD) between LBP features extracted from the two sides of the face. A Support Vector Machine (SVM) is applied to provide a quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) Scale. The proposed method is validated by experiments with 197 subject videos, which demonstrate its accuracy and efficiency.
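    The symmetry measure named in this abstract, the Resistor-Average Distance between two LBP histograms, can be sketched as below; `kl` and `resistor_average_distance` are illustrative names, and the epsilon smoothing is an assumption to keep the divergences finite:

```python
import numpy as np

def kl(p, q, eps=1e-10):
    """Kullback-Leibler divergence between two histograms (normalized to
    probabilities; eps smoothing avoids log(0) on empty bins)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def resistor_average_distance(p, q):
    """RAD(p, q): the two KL divergences combined like parallel resistors,
    giving a symmetric dissimilarity between the two feature histograms."""
    a, b = kl(p, q), kl(q, p)
    if a + b == 0.0:
        return 0.0  # identical histograms
    return (a * b) / (a + b)
```

    Applied to LBP histograms from the left and right halves of the face, a larger RAD would indicate greater movement asymmetry.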

  16. Traumatic facial nerve neuroma with facial palsy presenting in infancy.

    PubMed

    Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K

    2010-07-01

    To describe the management of a traumatic neuroma of the facial nerve in a child, with a review of the literature. Sixteen-month-old male subject. Radiological imaging and surgery. Facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a right facial nerve traumatic neuroma. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such a lesion is complex in any age group, but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.

  17. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages, including high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and exterior orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms was developed and tested both for detecting, identifying and tracking similar targets and for marker-less object motion capture. Evaluation of the algorithms shows high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  18. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    PubMed

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.

  19. Motion visualization and estimation for flapping wing systems

    NASA Astrophysics Data System (ADS)

    Hsu, Tzu-Sheng Shane; Fitzgerald, Timothy; Nguyen, Vincent Phuc; Patel, Trisha; Balachandran, Balakumar

    2017-04-01

    Studies of fluid-structure interactions associated with flexible structures such as flapping wings require the capture and quantification of large motions of bodies that may be opaque. As a case study, motion capture of a free flying Manduca sexta, also known as hawkmoth, is considered by using three synchronized high-speed cameras. A solid finite element (FE) representation is used as a reference body and successive snapshots in time of the displacement fields are reconstructed via an optimization procedure. One of the original aspects of this work is the formulation of an objective function and the use of shadow matching and strain-energy regularization. With this objective function, the authors penalize the projection differences between silhouettes of the captured images and the FE representation of the deformed body. The process and procedures undertaken to go from high-speed videography to motion estimation are discussed, and snapshots of representative results are presented. Finally, the captured free-flight motion is also characterized and quantified.

  20. Vision-based system identification technique for building structures using a motion capture system

    NASA Astrophysics Data System (ADS)

    Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon

    2015-11-01

    This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with its outstanding capabilities for dynamic response measurement, can provide gage-free measurement of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequency, mode shape, and damping ratio) of building structures are extracted from the dynamic displacement responses measured by the MCS, by converting the displacements to accelerations and conducting SI by frequency domain decomposition (FDD). A free-vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and showed results identical to those of the conventional SI method.
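    The displacement-to-acceleration conversion followed by FDD described in this record can be sketched as follows. This is a simplified illustration (no windowing, segment averaging, or mode-shape extraction), and the function name is mine:

```python
import numpy as np

def fdd_first_singular_values(displacements, fs):
    """Sketch of frequency domain decomposition (FDD): differentiate the
    (n_frames, n_channels) displacement record twice to get acceleration,
    form a cross-spectral matrix at each frequency line from the FFT, and
    return the first singular value spectrum, whose peaks indicate the
    structure's natural frequencies."""
    dt = 1.0 / fs
    # Finite-difference conversion from displacement to acceleration.
    accel = np.gradient(np.gradient(displacements, dt, axis=0), dt, axis=0)
    spectra = np.fft.rfft(accel, axis=0)      # (n_freq, n_channels)
    freqs = np.fft.rfftfreq(len(accel), dt)
    s1 = np.empty(len(freqs))
    for k in range(len(freqs)):
        # Rank-one cross-spectral matrix at this frequency line.
        G = np.outer(spectra[k], np.conj(spectra[k]))
        s1[k] = np.linalg.svd(G, compute_uv=False)[0].real
    return freqs, s1
```

    In a real pipeline the cross-spectral matrices would be averaged over windowed segments (Welch-style) before the singular value decomposition.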

  1. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  2. Full-motion video analysis for improved gender classification

    NASA Astrophysics Data System (ADS)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of a human subject has important applications in medicine, human factors, and human-computer interface systems. Previous work in motion analysis has used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video and motion capture provide higher-resolution temporal and spatial datasets for the analysis of dynamic motion. Work using motion capture data has been limited by small datasets collected in controlled environments. In this paper, we apply machine learning techniques to a new dataset with a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on a larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
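    The leave-one-out evaluation protocol reported above can be sketched as below. Since the paper's exact SVM configuration is not given in the abstract, a kernelized nearest-class-mean rule stands in as a simple nonlinear classifier for illustration; both function names are mine:

```python
import numpy as np

def loo_accuracy(X, y, fit_predict):
    """Leave-one-out cross-validation: train on all-but-one sample,
    predict the held-out one, and report the fraction classified correctly."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += fit_predict(X[mask], y[mask], X[i]) == y[i]
    return hits / len(X)

def rbf_nearest_mean(X_train, y_train, x, gamma=1.0):
    """Stand-in nonlinear classifier: assign x to the class whose training
    samples have the highest mean RBF-kernel similarity to it. (The paper
    uses a nonlinear SVM; this is a simpler nonlinear decision rule.)"""
    sims = np.exp(-gamma * np.sum((X_train - x) ** 2, axis=1))
    classes = np.unique(y_train)
    scores = [sims[y_train == c].mean() for c in classes]
    return classes[int(np.argmax(scores))]
```

    With 98 trials, the loop trains 98 models on 97 samples each, which matches the reported evaluation setup.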

  3. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-01-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…

  4. 3D facial landmarks: Inter-operator variability of manual annotation

    PubMed Central

    2014-01-01

    Background Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging and influences the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize variance. Method Facial scans from 36 unrelated voluntary blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows had the highest variation. We saw marginal variability with regard to intra-operator effects and portraits. Using a sparse set of landmarks (n=14) that captures the whole face, the mean dense point variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, where more leniently defined landmarks had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436

  5. Motion cues that make an impression: Predicting perceived personality by minimal motion information.

    PubMed

    Koppensteiner, Markus

    2013-11-01

    The current study presents a methodology for analyzing first impressions on the basis of minimal motion information. To test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on each speaker's forehead, one shoulder, and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior can be done on the basis of a small set of landmarks that appear to capture important parts of the relevant nonverbal information.

  6. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage.

    PubMed

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there has been little study of great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology and immunofluorescence assays were employed to investigate the function and mechanism. Apex nasi amesiality observation found that the FG group partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut and worse than facial nerve end-to-end anastomosis. The present study indicated that great auricular-facial nerve neurorrhaphy is a viable solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh.

  7. Facial approximation-from facial reconstruction synonym to face prediction paradigm.

    PubMed

    Stephan, Carl N

    2015-05-01

    Facial approximation was first proposed as a synonym for facial reconstruction in 1987 due to dissatisfaction with the connotations the latter label held. Since its debut, facial approximation's identity has morphed as anomalies in face prediction have accumulated. Now underpinned by differences in what problems are thought to count as legitimate, facial approximation can no longer be considered a synonym for, or subclass of, facial reconstruction. Instead, two competing paradigms of face prediction have emerged, namely: facial approximation and facial reconstruction. This paper shines a Kuhnian lens across the discipline of face prediction to comprehensively review these developments and outlines the distinguishing features between the two paradigms. © 2015 American Academy of Forensic Sciences.

  8. Study of human body: Kinematics and kinetics of a martial arts (Silat) performers using 3D-motion capture

    NASA Astrophysics Data System (ADS)

    Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi

    2015-04-01

    Interest in the study of human kinematics goes back very far in human history, driven by curiosity or by the need to understand the complexity of human body motion, and new and accurate information about human movement has become obtainable as advanced computing technology became available. Martial arts (silat) were chosen and multiple types of movement were studied. In this project, cutting-edge 3D motion capture technology was used to characterize and measure the motion of silat performers. The cameras detect markers (by infrared reflection) placed around the performer's body (24 markers in total), shown as dots in the computer software. The detected markers were analyzed using a kinematic-kinetic approach with time as the reference, and graphs of each marker's position, velocity and acceleration at time t (seconds) were plotted. From the information obtained, further parameters were determined mathematically, such as work done, momentum, and the center of mass of the body. These data can be used to develop more effective movements in martial arts, contributing to practitioners of the art. Future work could extend this project to, for example, the analysis of a martial arts competition.
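    The marker-based kinematic quantities described in this record (velocity, acceleration, center of mass) can be sketched with finite differences. Assigning one mass per marker is a simplification assumed for illustration (body-segment parameter models are normally used), and the function names are mine:

```python
import numpy as np

def marker_kinematics(positions, dt):
    """Finite-difference velocity and acceleration for one marker's
    (n_frames, 3) trajectory sampled at interval dt, as a mocap pipeline
    might compute them from tracked marker positions."""
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return velocity, acceleration

def center_of_mass(marker_positions, masses):
    """Mass-weighted mean of (n_markers, 3) marker positions for one frame.
    One mass per marker is an illustrative simplification."""
    m = np.asarray(masses, float)[:, None]
    return (m * marker_positions).sum(axis=0) / m.sum()
```

    From these, momentum per frame follows as mass times velocity, and work done can be estimated by integrating force along the displacement.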

  9. Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research

    PubMed Central

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  10. Markerless motion estimation for motion-compensated clinical brain imaging

    NASA Astrophysics Data System (ADS)

    Kyme, Andre Z.; Se, Stephen; Meikle, Steven R.; Fulton, Roger R.

    2018-05-01

    Motion-compensated brain imaging can dramatically reduce the artifacts and quantitative degradation associated with voluntary and involuntary subject head motion during positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT). However, motion-compensated imaging protocols are not in widespread clinical use for these modalities. A key reason for this seems to be the lack of a practical motion tracking technology that allows for smooth and reliable integration of motion-compensated imaging protocols in the clinical setting. We seek to address this problem by investigating the feasibility of a highly versatile optical motion tracking method for PET, SPECT and CT geometries. The method requires no attached markers, relying exclusively on the detection and matching of distinctive facial features. We studied the accuracy of this method in 16 volunteers in a mock imaging scenario by comparing the estimated motion with an accurate marker-based method used in applications such as image guided surgery. A range of techniques to optimize performance of the method were also studied. Our results show that the markerless motion tracking method is highly accurate (<2 mm discrepancy against a benchmarking system) on an ethnically diverse range of subjects and, moreover, exhibits lower jitter and estimation of motion over a greater range than some marker-based methods. Our optimization tests indicate that the basic pose estimation algorithm is very robust but generally benefits from rudimentary background masking. Further marginal gains in accuracy can be achieved by accounting for non-rigid motion of features. Efficiency gains can be achieved by capping the number of features used for pose estimation provided that these features adequately sample the range of head motion encountered in the study. These proof-of-principle data suggest that markerless motion tracking is amenable to motion-compensated brain imaging and holds

  11. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage

    PubMed Central

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    Background: The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there has been little study of great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Methods: Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology and immunofluorescence assays were employed to investigate the function and mechanism. Results: Apex nasi amesiality observation found that the FG group partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut and worse than facial nerve end-to-end anastomosis. Conclusions: The present study indicated that great auricular-facial nerve neurorrhaphy is a viable solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh. PMID:26550216

  12. (abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences

    NASA Technical Reports Server (NTRS)

    Scott, Kenneth C.

    1994-01-01

    We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort: developing the necessary computer graphics technology to synthesize a realistic image sequence of a person speaking selected speech sequences, and developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that contains the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme, and from this range we digitize the video frame representing the extreme of mouth motion/shape. We thus construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is recorded as the basis for synthesizing a matching video sequence; the speaker need not be the same as the one used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing; the image-sequence keyframes necessary for this processing are based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancing the face shape/phoneme model and independent control of facial features.
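    The morphing step can be illustrated in its simplest form as a linear cross-dissolve between two phoneme keyframes. Full morphing also warps geometry between corresponding mouth shapes before blending, which is omitted in this sketch; the function name is mine:

```python
import numpy as np

def cross_dissolve(key_a, key_b, n_frames):
    """Generate n_frames intermediate images between two keyframes by
    linear blending: frame(t) = (1 - t) * A + t * B for t in [0, 1].
    (Only the color-blend half of morphing; geometric warping omitted.)"""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        frames.append((1 - t) * key_a + t * key_b)
    return frames
```

    In the described system, the keyframes would be the database mouth-shape images for consecutive phonemes, and `n_frames` would follow from the phoneme timing recovered from the audio track.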

  13. Three-Dimensional Anthropometric Evaluation of Facial Morphology.

    PubMed

    Celebi, Ahmet Arif; Kau, Chung How; Ozaydin, Bunyamin

    2017-07-01

    The objectives of this study were to evaluate sexual dimorphism of facial features within Colombian and Mexican-American populations and to compare facial morphology by sex between these two populations. Three-dimensional facial images were acquired using the portable 3dMDface system, which captured 223 subjects from two population groups, Colombian (n = 131) and Mexican-American (n = 92). Each population was categorized into male and female groups for evaluation. All subjects were aged between 18 and 30 years and had no apparent facial anomalies. A total of 21 anthropometric landmarks were identified on the 3-dimensional face of each subject. The independent t test was used to analyze each data set obtained within each subgroup. The Colombian males showed significantly greater outercanthal width, eye fissure length, and orbitale distance than the Colombian females. The Colombian females had significantly smaller lip and mouth measurements than the Colombian males for all distances except upper vermillion height. The Mexican-American females had significantly smaller nasal measurements than the Mexican-American males, and the heights of the face, upper face, lower face, and mandible were all significantly less in the Mexican-American females. The intercanthal and outercanthal widths were significantly greater in the Mexican-American males and females, whereas the orbitale distance of both Mexican-American sexes was significantly smaller than that of the Colombian males and females. The Mexican-American group had significantly larger nose width and length of alare than the Colombian group for both sexes. Nasal tip protrusion and nose height were significantly smaller in the Colombian females than in the Mexican-American females. The face width was significantly greater in the Colombian males and females. Sexual dimorphism of facial features was presented in both the

  14. He throws like a girl (but only when he's sad): emotion affects sex-decoding of biological motion displays.

    PubMed

    Johnson, Kerri L; McKay, Lawrie S; Pollick, Frank E

    2011-05-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming the morphological confounding inherent in facial displays. In four studies, participants' judgments revealed gender stereotyping. Observers accurately perceived emotion from biological motion displays (Study 1), and this affected sex categorizations. Angry displays were overwhelmingly judged to be men; sad displays were judged to be women (Studies 2-4). Moreover, this pattern remained strong when stimuli were equated for velocity (Study 3). We argue that these results were obtained because perceivers applied gender stereotypes of emotion to infer sex category (Study 4). Implications for both vision sciences and social psychology are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    PubMed

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods to three-dimensional printed (3DP) fabrication methods using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3D-SPG and three CBCT images were imported to the 3DP, and six testing models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were used for all specimens, and all measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparison of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed significant differences among the three methods (p < 0.05); the interaction of methods versus fiducial markers also showed a significant difference (p < 0.05). The CBCT and facial moulage methods showed the greatest accuracy: 3DP models fabricated using 3D-SPG were less accurate than the CPM and the models fabricated using facial moulage and CBCT.
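
    The distance protocol above (one static reference marker measured to the other nine, then compared across specimens) can be sketched numerically. The fiducial coordinates and marker count below are hypothetical, not the study's data.

```python
import math

def distances_from_reference(markers, ref_index=0):
    """Distance from one static reference marker to every other marker."""
    ref = markers[ref_index]
    return [math.dist(ref, m)
            for i, m in enumerate(markers) if i != ref_index]

# Hypothetical fiducial coordinates (mm) on a control and a test model.
control = [(0, 0, 0), (30.0, 0, 0), (0, 40.0, 0), (0, 0, 50.0)]
test    = [(0, 0, 0), (30.2, 0, 0), (0, 39.8, 0), (0, 0, 50.1)]

# Per-marker measurement error between specimen and control.
errors = [abs(a - b) for a, b in zip(distances_from_reference(control),
                                     distances_from_reference(test))]
print(errors)
```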

  16. Automatic image assessment from facial attributes

    NASA Astrophysics Data System (ADS)

    Ptucha, Raymond; Kloosterman, David; Mittelstaedt, Brian; Loui, Alexander

    2013-03-01

    Personal consumer photography collections often contain photos captured by numerous devices, stored both locally and via online services. The task of gathering, organizing, and assembling still and video assets in preparation for sharing with others can be quite challenging. Current commercial photobook applications are largely manual, requiring significant user interaction. To assist the consumer in organizing these assets, we propose an automatic method to assign a fitness score to each asset, whereby the top-scoring assets are used for product creation. Our method uses cues extracted from analyzing pixel data, metadata embedded in the file, as well as ancillary tags or online comments. When a face occurs in an image, its features have a dominating influence on both the aesthetic and compositional properties of the displayed image. As such, this paper emphasizes the contribution faces make to the overall fitness score of an image. To understand consumer preference, we conducted a psychophysical study that spanned 27 judges, 5,598 faces, and 2,550 images. Preferences on a per-face and per-image basis were independently gathered to train our classifiers. We describe how to use machine learning techniques to merge differing facial attributes into a single classifier. Our novel methods of facial weighting, fusion of facial attributes, and dimensionality reduction produce state-of-the-art results suitable for commercial applications.
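
    The paper's facial weighting and attribute fusion come from trained classifiers, but the core idea of letting larger faces dominate an image's fitness score can be sketched as a weighted average. The weights, blend factor, and neutral prior below are illustrative assumptions, not the authors' model.

```python
def image_fitness(faces, base_score=0.5):
    """Fuse per-face attribute scores into one image fitness score.

    Each face is (area_fraction, attribute_score in [0, 1]); larger
    faces dominate, echoing the paper's facial-weighting idea.
    """
    if not faces:
        return base_score  # no faces: fall back to a neutral prior
    total_w = sum(area for area, _ in faces)
    weighted = sum(area * score for area, score in faces) / total_w
    # Blend facial evidence with the prior covering non-facial content.
    return 0.8 * weighted + 0.2 * base_score

# One large high-scoring face and one small low-scoring face.
score = image_fitness([(0.30, 0.9), (0.05, 0.4)])
print(round(score, 4))
```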

  17. Easy facial analysis using the facial golden mask.

    PubMed

    Kim, Yong-Ha

    2007-05-01

    For over 2000 years, many artists and scientists have tried to understand or quantify the form of the perfect, ideal, or most beautiful face, both in art and in life. A mathematical relationship has been consistently and repeatedly reported to be present in beautiful things: the golden ratio. It is a ratio of 1.618:1 that seems to appear recurrently in beautiful things in nature as well as in other things that are seen as beautiful. Dr. Marquardt constructed the facial golden mask, which contains all of the one-dimensional and two-dimensional geometric golden elements formed from the golden ratio. The purpose of this study was to evaluate the usefulness of the facial golden mask. In 40 cases, the authors applied the mask to preoperative and postoperative photographs and scored each photograph on a 1 to 5 scale from the perspective of their personal aesthetic views. The score was lower when the facial deformity was severe and higher when the face was attractive. When the average scores of mask-applied and non-applied photographs were compared using a nonparametric test, the difference was not statistically significant (P > 0.05), implying that the facial golden mask may be used as an analytical tool. The facial golden mask is easy to apply, inexpensive, and relatively objective; the authors therefore introduce it as a useful facial analysis tool.
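
    A minimal sketch of the kind of proportion check the golden mask embodies: compare a measured facial ratio against the golden ratio 1.618:1. The measurements below are hypothetical.

```python
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, about 1.618

def golden_deviation(length_a, length_b):
    """Relative deviation of the ratio a:b from the golden ratio."""
    ratio = length_a / length_b
    return abs(ratio - PHI) / PHI

# Hypothetical measurements (mm): facial height vs. facial width.
dev = golden_deviation(194.2, 120.0)
print(round(dev, 4))
```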

  18. Facial expressions and the evolution of the speech rhythm.

    PubMed

    Ghazanfar, Asif A; Takahashi, Daniel Y

    2014-06-01

    In primates, different vocalizations are produced, at least in part, by making different facial expressions. Not surprisingly, humans, apes, and monkeys all recognize the correspondence between vocalizations and the facial postures associated with them. However, one major dissimilarity between monkey vocalizations and human speech is that, in the latter, the acoustic output and associated movements of the mouth are both rhythmic (in the 3- to 8-Hz range) and tightly correlated, whereas monkey vocalizations have a similar acoustic rhythmicity but lack the concomitant rhythmic facial motion. This raises the question of how we evolved from a presumptive ancestral acoustic-only vocal rhythm to the one that is audiovisual with improved perceptual sensitivity. According to one hypothesis, this bisensory speech rhythm evolved through the rhythmic facial expressions of ancestral primates. If this hypothesis has any validity, we expect that the extant nonhuman primates produce at least some facial expressions with a speech-like rhythm in the 3- to 8-Hz frequency range. Lip smacking, an affiliative signal observed in many genera of primates, satisfies this criterion. We review a series of studies using developmental, x-ray cineradiographic, EMG, and perceptual approaches with macaque monkeys producing lip smacks to further investigate this hypothesis. We then explore its putative neural basis and remark on important differences between lip smacking and speech production. Overall, the data support the hypothesis that lip smacking may have been an ancestral expression that was linked to vocal output to produce the original rhythmic audiovisual speech-like utterances in the human lineage.
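
    The 3- to 8-Hz criterion can be tested on any mouth-motion trace by checking where its dominant spectral peak falls. A sketch with a synthetic trace; a naive DFT is used only for self-containment (real analyses would use an FFT):

```python
import math

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) of a real signal via a naive DFT."""
    n = len(signal)
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):  # skip DC, stop below Nyquist
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n

# Synthetic mouth-opening trace: a 5 Hz oscillation sampled at 100 Hz.
fs = 100
trace = [math.sin(2 * math.pi * 5 * t / fs) for t in range(fs)]
f0 = dominant_frequency(trace, fs)
print(f0, 3 <= f0 <= 8)  # speech-like if inside the 3-8 Hz band
```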

  19. The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.

    PubMed

    Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S

    2018-04-01

    This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in usage of cosmetic facial procedures among facial plastic surgeons are similar to those of nonsurgeons. The study design was an anonymous, five-question, Internet survey distributed via email, set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported history of cosmetic facial plastic surgery or minimally invasive procedures was recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had a surgical cosmetic facial procedure and 75% had at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and delivery of care. This study is a first step in understanding the use of facial plastic procedures among facial plastic surgeons.

  20. A study on validating KinectV2 in comparison of Vicon system as a motion capture system for using in Health Engineering in industry

    NASA Astrophysics Data System (ADS)

    Jebeli, Mahvash; Bilesan, Alireza; Arshi, Ahmadreza

    2017-06-01

    The currently available commercial motion capture systems are constrained by space requirements and thus pose difficulties when used to develop kinematic descriptions of human movements within existing manufacturing and production cells. The Kinect sensor does not share these limitations, but it is not as accurate. The proposition made in this article is to adopt the Kinect sensor to facilitate the implementation of Health Engineering concepts in industrial environments. This article evaluates the accuracy of the Kinect sensor in providing three-dimensional kinematic data; the sensor is thus utilized to assist in modeling and simulation of worker performance within an industrial cell. For this purpose, Kinect 3D data were compared to those of a Vicon motion capture system in a gait analysis laboratory. Results indicated that the Kinect sensor exhibited a coefficient of determination of 0.9996 on the depth axis, 0.9849 along the horizontal axis, and 0.2767 on the vertical axis. The results demonstrate the competency of the Kinect sensor for use in industrial environments.
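
    The reported agreement metric, the coefficient of determination (R²) of Kinect trajectories against Vicon ground truth, can be computed per axis as below. The trajectory values are hypothetical.

```python
def r_squared(reference, measured):
    """Coefficient of determination of `measured` against `reference`."""
    n = len(reference)
    mean_ref = sum(reference) / n
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)      # total variance
    ss_res = sum((r - m) ** 2 for r, m in zip(reference, measured))  # residual
    return 1 - ss_res / ss_tot

vicon  = [0.0, 1.0, 2.0, 3.0, 4.0]   # hypothetical Vicon positions (m), one axis
kinect = [0.1, 0.9, 2.1, 2.9, 4.1]   # hypothetical Kinect positions (m), same axis
print(round(r_squared(vicon, kinect), 4))
```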

  1. Restoration of motion blurred images

    NASA Astrophysics Data System (ADS)

    Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-08-01

    Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, or relative motion between the camera and objects. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of a captured image, first to estimate the degradation parameters and then to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of accuracy of image restoration given by an objective criterion.
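
    For a horizontal box blur of length L over N samples, the frequency spectrum has nulls at multiples of N/L, which is the kind of cue spectrum-based estimators exploit to recover the blur parameter. A 1-D sketch with a synthetic point spread function (naive DFT for self-containment; this is not the paper's implementation):

```python
import math

def dft_magnitude(x):
    """Magnitudes of the DFT of a real sequence (naive O(n^2))."""
    n = len(x)
    mags = []
    for k in range(n):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def estimate_blur_length(mags):
    """Blur length from the first spectral null: L ~ N / k_null."""
    n = len(mags)
    for k in range(1, n // 2 + 1):
        if mags[k] < 1e-9:
            return round(n / k)
    return None

# Synthetic horizontal motion-blur PSF: a box of length 8 in 64 samples.
n, blur_len = 64, 8
psf_row = [1.0 / blur_len if t < blur_len else 0.0 for t in range(n)]
estimated = estimate_blur_length(dft_magnitude(psf_row))
print(estimated)
```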

  2. Facial neuropathy with imaging enhancement of the facial nerve: a case report

    PubMed Central

    Mumtaz, Sehreen; Jensen, Matthew B

    2014-01-01

    A young woman developed unilateral facial neuropathy 2 weeks after a motor vehicle collision involving fractures of the skull and mandible. MRI showed contrast enhancement of the facial nerve. We review the literature describing facial neuropathy after trauma and facial nerve enhancement patterns with different causes of facial neuropathy. PMID:25574155

  3. A Multivariate Analysis of Unilateral Cleft Lip and Palate Facial Skeletal Morphology.

    PubMed

    Starbuck, John M; Ghoneima, Ahmed; Kula, Katherine

    2015-07-01

    Unilateral cleft lip and palate (UCLP) occurs when the maxillary and nasal facial prominences fail to fuse correctly during development, resulting in a palatal cleft and clefted soft and hard tissues of the dentoalveolus. The UCLP deformity may compromise an individual's ability to eat, chew, and speak. In this retrospective cross-sectional study, cone beam computed tomography (CBCT) images of 7-17-year-old individuals born with UCLP (n = 24) and age- and sex-matched controls (n = 24) were assessed. Coordinate values of three-dimensional anatomical landmarks (n = 32) were recorded from each CBCT image. Data were evaluated using principal coordinates analysis (PCOORD) and Euclidean distance matrix analysis (EDMA). Approximately 40% of morphometric variation is captured by PCOORD axes 1-3, and the negative and positive ends of each axis are associated with specific patterns of morphological differences. Approximately 36% of facial skeletal measures significantly differ by confidence interval testing (α = 0.10) between samples. Although significant form differences occur across the facial skeleton, strong patterns of morphological differences were localized to the lateral and superoinferior aspects of the nasal aperture, particularly on the clefted side of the face. The UCLP deformity strongly influences facial skeletal morphology of the midface and oronasal facial regions, and to a lesser extent the upper and lower facial skeletons. The pattern of strong morphological differences in the oronasal region combined with differences across the facial complex suggests that craniofacial bones are integrated and covary, despite influences from the congenital cleft.
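
    EDMA compares forms through ratios of corresponding inter-landmark distances, with ratios far from 1 flagging localized shape differences. A minimal sketch with hypothetical 3-D landmark configurations (not the study's data):

```python
import math

def form_matrix(landmarks):
    """All pairwise inter-landmark distances (the EDMA form matrix)."""
    n = len(landmarks)
    return [math.dist(landmarks[i], landmarks[j])
            for i in range(n) for j in range(i + 1, n)]

def form_difference(sample_a, sample_b):
    """Ratios of corresponding distances; values far from 1 flag differences."""
    return [a / b for a, b in zip(form_matrix(sample_a), form_matrix(sample_b))]

# Hypothetical 3-D landmark configurations (mm).
uclp    = [(0, 0, 0), (34, 0, 0), (17, 22, 5)]
control = [(0, 0, 0), (32, 0, 0), (16, 24, 4)]
ratios = form_difference(uclp, control)
print([round(r, 3) for r in ratios])
```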

  4. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  5. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    NASA Astrophysics Data System (ADS)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
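
    The creation of intermediate expressions along the emotion wheel can be sketched as angle-weighted blending of FAP displacement vectors between neighboring basic expressions. The angles and vectors below are illustrative assumptions, not MPEG-4 values.

```python
def interpolate_expression(angle, wheel):
    """Blend the FAP displacement vectors of the two basic expressions
    neighboring `angle` on the emotion wheel, weighting by angular proximity."""
    wheel = sorted(wheel)  # (angle_deg, fap_vector) pairs, sorted by angle
    for (a0, v0), (a1, v1) in zip(wheel, wheel[1:]):
        if a0 <= angle <= a1:
            w = (angle - a0) / (a1 - a0)
            return [(1 - w) * x0 + w * x1 for x0, x1 in zip(v0, v1)]
    raise ValueError("angle outside the covered arc")

# Hypothetical FAP displacements for joy (at 40 deg) and surprise (at 100 deg).
wheel = [(40.0, [1.0, 0.2]), (100.0, [0.4, 1.0])]
blend = interpolate_expression(70.0, wheel)  # halfway between the two
print([round(x, 3) for x in blend])
```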

  6. Measuring Facial Movement

    ERIC Educational Resources Information Center

    Ekman, Paul; Friesen, Wallace V.

    1976-01-01

    The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)

  7. Effect of a Facial Muscle Exercise Device on Facial Rejuvenation

    PubMed Central

    Hwang, Ui-jae; Kwon, Oh-yun; Jung, Sung-hoon; Ahn, Sun-hee; Gwak, Gyeong-tae

    2018-01-01

    Abstract Background The efficacy of facial muscle exercises (FMEs) for facial rejuvenation is controversial. In the majority of previous studies, nonquantitative assessment tools were used to assess the benefits of FMEs. Objectives This study examined the effectiveness of FMEs using a Pao (MTG, Nagoya, Japan) device to quantify facial rejuvenation. Methods Fifty females were asked to perform FMEs using a Pao device for 30 seconds twice a day for 8 weeks. Facial muscle thickness and cross-sectional area were measured sonographically. Facial surface distance, surface area, and volumes were determined using a laser scanning system before and after FME. Facial muscle thickness, cross-sectional area, midfacial surface distances, jawline surface distance, and lower facial surface area and volume were compared bilaterally before and after FME using a paired Student t test. Results The cross-sectional areas of the zygomaticus major and digastric muscles increased significantly (right: P < 0.001, left: P = 0.015), while the midfacial surface distances in the middle (right: P = 0.005, left: P = 0.047) and lower (right: P = 0.028, left: P = 0.019) planes as well as the jawline surface distances (right: P = 0.004, left: P = 0.003) decreased significantly after FME using the Pao device. The lower facial surface areas (right: P = 0.005, left: P = 0.006) and volumes (right: P = 0.001, left: P = 0.002) were also significantly reduced after FME using the Pao device. Conclusions FME using the Pao device can increase facial muscle thickness and cross-sectional area, thus contributing to facial rejuvenation. Level of Evidence: 4 PMID:29365050

  8. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions

    PubMed Central

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance from each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
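
    The three statistical features named above (mean, variance, and root mean square) are computed over the marker-to-face-center distances; a minimal sketch with hypothetical per-frame distances:

```python
import math

def marker_features(distances):
    """Mean, variance, and RMS of marker-to-face-center distances."""
    n = len(distances)
    mean = sum(distances) / n
    variance = sum((d - mean) ** 2 for d in distances) / n
    rms = math.sqrt(sum(d * d for d in distances) / n)
    return mean, variance, rms

# Hypothetical distances (pixels) of eight virtual markers from the face center.
frame = [52.0, 48.0, 61.0, 59.0, 44.0, 47.0, 63.0, 58.0]
mean, variance, rms = marker_features(frame)
print(mean, variance, round(rms, 3))
```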

  10. Facial Fractures.

    PubMed

    Ghosh, Rajarshi; Gopalkrishnan, Kulandaswamy

    2018-06-01

    The aim of this study is to retrospectively analyze the incidence of facial fractures along with age, gender predilection, etiology, commonest site, associated dental injuries, and any complications of patients operated in the Craniofacial Unit of SDM College of Dental Sciences and Hospital. This retrospective study was conducted at the Department of OMFS, SDM College of Dental Sciences, Dharwad from January 2003 to December 2013. Data were recorded for the cause of injury, age and gender distribution, frequency and type of injury, localization and frequency of soft tissue injuries, dentoalveolar trauma, facial bone fractures, complications, concomitant injuries, and different treatment protocols. All data were analyzed using the chi-squared test. A total of 1146 patients reported at our unit with facial fractures during these 10 years. Males accounted for a higher frequency of facial fractures (88.8%). The mandible was the commonest bone to be fractured among all the facial bones (71.2%). Maxillary central incisors were the most common teeth to be injured (33.8%) and avulsion was the most common type of injury (44.6%). The commonest postoperative complication was plate infection (11%) leading to plate removal. Other injuries associated with facial fractures were rib fractures, head injuries, and upper and lower limb fractures; among these, rib fractures were seen most frequently (21.6%). This study was performed to compare the different etiologic factors leading to diverse facial fracture patterns. Statistical analysis of these records clarified the relationship of facial fractures to gender, age, and associated comorbidities.

  11. Live Speech Driven Head-and-Eye Motion Generators.

    PubMed

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian mixture models and a gradient descent optimization algorithm are employed to generate head motion from speech features; 2) a nonlinear dynamic canonical correlation analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion, while a log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.
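
    The paper models involuntary blinks with a log-normal distribution; sampling inter-blink intervals from such a model is straightforward with the standard library. The mu and sigma values here are illustrative, not the paper's fitted parameters.

```python
import random

def sample_blink_intervals(n, mu=1.0, sigma=0.5, seed=42):
    """Draw n involuntary inter-blink intervals (seconds) from a
    log-normal model; mu and sigma parameterize the underlying normal."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

intervals = sample_blink_intervals(5)
print([round(x, 2) for x in intervals])
```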

  12. Determining Underground Mining Work Postures Using Motion Capture and Digital Human Modeling

    PubMed Central

    Lutz, Timothy J.; DuCarme, Joseph P.; Smith, Adam K.; Ambrose, Dean

    2017-01-01

    According to Mine Safety and Health Administration (MSHA) data, during 2008–2012 in the U.S., there were, on average, 65 lost-time accidents per year during routine mining and maintenance activities involving remote-controlled continuous mining machines (CMMs). To address this problem, the National Institute for Occupational Safety and Health (NIOSH) is currently investigating the implementation and integration of existing and emerging technologies in underground mines to provide automated, intelligent proximity detection (iPD) devices on CMMs. One research goal of NIOSH is to enhance the proximity detection system by improving its capability to track and determine identity, position, and posture of multiple workers, and to selectively disable machine functions to keep workers and machine operators safe. Posture of the miner can determine the safe working distance from a CMM by way of the variation in the proximity detection magnetic field. NIOSH collected and analyzed motion capture data and calculated joint angles of the back, hips, and knees from various postures on 12 human subjects. The results of the analysis suggest that lower body postures can be identified by observing the changes in joint angles of the right hip, left hip, right knee, and left knee. PMID:28626796
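
    Joint angles like those computed for the hips and knees reduce, per joint, to the angle at the middle of three 3-D marker positions. A minimal sketch; the marker coordinates are hypothetical.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3-D markers a-b-c."""
    u = [ai - bi for ai, bi in zip(a, b)]
    v = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    cos_t = dot / (math.dist(a, b) * math.dist(c, b))
    # Clamp against floating-point overshoot before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Hypothetical hip-knee-ankle markers (m) for a crouched posture.
hip, knee, ankle = (0.0, 0.0, 0.9), (0.1, 0.0, 0.45), (0.5, 0.0, 0.4)
angle = joint_angle(hip, knee, ankle)
print(round(angle, 1))
```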

  13. Development of a novel visuomotor integration paradigm by integrating a virtual environment with mobile eye-tracking and motion-capture systems

    PubMed Central

    Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.

    2018-01-01

    Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370

  14. Evolution of the 3-dimensional video system for facial motion analysis: ten years' experiences and recent developments.

    PubMed

    Tzou, Chieh-Han John; Pona, Igor; Placheta, Eva; Hold, Alina; Michaelidou, Maria; Artner, Nicole; Kropatsch, Walter; Gerber, Hans; Frey, Manfred

    2012-08-01

    Since the implementation of the computer-aided system for assessing facial palsy in 1999 by Frey et al (Plast Reconstr Surg. 1999;104:2032-2039), no similar system that can make an objective, three-dimensional, quantitative analysis of facial movements has been marketed. This system has been in routine use since its launch, and it has proven to be reliable, clinically applicable, and therapeutically accurate. With the cooperation of international partners, more than 200 patients were analyzed. Recent developments in computer vision, mostly in the area of generative face models applying active appearance models (and extensions), optical flow, and video tracking, have been successfully incorporated to automate the prototype system. Further market-ready development and a business partner will be needed to enable production of this system to enhance clinical methodology in diagnostic and prognostic accuracy as a personalized therapy concept, leading to better results and higher quality of life for patients with impaired facial function.

  15. A slowly moving foreground can capture an observer's self-motion--a report of a new motion illusion: inverted vection.

    PubMed

    Nakamura, S; Shimojo, S

    2000-01-01

    We investigated interactions between foreground and background stimuli during visually induced perception of self-motion (vection) by using a stimulus composed of orthogonally moving random-dot patterns. The results indicated that, when the foreground moves with a slower speed, a self-motion sensation with a component in the same direction as the foreground is induced. We named this novel component of self-motion perception 'inverted vection'. The robustness of inverted vection was confirmed using various measures of self-motion sensation and under different stimulus conditions. The mechanism underlying inverted vection is discussed with regard to potentially relevant factors, such as relative motion between the foreground and background, and the interaction between the mis-registration of eye-movement information and self-motion perception.

  16. FILLERS-Q: an instrument for assessing patient experiences after treatment with facial injectable soft tissue fillers.

    PubMed

    Sclafani, Anthony P; Pizzi, Laura; Jutkowitz, Eric; Mueller, Nancy; Jung, Matthew

    2010-08-01

    Patient-reported outcomes data are limited after injectable soft tissue filler treatment. Patient-reported outcome measures (PROMs) are becoming integral to medical practices in other specialties and will become so as well in facial plastic surgery. The obvious differences in types of disorders treated and the outcomes of primary importance seen between general medical/surgical and facial plastic surgery practices make institution of standard outcomes studies difficult in facial plastic surgery. However, understanding the patient's experience and satisfaction with treatment is essential to continue to provide excellent care to facial aesthetic patients. This article describes use of a new survey instrument, Facial Injectables: Longevity, Late and Early Reactions and Satisfaction Questionnaire (FILLERS-Q), in assessing patient response to facial injections of soft tissue fillers. FILLERS-Q is a 43-item questionnaire that captures patient demographics (4 items), patient satisfaction with treatment (10 items), procedure-related events (3 to 7 items), impact on relationships (9 to 15 items), and economic considerations related to dermal filler treatment (3 to 7 items). The results provide a "snapshot" of patients treated in an individual surgeon's practice. (c) Thieme Medical Publishers.

  17. Evaluation of performance, acceptance, and compliance of an auto-injector in healthy and rheumatoid arthritic subjects measured by a motion capture system.

    PubMed

    Xiao, Xiao; Li, Wei; Clawson, Corbin; Karvani, David; Sondag, Perceval; Hahn, James K

    2018-01-01

    The study aimed to develop a motion capture system that can track, visualize, and analyze the entire performance of self-injection with the auto-injector. Each of nine healthy subjects and 29 rheumatoid arthritic (RA) patients with different degrees of hand disability performed two simulated injections into an injection pad while six degrees of freedom (DOF) motions of the auto-injector and the injection pad were captured. We quantitatively measured the performance of the injection by calculating needle displacement from the motion trajectories. The max, mean, and SD of needle displacement were analyzed. Assessments of device acceptance and usability were evaluated by a survey questionnaire and independent observations of compliance with the device instruction for use (IFU). A total of 80 simulated injections were performed. Our results showed a similar level of performance among all the subjects with slightly larger, but not statistically significant, needle displacement in the RA group. In particular, no significant effects regarding previous experience in self-injection, grip method, pain in hand, and Cochin score in the RA group were found to have an impact on the mean needle displacement. Moreover, the analysis of needle displacement for different durations of injections indicated that most of the subjects reached their personal maximum displacement in 15 seconds and remained steady or exhibited a small amount of increase from 15 to 60 seconds. Device acceptance was high for most of the questions (i.e., >4; >80%) based on a 0-5-point scale or percentage of acceptance. The overall compliance with the device IFU was high for the first injection (96.05%) and reached 98.02% for the second injection. We demonstrated the feasibility of tracking the motions of injection to measure the performance of simulated self-injection. The comparisons of needle displacement showed that even RA patients with severe hand disability could properly perform self-injection with this auto-injector.
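
    The displacement statistics described above (max, mean, and SD of needle displacement over a motion trajectory) can be sketched as follows; the synthetic trajectory, sampling rate, and the definition of displacement relative to the first sample are illustrative assumptions, not the study's exact protocol.

```python
import numpy as np

# Hypothetical needle-tip positions (mm) over 60 s at 10 Hz, standing in for
# the 6-DOF motion trajectories captured in the study.
rng = np.random.default_rng(0)
positions = np.cumsum(rng.normal(0, 0.05, size=(600, 3)), axis=0)

# Displacement of each sample from the insertion point (first sample).
displacement = np.linalg.norm(positions - positions[0], axis=1)

stats = {
    "max": float(displacement.max()),
    "mean": float(displacement.mean()),
    "sd": float(displacement.std(ddof=1)),
}
print(stats)
```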

  18. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method to realize automatic contour extraction of facial features such as the eyebrows, eyes, and mouth in frontal face image sequences with various facial expressions. Because Snakes, one of the best-known contour extraction methods, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape, and determine the elastic energy acquired from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying dynamic programming, we determine the contour position where the total of the elastic energy and the image energy becomes minimum. Employing frontal facial image sequences captured at 1/30 s intervals, changing from neutral to one of six typical facial expressions, obtained from 20 subjects, we evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.
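
    The energy-minimization step described above can be sketched with a small dynamic program: each control point may shift by one of several candidate offsets, and we minimize the sum of an image energy (unary) plus an elastic energy (pairwise). The synthetic energies and quadratic elastic penalty are illustrative stand-ins for the paper's elastic contour model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_offsets = 8, 5
image_energy = rng.random((n_points, n_offsets))  # brightness-based cost (synthetic)
offsets = np.arange(n_offsets, dtype=float)

def elastic(a, b):
    # quadratic penalty for neighbouring control points shifting differently
    return 0.5 * (a - b) ** 2

# DP tables: cost[i, j] = best total cost with point i at offset j.
cost = np.zeros((n_points, n_offsets))
back = np.zeros((n_points, n_offsets), dtype=int)
cost[0] = image_energy[0]
for i in range(1, n_points):
    for j in range(n_offsets):
        trans = cost[i - 1] + elastic(offsets, offsets[j])
        back[i, j] = int(np.argmin(trans))
        cost[i, j] = trans[back[i, j]] + image_energy[i, j]

# Backtrack the minimum-energy assignment of offsets.
best = [int(np.argmin(cost[-1]))]
for i in range(n_points - 1, 0, -1):
    best.append(int(back[i, best[-1]]))
best.reverse()
print(best)
```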

  19. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    PubMed

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" (3dMD) and a "structured light" facial scanner (FaceScan) separately. Registration based on the iterative closest point (ICP) algorithm was executed to align the test models with the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The respective 3D accuracies of the stereophotography and structured light facial scanners for facial deformities were 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of the different facial partitions was inconsistent; the middle face showed the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirements for oral clinic use.
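
    The "3D error" indicator described above (surface deviation after ICP registration) can be approximated as a mean nearest-neighbour distance between the overlaid meshes. The synthetic vertices and brute-force search below are a hedged sketch, not Geomagic Studio's actual computation; real pipelines would run ICP first and use a KD-tree.

```python
import numpy as np

rng = np.random.default_rng(2)
reference = rng.random((500, 3))                          # reference scan vertices (mm)
test = reference[:200] + rng.normal(0, 0.01, (200, 3))    # noisy, already-aligned test vertices

# Pairwise distances (n_test x n_ref), then nearest reference vertex per test vertex.
d = np.linalg.norm(test[:, None, :] - reference[None, :, :], axis=2)
nearest = d.min(axis=1)

mean_3d_error = float(nearest.mean())   # the "3D error"-style summary
print(round(mean_3d_error, 4))
```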

  20. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion-sensing technology, as on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from unwieldy game controllers. Moreover, for communication between humans and computers, video-based HCI is crucial because it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the level of accuracy is highly dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports titles, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions of specific performances (e.g., a golf swing or walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a human sub-body part and each row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting the corresponding columns. Unlike the low-level feature values of video human motion, the 3D human motion-capture data matrix does not contain pixel values, but is closer to the human level of semantics.
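
    The matrix layout described above makes sub-body-part extraction a simple column-slicing operation. The channel counts and column indices below are hypothetical, for illustration only, not VICON's actual channel assignment.

```python
import numpy as np

# Frames x channels matrix, as described above: rows are time frames,
# columns are degrees of freedom belonging to sub-body parts.
n_frames, n_channels = 120, 12       # e.g. 4 joints x 3 rotation channels (assumed)
motion = np.random.default_rng(3).random((n_frames, n_channels))

right_arm_cols = [3, 4, 5]           # hypothetical channel assignment
right_arm_motion = motion[:, right_arm_cols]
print(right_arm_motion.shape)
```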

  1. Self-Motion and the Shaping of Sensory Signals

    PubMed Central

    Jenks, Robert A.; Vaziri, Ashkan; Boloori, Ali-Reza

    2010-01-01

    Sensory systems must form stable representations of the external environment in the presence of self-induced variations in sensory signals. It is also possible that the variations themselves may provide useful information about self-motion relative to the external environment. Rats have been shown to be capable of fine texture discrimination and object localization based on palpation by facial vibrissae, or whiskers, alone. During behavior, the facial vibrissae brush against objects and undergo deflection patterns that are influenced both by the surface features of the objects and by the animal's own motion. The extent to which behavioral variability shapes the sensory inputs to this pathway is unknown. Using high-resolution, high-speed videography of unconstrained rats running on a linear track, we measured several behavioral variables including running speed, distance to the track wall, and head angle, as well as the proximal vibrissa deflections while the distal portions of the vibrissae were in contact with periodic gratings. The measured deflections, which serve as the sensory input to this pathway, were strongly modulated both by the properties of the gratings and the trial-to-trial variations in head-motion and locomotion. Using presumed internal knowledge of locomotion and head-rotation, gratings were classified using short-duration trials (<150 ms) from high-frequency vibrissa motion, and the continuous trajectory of the animal's own motion through the track was decoded from the low frequency content. Together, these results suggest that rats have simultaneous access to low- and high-frequency information about their environment, which has been shown to be parsed into different processing streams that are likely important for accurate object localization and texture coding. PMID:20164407
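
    The parsing of vibrissa deflections into low- and high-frequency streams could be sketched with a simple moving-average split; the sampling rate, cutoff, and filter choice below are illustrative assumptions, not the paper's analysis pipeline.

```python
import numpy as np

fs = 1000.0                                   # high-speed video rate (assumed), Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic deflection: slow locomotion-scale drift plus fast grating-induced vibration.
signal = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 120 * t)

win = 25                                      # ~25 ms moving-average low-pass
kernel = np.ones(win) / win
low = np.convolve(signal, kernel, mode='same')   # low-frequency trajectory content
high = signal - low                              # high-frequency vibration content
print(low.std() > high.std())
```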

  2. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  3. Genetic Factors That Increase Male Facial Masculinity Decrease Facial Attractiveness of Female Relatives

    PubMed Central

    Lee, Anthony J.; Mitchem, Dorian G.; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.

    2014-01-01

    For women, choosing a facially masculine man as a mate is thought to confer genetic benefits to offspring. Crucial assumptions of this hypothesis have not been adequately tested. It has been assumed that variation in facial masculinity is due to genetic variation and that genetic factors that increase male facial masculinity do not increase facial masculinity in female relatives. We objectively quantified the facial masculinity in photos of identical (n = 411) and nonidentical (n = 782) twins and their siblings (n = 106). Using biometrical modeling, we found that much of the variation in male and female facial masculinity is genetic. However, we also found that masculinity of male faces is unrelated to their attractiveness and that facially masculine men tend to have facially masculine, less-attractive sisters. These findings challenge the idea that facially masculine men provide net genetic benefits to offspring and call into question this popular theoretical framework. PMID:24379153

  4. Genetic factors that increase male facial masculinity decrease facial attractiveness of female relatives.

    PubMed

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2014-02-01

    For women, choosing a facially masculine man as a mate is thought to confer genetic benefits to offspring. Crucial assumptions of this hypothesis have not been adequately tested. It has been assumed that variation in facial masculinity is due to genetic variation and that genetic factors that increase male facial masculinity do not increase facial masculinity in female relatives. We objectively quantified the facial masculinity in photos of identical (n = 411) and nonidentical (n = 782) twins and their siblings (n = 106). Using biometrical modeling, we found that much of the variation in male and female facial masculinity is genetic. However, we also found that masculinity of male faces is unrelated to their attractiveness and that facially masculine men tend to have facially masculine, less-attractive sisters. These findings challenge the idea that facially masculine men provide net genetic benefits to offspring and call into question this popular theoretical framework.

  5. Apparent diffusive motion of centrin foci in living cells: implications for diffusion-based motion in centriole duplication

    NASA Astrophysics Data System (ADS)

    Rafelski, Susanne M.; Keller, Lani C.; Alberts, Jonathan B.; Marshall, Wallace F.

    2011-04-01

    The degree to which diffusion contributes to positioning cellular structures is an open question. Here we investigate whether diffusive motion of centrin granules would allow them to interact with the mother centriole. The role of centrin granules in centriole duplication remains unclear, but some proposed functions of these granules, for example, in providing pre-assembled centriole subunits, or in acting as unstable 'pre-centrioles' that need to be captured by the mother centriole (La Terra et al 2005 J. Cell Biol. 168 713-22), require the centrin foci to reach the mother. To test whether diffusive motion could permit such interactions on the necessary time scale, we measured the motion of centrin-containing foci in living human U2OS cells. We found that these centrin foci display apparently diffusive, undirected motion. Using the apparent diffusion constant obtained from these measurements, we calculated the time scale required for diffusive capture by the mother centriole and found that it would greatly exceed the time available in the cell cycle. We conclude that mechanisms invoking centrin foci capture by the mother, whether as a pre-centriole or as a source of components to support later assembly, would require a form of directed motility of centrin foci that has not yet been observed.
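
    The time-scale argument above can be reproduced as a back-of-the-envelope calculation using the standard 3D diffusion relation t ≈ L²/(6D). The numerical values below are purely illustrative; they are not the diffusion constant or distances measured in the paper.

```python
# Illustrative diffusion-capture time estimate (hypothetical values).
D = 1e-4   # um^2/s, hypothetical apparent diffusion constant
L = 5.0    # um, hypothetical distance to the mother centriole

t_seconds = L ** 2 / (6 * D)   # mean time to diffuse a distance L in 3D
t_hours = t_seconds / 3600
print(round(t_hours, 1))
```

    With these illustrative values the capture time comes out to roughly half a day, showing how a small diffusion constant can push the capture time well past the duration of a cell-cycle phase.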

  6. Trajectory of coronary motion and its significance in robotic motion cancellation.

    PubMed

    Cattin, Philippe; Dave, Hitendu; Grünenfelder, Jürg; Szekely, Gabor; Turina, Marko; Zünd, Gregor

    2004-05-01

    To characterize the remaining coronary artery motion of beating pig hearts after stabilization with an 'Octopus' using an optical remote analysis technique. Three pigs (40, 60 and 65 kg) underwent full sternotomy after receiving general anesthesia. An 8-bit high-speed black-and-white video camera (50 frames/s) coupled with a laser sensor (60 microm resolution) was used to capture heart wall motion in all three dimensions. Dopamine infusion was used to deliberately modulate cardiac contractility. Synchronized ECG, blood pressure, airway pressure and video data of the region around the first branching point of the left anterior descending (LAD) coronary artery after Octopus stabilization were captured for stretches of 8 s each. Several sequences of the same region were captured over a period of several minutes. Computerized off-line analysis allowed us to perform minute characterization of the heart wall motion. The movement of the points of interest on the LAD ranged from 0.22 to 0.81 mm in the lateral plane (x/y-axis) and 0.5-2.6 mm out of the plane (z-axis). Fast excursions (>50 microm/s in the lateral plane) occurred corresponding to the QRS complex and the T wave, while slow excursion phases (<50 microm/s in the lateral plane) were observed during the P wave and the ST segment. The trajectories of the points of interest during consecutive cardiac cycles, as well as during cardiac cycles minutes apart, remained comparable (the differences were negligible), provided the hemodynamics remained stable. Inotrope-induced changes in cardiac contractility influenced not only the maximum excursion, but also the shape of the trajectory. Normal positive-pressure ventilation displacing the heart in the thoracic cage was evident from the displacement of the reference point of the trajectory. The movement of the coronary artery after stabilization appears to be still significant. Minute characterization of the trajectory of motion could provide the substrate for achieving motion cancellation.
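
    The fast/slow excursion classification above (thresholding lateral speed at 50 microm/s) can be sketched as follows; the synthetic trajectory stands in for the 50 frames/s camera data and is not the study's measurement.

```python
import numpy as np

fps = 50.0                                                 # camera frame rate
rng = np.random.default_rng(4)
xy = np.cumsum(rng.normal(0, 2.0, size=(400, 2)), axis=0)  # lateral position, um (synthetic)

# Per-frame lateral speed in um/s, then threshold into fast/slow excursion phases.
speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps
fast = speed > 50.0
print(f"{fast.mean():.0%} of frames are fast excursions")
```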

  7. [Facial nerve neurinomas].

    PubMed

    Sokołowski, Jacek; Bartoszewicz, Robert; Morawski, Krzysztof; Jamróz, Barbara; Niemczyk, Kazimierz

    2013-01-01

    The main purpose of this study was an evaluation of the diagnostics, surgical technique, and treatment results of facial nerve neurinomas, and their comparison with the literature. Seven cases of patients (2005-2011) with facial nerve schwannomas in the Department of Otolaryngology, Medical University of Warsaw, were included in a retrospective analysis. All patients were assessed with a history of the disease, physical examination, hearing tests, computed tomography and/or magnetic resonance imaging, and electronystagmography. Cases were monitored for potential complications and recurrences. Neurinomas of the facial nerve occurred in the vertical segment (n=2), the facial nerve geniculum (n=1) and the internal auditory canal (n=4). The symptoms observed in patients were analyzed: facial nerve paresis (n=3), hearing loss (n=2), dizziness (n=1). Magnetic resonance imaging and computed tomography confirmed the presence of the tumor and allowed assessment of its staging. The schwannomas of the facial nerve were surgically removed using the middle fossa approach (n=5) or by antromastoidectomy (n=2). Anatomical continuity of the facial nerve was achieved in 3 cases. Twelve months after surgery, facial nerve paresis was rated at level II-III° HB. There was no recurrence of the tumor on radiological observation. Facial nerve neurinoma is a rare tumor. Current surgical techniques allow, in most cases, radical removal of the lesion and reconstruction of facial nerve function. The rate of recurrence is low. A tumor of the facial nerve should be considered in the differential diagnosis of facial nerve paresis. Copyright © 2013 Polish Otorhinolaryngology - Head and Neck Surgery Society. Published by Elsevier Urban & Partner Sp. z.o.o. All rights reserved.

  8. Motion cues that make an impression☆

    PubMed Central

    Koppensteiner, Markus

    2013-01-01

    The current study presents a methodology for analyzing first impressions on the basis of minimal motion information. To test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (i.e., observers who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on each speaker's forehead, one shoulder, and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior can be done on the basis of a small set of landmarks that appear to capture important parts of the relevant nonverbal information. PMID:24223432
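
    Summary motion features of the kind described above (overall activity, velocity variability, and variation in motion direction) can be computed from landmark trajectories roughly as follows. The feature definitions here are plausible stand-ins, not the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic trajectories: frames x landmarks x (x, y), standing in for the
# forehead/shoulder/hand landmarks.
traj = np.cumsum(rng.normal(0, 1.0, size=(300, 4, 2)), axis=0)

step = np.diff(traj, axis=0)                     # per-frame displacement
speed = np.linalg.norm(step, axis=2)             # (frames-1) x landmarks

activity = float(speed.sum())                    # overall amount of movement
velocity_var = float(speed.std())                # variability of velocity
angles = np.arctan2(step[..., 1], step[..., 0])
direction_var = float(np.var(np.diff(angles, axis=0)))  # variation in motion direction
print(activity > 0, velocity_var > 0, direction_var > 0)
```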

  9. Modulation of Alpha Oscillations in the Human EEG with Facial Preference

    PubMed Central

    Kang, Jae-Hwan; Kim, Su Jin; Cho, Yang Seok; Kim, Sung-Phil

    2015-01-01

    Facial preference that results from the processing of facial information plays an important role in social interactions as well as the selection of a mate, friend, candidate, or favorite actor. However, it still remains elusive which brain regions are implicated in the neural mechanisms underlying facial preference, and how neural activities in these regions are modulated during the formation of facial preference. In the present study, we investigated the modulation of electroencephalography (EEG) oscillatory power with facial preference. For reliable assessment of facial preference, we designed a series of passive viewing and active choice tasks. In the former task, twenty-four face stimuli were passively viewed by participants multiple times in random order. In the latter task, the same stimuli were then evaluated by participants for their facial preference judgments. In both tasks, significant differences between the preferred and non-preferred face groups were found in alpha band power (8–13 Hz) but not in other frequency bands. The preferred faces generated greater decreases in alpha power. During the passive viewing task, significant differences in alpha power between the preferred and non-preferred face groups were observed at the left frontal regions in the early (0.15–0.4 s) period of the 1-s presentation. By contrast, during the active choice task, when participants consecutively watched the first and second face for 1 s each and then selected the preferred one, an alpha power difference was found in the late (0.65–0.8 s) period over the whole brain during the first face presentation and over the posterior regions during the second face presentation. These results demonstrate that the modulation of alpha activity by facial preference is a top-down process, which requires additional cognitive resources to facilitate information processing of the preferred faces, which capture more visual attention than the non-preferred faces. PMID:26394328
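
    Alpha-band (8-13 Hz) power of the sort compared above can be estimated from a single EEG channel with a simple periodogram; the simulated signal and parameters below are illustrative, not the study's recording setup.

```python
import numpy as np

fs = 250.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(6)
# Simulated channel: a 10 Hz alpha rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(eeg)) ** 2 / t.size          # periodogram
alpha_mask = (freqs >= 8) & (freqs <= 13)

alpha_density = psd[alpha_mask].mean()                # mean power per alpha bin
other_density = psd[~alpha_mask].mean()               # mean power elsewhere
print(alpha_density > other_density)
```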

  10. Mirror neuron activation of musicians and non-musicians in response to motion captured piano performances.

    PubMed

    Hou, Jiancheng; Rajmohan, Ravi; Fang, Dan; Kashfi, Karl; Al-Khalil, Kareem; Yang, James; Westney, William; Grund, Cynthia M; O'Boyle, Michael W

    2017-07-01

    Mirror neurons (MNs) activate when performing an action and when an observer witnesses the same action performed by another individual. Functional magnetic resonance imaging (fMRI) and presentation of motion captured piano performances were used to identify differences in MN activation for musicians/non-musicians when viewing piano pieces played in a "Correct" mode (i.e., emphasis on technical correctness) or an "Enjoyment" mode (i.e., simply told to "enjoy" playing the piece). Results showed greater MN activation in a variety of brain regions for musicians, with these differences more pronounced in the "Enjoyment" mode. Our findings suggest that activation of MNs is not only initiated by the imagined action of an observed movement, but such activation is modulated by the level of musical expertise and knowledge of associated motor movements that the observer brings to the viewing situation. Enhanced MN activation in musicians may stem from imagining themselves actually playing the observed piece. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Non-invasive health status detection system using Gabor filters based on facial block texture features.

    PubMed

    Shu, Ting; Zhang, Bob

    2015-04-01

    Blood tests allow doctors to check for certain diseases and conditions. However, using a syringe to extract the blood can be deemed invasive and slightly painful, and its analysis is time consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines via a Library for Support Vector Machines (with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illness) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93%, a sensitivity of 94%, and a specificity of 92%, using a combination of the Gabor filters and facial blocks.
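
    The per-block texture value described above can be sketched by correlating a facial block with a single Gabor kernel and summarizing the response magnitude. The real system uses a full filter bank over several orientations and frequencies followed by KNN/SVM classification; the kernel parameters and summary statistic below are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    # Real-valued Gabor: Gaussian envelope times an oriented cosine carrier.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

rng = np.random.default_rng(7)
block = rng.random((64, 64))                 # one facial block (grey levels, synthetic)
k = gabor_kernel()

# Valid-mode filter response via sliding windows (cross-correlation).
windows = sliding_window_view(block, k.shape)
response = np.einsum('ijkl,kl->ij', windows, k)
texture_value = float(np.abs(response).mean())   # one scalar texture feature
print(texture_value >= 0)
```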

  12. Outcome of facial physiotherapy in patients with prolonged idiopathic facial palsy.

    PubMed

    Watson, G J; Glover, S; Allen, S; Irving, R M

    2015-04-01

    This study investigated whether patients who remain symptomatic more than a year following idiopathic facial paralysis gain benefit from tailored facial physiotherapy. A two-year retrospective review was conducted of all symptomatic patients. Data collected included: age, gender, duration of symptoms, Sunnybrook facial grading system scores pre-treatment and at last visit, and duration of treatment. The study comprised 22 patients (with a mean age of 50.5 years (range, 22-75 years)) who had been symptomatic for more than a year following idiopathic facial paralysis. The mean duration of symptoms was 45 months (range, 12-240 months). The mean duration of follow up was 10.4 months (range, 2-36 months). Prior to treatment, the mean Sunnybrook facial grading system score was 59 (standard deviation = 3.5); this had increased to 83 (standard deviation = 2.7) at the last visit, with an average improvement in score of 23 (standard deviation = 2.9). This increase was significant (p < 0.001). Tailored facial therapy can improve facial grading scores in patients who remain symptomatic for prolonged periods.

  13. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  14. Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.

    PubMed

    Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen

    2017-11-23

    Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, each corresponding to one component motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. With the decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.
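
    One plausible reading of the decoupling step is a linear calibration between component motions and the coupled sensor readings, fit by least squares and inverted at run time. This linear model, and all the numbers below, are assumptions for illustration, not necessarily the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical ground-truth mixing: each of 5 sensors responds to a mixture
# of the 5 component motions (diagonal-boosted so it is well conditioned).
true_mix = rng.random((5, 5)) + np.eye(5)

# Calibration: known joint angles (deg) and the noisy sensor readings they produce.
angles_cal = rng.uniform(-60, 60, size=(100, 5))
readings_cal = angles_cal @ true_mix.T + rng.normal(0, 0.1, size=(100, 5))
M_hat, *_ = np.linalg.lstsq(angles_cal, readings_cal, rcond=None)

# Decoupling: recover component motions from new coupled readings.
angles_new = rng.uniform(-60, 60, size=(1, 5))
readings_new = angles_new @ true_mix.T
angles_est = readings_new @ np.linalg.inv(M_hat)

err = float(np.abs(angles_est - angles_new).max())
print(err < 5.0)   # within the 5-degree precision figure quoted above
```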

  15. Multi-channel orbicularis oculi stimulation to restore eye-blink function in facial paralysis.

    PubMed

    Somia, N N; Zonnevijlle, E D; Stremel, R W; Maldonado, C; Gossman, M D; Barker, J H

    2001-01-01

    Facial paralysis due to facial nerve injury results in the loss of function of the muscles of the hemiface. The most serious complication in extreme cases is the loss of vision. In this study, we compared the effectiveness of single- and multiple-channel electrical stimulation in restoring a complete and cosmetically acceptable eye blink. We established bilateral orbicularis oculi muscle (OOM) paralysis in eight dogs; the OOM of one side was directly stimulated using single-channel electrical stimulation and the opposite side was stimulated using multi-channel electrical stimulation. The changes in the palpebral fissure and complete palpebral closure were measured. The difference in current intensities between the multi-channel and single-channel stimulation groups was significant, and only multi-channel stimulation produced complete eyelid closure. The latest electronic stimulation circuitry with high-quality implantable electrodes will make it possible to precisely regulate OOM contractions and thus generate complete and cosmetically acceptable eye-blink motion in patients with facial paralysis. Copyright 2001 Wiley-Liss, Inc.

  16. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

    in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS).

  17. Three-dimensional gender differences in facial form of children in the North East of England.

    PubMed

    Bugaighis, Iman; Mattick, Clare R; Tiddeman, Bernard; Hobson, Ross

    2013-06-01

    The aim of this prospective cross-sectional morphometric study was to explore three-dimensional (3D) facial shape and form (shape plus size) variation within and between 8- and 12-year-old Caucasian children; 39 males age-matched with 41 females. The 3D images were captured using a stereophotogrammetric system, and facial form was recorded by digitizing 39 anthropometric landmarks for each scan. The x, y, z coordinates of each landmark were extracted and used to calculate linear and angular measurements. 3D landmark asymmetry was quantified using Generalized Procrustes Analysis (GPA), and an average face was constructed for each gender. The average faces were superimposed and the differences were visualized and quantified. Shape variations were explored using GPA and Principal Component Analysis. Analysis of covariance and Pearson correlation coefficients were used to explore gender differences and to determine any correlation between facial measurements and height or weight. Multivariate analysis was used to ascertain differences in facial measurements and 3D landmark asymmetry. There were no differences in height or weight between genders. There was a significant positive correlation between facial measurements and height and weight, and there were statistically significant differences in linear facial width measurements between genders. These differences were related to the larger size of males rather than to differences in shape. There were no age- or gender-linked significant differences in 3D landmark asymmetry. Shape analysis confirmed similarities between males and females in facial shape and form in 8- to 12-year-old children. Any differences found were related to differences in facial size rather than shape.
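
    The GPA superimposition underlying the shape analysis above is built from pairwise Procrustes fits. A single such fit (remove translation and scale, then find the optimal rotation via SVD, as in the Kabsch method) can be sketched on synthetic 2D landmarks as follows; real facial data would be 3D and iterated over all subjects.

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.random((39, 2))                            # 39 landmarks, as in the study
angle = 0.3
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
B = (A @ R.T) * 1.5 + np.array([2.0, -1.0])        # rotated, scaled, shifted copy

def normalize(X):
    X = X - X.mean(axis=0)                         # remove translation
    return X / np.linalg.norm(X)                   # remove scale (unit centroid size)

A0, B0 = normalize(A), normalize(B)
U, _, Vt = np.linalg.svd(B0.T @ A0)                # optimal rotation aligning B0 to A0
R_hat = U @ Vt
residual = float(np.linalg.norm(B0 @ R_hat - A0))  # Procrustes distance after fitting
print(residual < 1e-6)
```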

  18. Facial trauma.

    PubMed

    Peeters, N; Lemkens, P; Leach, R; Gemels, B; Schepers, S; Lemmens, W

    Patients with facial trauma must be assessed in a systematic way so as to avoid missing any injury. Severe and disfiguring facial injuries can be distracting. However, clinicians must first focus on the basics of trauma care, following the Advanced Trauma Life Support (ATLS) system of care. Maxillofacial trauma occurs in a significant number of severely injured patients. Life- and sight-threatening injuries must be excluded during the primary and secondary surveys. Special attention must be paid to sight-threatening injuries in stabilized patients through early referral to an appropriate specialist or the early initiation of emergency care treatment. The gold standard for the radiographic evaluation of facial injuries is computed tomography (CT) imaging. Nasal fractures are the most frequent isolated facial fractures. Isolated nasal fractures are principally diagnosed through history and clinical examination. Closed reduction is the most frequently performed treatment for isolated nasal fractures, with a fractured nasal septum as a predictor of failure. Ear, nose and throat surgeons, maxillofacial surgeons and ophthalmologists must all develop an adequate treatment plan for patients with complex maxillofacial trauma.

  19. Three-dimensional comparison of facial morphology in white populations in Budapest, Hungary, and Houston, Texas.

    PubMed

    Gor, Troy; Kau, Chung How; English, Jeryl D; Lee, Robert P; Borbely, Peter

    2010-03-01

The aim of this study was to assess the use of 3-dimensional facial averages in determining facial morphologic differences in 2 white population groups. Three-dimensional images were obtained in a reproducible and controlled environment from a commercially available stereo-photogrammetric camera capture system. The 3dMDface system (3dMD, Atlanta, Ga) photographed 200 subjects from 2 population groups (Budapest, Hungary, and Houston, Tex); each group included 50 men and 50 women, aged 18 to 30 years. Each face was obtained as a facial mesh and orientated along a triangulated axis. All faces were overlaid, one on top of the other, and a complex mathematical algorithm was used until an average composite face of 1 man and 1 woman was obtained for each subgroup (Hungarian men, Hungarian women, Texas men, and Texas women). These average facial composites were superimposed (men and women) based on a previously validated superimposition method, and the facial differences were quantified. Distinct facial differences were observed between the population groups. These differences could be seen in the nasal, malar, lips, and lower facial regions. In general, the mean facial differences were 0.55 ± 0.60 mm between the Hungarian and Texas women, and 0.44 ± 0.42 mm between the Hungarian and Texas men. The ranges of differences were -2.02 to 3.77 and -2.05 to 1.94 mm for the female and male pairings, respectively. Three-dimensional facial averages representing the facial soft-tissue morphology of adults can be used to assess diagnostic and treatment regimens for patients by population. Each population is different with respect to their soft-tissue structures, and traditional soft-tissue normative data (eg, white norms) should be altered and used for specific groups. American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  20. A video-based system for hand-driven stop-motion animation.

    PubMed

    Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue

    2013-01-01

    Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.

  1. The effect of width of facial canal in patients with idiopathic peripheral facial paralysis on the development of paralysis.

    PubMed

    Eksi, Guldem; Akbay, Ercan; Bayarogullari, Hanifi; Cevik, Cengiz; Yengil, Erhan; Ozler, Gul Soylu

    2015-09-01

The aim of this prospective study is to investigate whether possible stenosis due to anatomic variations of the labyrinthine segment (LS), tympanic segment (TS) and mastoid segment (MS) of the facial canal in the temporal bone is a predisposing factor in the development of paralysis. Twenty-two patients with idiopathic peripheral facial paralysis (IPFP) were included in the study. Multi-slice computed tomography (MSCT) with 64 detectors was used for temporal bone imaging of the patients. Reconstruction images in axial, coronal and sagittal planes were created on workstation computers from the captured images. The diameters and lengths of the LS, TS and MS of the facial canal were measured. The mean values of LD, ND and SL of the LS were 1.31 ± 0.39, 0.91 ± 0.27 and 4.17 ± 0.48 in the patient group and 1.26 ± 0.29, 0.95 ± 0.21 and 4.60 ± 1.36 in the control group, respectively. The mean values of LD, ND and SL of the TS were 1.11 ± 0.22, 0.90 ± 0.14 and 12.63 ± 1.47 in the patient group and 1.17 ± 0.23, 0.85 ± 0.24 and 12.10 ± 1.79 in the control group, respectively. The mean values of LD, ND and SL of the MS were 1.80 ± 0.30, 1.44 ± 0.29 and 14.3 ± 1.90 in the patient group and 1.74 ± 0.38, 1.40 ± 0.29 and 14.15 ± 2.16 in the control group, respectively. The measurements of the parameters of all three segments were similar in the patient and control groups. Thus, similar results were obtained for patients and controls in this study investigating the effect of facial canal stenosis on the development of IPFP.

  2. Three-dimensional photography for the evaluation of facial profiles in obstructive sleep apnoea.

    PubMed

    Lin, Shih-Wei; Sutherland, Kate; Liao, Yu-Fang; Cistulli, Peter A; Chuang, Li-Pang; Chou, Yu-Ting; Chang, Chih-Hao; Lee, Chung-Shu; Li, Li-Fu; Chen, Ning-Hung

    2018-06-01

    Craniofacial structure is an important determinant of obstructive sleep apnoea (OSA) syndrome risk. Three-dimensional stereo-photogrammetry (3dMD) is a novel technique which allows quantification of the craniofacial profile. This study compares the facial images of OSA patients captured by 3dMD to three-dimensional computed tomography (3-D CT) and two-dimensional (2-D) digital photogrammetry. Measurements were correlated with indices of OSA severity. Thirty-eight patients diagnosed with OSA were included, and digital photogrammetry, 3dMD and 3-D CT were performed. Distances, areas, angles and volumes from the images captured by three methods were analysed. Almost all measurements captured by 3dMD showed strong agreement with 3-D CT measurements. Results from 2-D digital photogrammetry showed poor agreement with 3-D CT. Mandibular width, neck perimeter size and maxillary volume measurements correlated well with the severity of OSA using all three imaging methods. Mandibular length, facial width, binocular width, neck width, cranial base triangle area, cranial base area 1 and middle cranial fossa volume correlated well with OSA severity using 3dMD and 3-D CT, but not with 2-D digital photogrammetry. 3dMD provided accurate craniofacial measurements of OSA patients, which were highly concordant with those obtained by CT, while avoiding the radiation associated with CT. © 2018 Asian Pacific Society of Respirology.

  3. Hypoglossal-facial nerve reconstruction using a Y-tube-conduit reduces aberrant synkinetic movements of the orbicularis oculi and vibrissal muscles in rats.

    PubMed

    Kaya, Yasemin; Ozsoy, Umut; Turhan, Murat; Angelov, Doychin N; Sarikcioglu, Levent

    2014-01-01

The facial nerve is the most frequently damaged nerve in head and neck trauma. Patients undergoing facial nerve reconstruction often complain about disturbing abnormal synkinetic movements of the facial muscles (mass movements, synkinesis), which are thought to result from misguided collateral branching of regenerating motor axons and reinnervation of inappropriate muscles. Here, we examined whether use of an aorta Y-tube conduit during reconstructive surgery after facial nerve injury reduces synkinesis of the orbicularis oculi (blink reflex) and vibrissal (whisking) musculature. The abdominal aorta plus its bifurcation was harvested (N = 12) for Y-tube conduits. Animal groups comprised intact animals (Group 1), those receiving hypoglossal-facial nerve end-to-end coaptation alone (HFA; Group 2), and those receiving hypoglossal-facial nerve reconstruction using a Y-tube (HFA-Y-tube, Group 3). Videotape motion analysis at 4 months showed that the HFA-Y-tube group had reduced synkinesis of eyelid and whisker movements compared with HFA alone.

  4. Hypoglossal-Facial Nerve Reconstruction Using a Y-Tube-Conduit Reduces Aberrant Synkinetic Movements of the Orbicularis Oculi and Vibrissal Muscles in Rats

    PubMed Central

    Kaya, Yasemin; Ozsoy, Umut; Turhan, Murat; Angelov, Doychin N.; Sarikcioglu, Levent

    2014-01-01

The facial nerve is the most frequently damaged nerve in head and neck trauma. Patients undergoing facial nerve reconstruction often complain about disturbing abnormal synkinetic movements of the facial muscles (mass movements, synkinesis), which are thought to result from misguided collateral branching of regenerating motor axons and reinnervation of inappropriate muscles. Here, we examined whether use of an aorta Y-tube conduit during reconstructive surgery after facial nerve injury reduces synkinesis of the orbicularis oculi (blink reflex) and vibrissal (whisking) musculature. The abdominal aorta plus its bifurcation was harvested (N = 12) for Y-tube conduits. Animal groups comprised intact animals (Group 1), those receiving hypoglossal-facial nerve end-to-end coaptation alone (HFA; Group 2), and those receiving hypoglossal-facial nerve reconstruction using a Y-tube (HFA-Y-tube, Group 3). Videotape motion analysis at 4 months showed that the HFA-Y-tube group had reduced synkinesis of eyelid and whisker movements compared with HFA alone. PMID:25574468

  5. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    PubMed Central

    Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide

    2017-01-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
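The semi-Markov segmentation idea behind GP-HSMM can be illustrated with a much-simplified sketch: the paper's Gaussian-process emissions and forward filtering-backward sampling are replaced here by a per-segment Gaussian likelihood and a deterministic Viterbi-style dynamic program over segment lengths. All parameters and names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def segment_cost(y):
    # Negative log-likelihood of a segment under its own Gaussian fit;
    # a cheap stand-in for a Gaussian-process emission model.
    var = y.var() + 1e-2  # variance floor keeps tiny segments from overfitting
    return 0.5 * len(y) * (np.log(2 * np.pi * var) + 1.0)

def segment(y, max_len=40, penalty=3.0):
    # Dynamic program over segment lengths: best[t] is the minimal cost of
    # explaining y[:t]; each candidate segment pays its fit cost plus a
    # fixed per-segment penalty.
    n = len(y)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        for length in range(1, min(max_len, t) + 1):
            c = best[t - length] + segment_cost(y[t - length:t]) + penalty
            if c < best[t]:
                best[t], back[t] = c, length
    bounds, t = [], n
    while t > 0:            # backtrack the chosen segment boundaries
        bounds.append(t)
        t -= back[t]
    return sorted(bounds)

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 0.1, 30), rng.normal(2, 0.1, 30)])
print(segment(y))  # recovers a boundary at the true change point, 30
```

The real GP-HSMM additionally infers segment classes and samples segmentations rather than taking the single best one, but the length-indexed recursion is the shared core.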

  6. Intact imitation of emotional facial actions in autism spectrum conditions.

    PubMed

    Press, Clare; Richardson, Daniel; Bird, Geoffrey

    2010-09-01

    It has been proposed that there is a core impairment in autism spectrum conditions (ASC) to the mirror neuron system (MNS): If observed actions cannot be mapped onto the motor commands required for performance, higher order sociocognitive functions that involve understanding another person's perspective, such as theory of mind, may be impaired. However, evidence of MNS impairment in ASC is mixed. The present study used an 'automatic imitation' paradigm to assess MNS functioning in adults with ASC and matched controls, when observing emotional facial actions. Participants performed a pre-specified angry or surprised facial action in response to observed angry or surprised facial actions, and the speed of their action was measured with motion tracking equipment. Both the ASC and control groups demonstrated automatic imitation of the facial actions, such that responding was faster when they acted with the same emotional expression that they had observed. There was no difference between the two groups in the magnitude of the effect. These findings suggest that previous apparent demonstrations of impairments to the MNS in ASC may be driven by a lack of visual attention to the stimuli or motor sequencing impairments, and therefore that there is, in fact, no MNS impairment in ASC. We discuss these findings with reference to the literature on MNS functioning and imitation in ASC, as well as theories of the role of the MNS in sociocognitive functioning in typical development. Copyright 2010 Elsevier Ltd. All rights reserved.

  7. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    PubMed

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in slow conditions. Findings may give new perspectives for understanding and intervention for verbal and emotional perceptive and communicative impairments in autistic populations.

  8. Perceived shifts of flashed stimuli by visible and invisible object motion.

    PubMed

    Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke

    2003-01-01

    Perceived positions of flashed stimuli can be altered by motion signals in the visual field-position capture (Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.

  9. Association Among Facial Paralysis, Depression, and Quality of Life in Facial Plastic Surgery Patients

    PubMed Central

    Nellis, Jason C.; Ishii, Masaru; Byrne, Patrick J.; Boahene, Kofi D. O.; Dey, Jacob K.; Ishii, Lisa E.

    2017-01-01

IMPORTANCE Though anecdotally linked, few studies have investigated the impact of facial paralysis on depression and quality of life (QOL). OBJECTIVE To measure the association between depression, QOL, and facial paralysis in patients seeking treatment at a facial plastic surgery clinic. DESIGN, SETTING, PARTICIPANTS Data were prospectively collected for patients with all-cause facial paralysis and control patients initially presenting to a facial plastic surgery clinic from 2013 to 2015. The control group included a heterogeneous patient population presenting to the facial plastic surgery clinic for evaluation. Patients who had prior facial reanimation surgery or missing demographic and psychometric data were excluded from analysis. MAIN OUTCOMES AND MEASURES Demographics, facial paralysis etiology, facial paralysis severity (graded on the House-Brackmann scale), Beck depression inventory, and QOL scores in both groups were examined. Potential confounders, including self-reported attractiveness and mood, were collected and analyzed. Self-reported scores were measured using a 0 to 100 visual analog scale. RESULTS A total of 263 patients (mean age, 48.8 years; 66.9% female) were analyzed. There were 175 control patients and 88 patients with facial paralysis. Sex distributions were not significantly different between the facial paralysis and control groups. Patients with facial paralysis had significantly higher depression, lower self-reported attractiveness, lower mood, and lower QOL scores. Overall, 37 patients with facial paralysis (42.1%) screened positive for depression, with the greatest likelihood in patients with House-Brackmann grade 3 or greater (odds ratio, 10.8; 95% CI, 5.13–22.75) compared with 13 control patients (8.1%) (P < .001). In multivariate regression, facial paralysis and female sex were significantly associated with higher depression scores (constant, 2.08 [95% CI, 0.77–3.39]; facial paralysis effect, 5.98 [95% CI, 4.38–7

  10. Alert Response to Motion Onset in the Retina

    PubMed Central

    Chen, Eric Y.; Marre, Olivier; Fisher, Clark; Schwartz, Greg; Levy, Joshua; da Silveira, Rava Azeredo

    2013-01-01

    Previous studies have shown that motion onset is very effective at capturing attention and is more salient than smooth motion. Here, we find that this salience ranking is present already in the firing rate of retinal ganglion cells. By stimulating the retina with a bar that appears, stays still, and then starts moving, we demonstrate that a subset of salamander retinal ganglion cells, fast OFF cells, responds significantly more strongly to motion onset than to smooth motion. We refer to this phenomenon as an alert response to motion onset. We develop a computational model that predicts the time-varying firing rate of ganglion cells responding to the appearance, onset, and smooth motion of a bar. This model, termed the adaptive cascade model, consists of a ganglion cell that receives input from a layer of bipolar cells, represented by individual rectified subunits. Additionally, both the bipolar and ganglion cells have separate contrast gain control mechanisms. This model captured the responses to our different motion stimuli over a wide range of contrasts, speeds, and locations. The alert response to motion onset, together with its computational model, introduces a new mechanism of sophisticated motion processing that occurs early in the visual system. PMID:23283327
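The core structure of the adaptive cascade model described above, rectified bipolar subunits pooled by a ganglion cell, can be sketched as below. The two contrast-gain-control stages from the abstract are deliberately omitted, and all shapes, names, and values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def ganglion_rate(stimulus, weights, threshold=0.0):
    # Rectified-subunit cascade: each bipolar subunit linearly filters the
    # stimulus, is half-wave rectified, and the ganglion cell pools the
    # rectified outputs through its own output nonlinearity.
    subunit_drive = stimulus @ weights            # linear stage, one column per subunit
    rectified = np.maximum(subunit_drive, 0.0)    # bipolar rectification
    pooled = rectified.sum(axis=-1)               # ganglion pooling
    return np.maximum(pooled - threshold, 0.0)    # output rectification

rng = np.random.default_rng(2)
stim = rng.normal(size=(5, 8))    # 5 time steps of an 8-pixel stimulus
W = rng.normal(size=(8, 4))       # receptive fields of 4 bipolar subunits
rates = ganglion_rate(stim, W)
print(rates.shape)  # (5,)
```

Because each subunit rectifies before pooling, the model responds to localized stimulus onsets that a single linear filter would average away, which is the property the full model exploits to favor motion onset.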

  11. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  12. Facial fractures in children.

    PubMed

    Boyette, Jennings R

    2014-10-01

    Facial trauma in children differs from adults. The growing facial skeleton presents several challenges to the reconstructive surgeon. A thorough understanding of the patterns of facial growth and development is needed to form an individualized treatment strategy. A proper diagnosis must be made and treatment options weighed against the risk of causing further harm to facial development. This article focuses on the management of facial fractures in children. Discussed are common fracture patterns based on the development of the facial structure, initial management, diagnostic strategies, new concepts and old controversies regarding radiologic examinations, conservative versus operative intervention, risks of growth impairment, and resorbable fixation. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  14. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  15. Use of 3-dimensional surface acquisition to study facial morphology in 5 populations.

    PubMed

    Kau, Chung How; Richmond, Stephen; Zhurov, Alexei; Ovsenik, Maja; Tawfik, Wael; Borbely, Peter; English, Jeryl D

    2010-04-01

The aim of this study was to assess the use of 3-dimensional facial averages for determining morphologic differences from various population groups. We recruited 473 subjects from 5 populations. Three-dimensional images of the subjects were obtained in a reproducible and controlled environment with a commercially available stereo-photogrammetric camera capture system. Minolta VI-900 (Konica Minolta, Tokyo, Japan) and 3dMDface (3dMD LLC, Atlanta, Ga) systems were used. Each image was obtained as a facial mesh and orientated along a triangulated axis. All faces were overlaid, one on top of the other, and a complex mathematical algorithm was performed until average composite faces of 1 man and 1 woman were achieved for each subgroup. These average facial composites were superimposed based on a previously validated superimposition method, and the facial differences were quantified. Distinct facial differences were observed among the groups. The linear differences between surface shells ranged from 0.37 to 1.00 mm for the male groups. The linear differences ranged from 0.28 to 0.87 mm for the women. The color histograms showed that the similarities in facial shells between the subgroups by sex ranged from 26.70% to 70.39% for men and 36.09% to 79.83% for women. The average linear distance from the signed color histograms for the male subgroups ranged from -6.30 to 4.44 mm. The female subgroups ranged from -6.32 to 4.25 mm. Average faces can be efficiently and effectively created from a sample of 3-dimensional faces. Average faces can be used to compare differences in facial morphologies for various populations and sexes. Facial morphologic differences were greatest when totally different ethnic variations were compared. Facial morphologic similarities were present in comparable groups, but there were large variations in concentrated areas of the face. Copyright 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  16. Chronic, burning facial pain following cosmetic facial surgery.

    PubMed

    Eisenberg, E; Yaari, A; Har-Shai, Y

    1996-01-01

Chronic, burning facial pain as a result of cosmetic facial surgery has rarely been reported. During 1994, two female patients presented to our Pain Relief Clinic with chronic facial pain that had developed following aesthetic facial surgery. One patient underwent bilateral transpalpebral surgery for removal of intraorbital fat for the correction of exophthalmos, and the other had classical face and anterior hairline forehead lifts. Pain in both patients was similar in that it was bilateral, symmetric, burning in quality, and aggravated by external stimuli, mainly light touch. It was resistant to multiple analgesic medications, and was associated with significant depression and disability. Diagnostic local (lidocaine) and systemic (lidocaine and phentolamine) nerve blocks failed to provide relief. Psychological evaluation revealed that the two patients had clear psychosocial factors that seemed to have further compounded their pain complaints. Tricyclic antidepressants (and biofeedback training in one patient) were modestly effective and produced only partial pain relief.

  17. Facial morphologies of an adult Egyptian population and an adult Houstonian white population compared using 3D imaging.

    PubMed

    Seager, Dennis Craig; Kau, Chung How; English, Jeryl D; Tawfik, Wael; Bussa, Harry I; Ahmed, Abou El Yazeed M

    2009-09-01

To compare the facial morphologies of an adult Egyptian population with those of a Houstonian white population. The three-dimensional (3D) images were acquired via a commercially available stereophotogrammetric camera capture system. The 3dMDface System photographed 186 subjects from two population groups (Egypt and Houston). All of the participants from both population groups were between 18 and 30 years of age and had no apparent facial anomalies. All facial images were overlaid and superimposed, and a complex mathematical algorithm was performed to generate a composite facial average (one male and one female) for each subgroup (EGY-M: Egyptian male subjects; EGY-F: Egyptian female subjects; HOU-M: Houstonian male subjects; and HOU-F: Houstonian female subjects). The computer-generated facial averages were superimposed based on a previously validated superimposition method, and the facial differences were evaluated and quantified. Distinct facial differences were evident between the subgroups evaluated, involving various regions of the face including the slant of the forehead and the nasal, malar, and labial regions. Overall, the mean facial differences between the Egyptian and Houstonian female subjects were 1.33 ± 0.93 mm, while the differences between the Egyptian and Houstonian male subjects were 2.32 ± 2.23 mm. The ranges of differences for the female and male population pairings were 14.34 mm and 13.71 mm, respectively. The average adult Egyptian and white Houstonian faces possess distinct differences. Different populations and ethnicities have different facial features and averages.

  18. Facial attractiveness.

    PubMed

    Little, Anthony C

    2014-11-01

Facial attractiveness has important social consequences. Despite a widespread belief that beauty cannot be defined, in fact, there is considerable agreement across individuals and cultures on what is found attractive. By considering that attraction and mate choice are critical components of evolutionary selection, we can better understand the importance of beauty. There are many traits that are linked to facial attractiveness in humans and each may in some way impart benefits to individuals who act on their preferences. If a trait is reliably associated with some benefit to the perceiver, then we would expect individuals in a population to find that trait attractive. Such an approach has highlighted face traits such as age, health, symmetry, and averageness, which are proposed to be associated with benefits and so associated with facial attractiveness. This view may postulate that some traits will be universally attractive; however, this does not preclude variation. Indeed, it would be surprising if there existed a template of a perfect face that was not affected by experience, environment, context, or the specific needs of an individual. Research on facial attractiveness has documented how various face traits are associated with attractiveness and various factors that impact on an individual's judgments of facial attractiveness. Overall, facial attractiveness is complex, both in the number of traits that determine attraction and in the large number of factors that can alter attraction to particular faces. A fuller understanding of facial beauty will come with an understanding of how these various factors interact with each other. WIREs Cogn Sci 2014, 5:621-634. doi: 10.1002/wcs.1316. © 2014 John Wiley & Sons, Ltd.

  19. Comparison of 3D Joint Angles Measured With the Kinect 2.0 Skeletal Tracker Versus a Marker-Based Motion Capture System.

    PubMed

    Guess, Trent M; Razu, Swithin; Jahandar, Amirhossein; Skubic, Marjorie; Huo, Zhiyu

    2017-04-01

The Microsoft Kinect is becoming a widely used tool for inexpensive, portable measurement of human motion, with the potential to support clinical assessments of performance and function. In this study, the relative osteokinematic Cardan joint angles of the hip and knee were calculated using the Kinect 2.0 skeletal tracker. The pelvis segments of the default skeletal model were reoriented and 3-dimensional joint angles were compared with a marker-based system during a drop vertical jump and a hip abduction motion. Good agreement between the Kinect and the marker-based system was found for knee (correlation coefficient = 0.96, cycle RMS error = 11°, peak flexion difference = 3°) and hip (correlation coefficient = 0.97, cycle RMS error = 12°, peak flexion difference = 12°) flexion during the landing phase of the drop vertical jump, and for hip abduction/adduction (correlation coefficient = 0.99, cycle RMS error = 7°, peak flexion difference = 8°) during isolated hip motion. Nonsagittal hip and knee angles did not correlate well for the drop vertical jump. When limited to activities in the optimal capture volume and with simple modifications to the skeletal model, the Kinect 2.0 skeletal tracker can provide limited 3-dimensional kinematic information of the lower limbs that may be useful for functional movement assessment.
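The two agreement statistics this record reports, Pearson correlation and cycle RMS error, are straightforward to compute for any pair of joint-angle time series. The sketch below uses a synthetic flexion cycle as an assumption for illustration; it is not the study's data or code.

```python
import numpy as np

def agreement(a, b):
    # Pearson correlation coefficient and RMS error between two
    # joint-angle time series sampled over the same movement cycle.
    r = np.corrcoef(a, b)[0, 1]
    rmse = np.sqrt(np.mean((a - b) ** 2))
    return r, rmse

t = np.linspace(0.0, 1.0, 101)
mocap = 60.0 * np.sin(np.pi * t)   # synthetic knee-flexion cycle (degrees)
kinect = mocap + 5.0               # same waveform with a constant 5-degree offset
r, rmse = agreement(kinect, mocap)
print(round(r, 3), round(rmse, 1))  # → 1.0 5.0
```

A constant offset leaves the correlation at 1 while the RMS error equals the offset, which is why studies of this kind report both statistics rather than either alone.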

  20. Facial anatomy.

    PubMed

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures on the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.

  1. Management of synkinesis and asymmetry in facial nerve palsy: a review article.

    PubMed

    Pourmomeny, Abbas Ali; Asadi, Sahar

    2014-10-01

    The important sequelae of facial nerve palsy are synkinesis, asymmetry, hypertonicity and contracture; all of which have psychosocial effects on patients. Synkinesis, due to aberrant regeneration, causes involuntary movements during a voluntary movement. Previous studies have advocated treatment using physiotherapy modalities alone or with exercise therapy, but no consensus exists on the optimal approach. Thus, this review summarizes clinical controlled studies in the management of synkinesis and asymmetry in facial nerve palsy. Case-controlled clinical studies of patients at the acute stage of injury were selected for this review article. Data were obtained from English-language databases from 1980 until mid-2013. Among 124 articles initially captured, six randomized controlled trials involving 269 patients were identified with appropriate inclusion criteria. The results of all these studies emphasized the benefit of exercise therapy. Four studies considered electromyogram (EMG) biofeedback to be effective through neuromuscular re-education. Synkinesis and asymmetry of facial muscles could be treated with educational exercise therapy. EMG biofeedback is a suitable tool for this exercise therapy.

  2. Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness.

    PubMed

    Ma, Fengling; Xu, Fen; Luo, Xianming

    2016-01-01

    This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces. Next, they issued facial attractiveness judgments. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but agreement with adults and within each age group increased with age. Additionally, the agreement levels of judgments made by girls were higher than those made by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments increased with age, and this relationship was closer for girls than for boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness.

  3. Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness

    PubMed Central

    Ma, Fengling; Xu, Fen; Luo, Xianming

    2016-01-01

    This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces. Next, they issued facial attractiveness judgments. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but agreement with adults and within each age group increased with age. Additionally, the agreement levels of judgments made by girls were higher than those made by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments increased with age, and this relationship was closer for girls than for boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness. PMID:27148111

  4. The expression of a motoneuron-specific serine protease, motopsin (PRSS12), after facial nerve axotomy in mice.

    PubMed

    Numajiri, Toshiaki; Mitsui, Shinichi; Hisa, Yasuo; Ishida, Toshihiro; Nishino, Kenichi; Yamaguchi, Nozomi

    2006-01-01

    Motopsin (PRSS12) is a mosaic serine protease that is preferentially expressed in motor neurons. To study the relationship between motopsin and motoneuron function, we investigated the expression of motopsin mRNA in the facial nerve nuclei after facial nerve axotomy at the anterior margin of the parotid gland in mice. Neuronal function was monitored by assessing vibrissal motion over 3 months. Vibrissal movement on the injured side was absent until day 14 post-operation, and then recovered between days 21 and 35. Motopsin expression decreased at day 14, but markedly recovered by day 21. In contrast, expression of growth-associated protein-43 (GAP-43) was induced at day 3. These results suggest that the recovery of motopsin expression is correlated with the recovery of facial motor neuronal function.

  5. Capture of visual direction in dynamic vergence is reduced with flashed monocular lines.

    PubMed

    Jaschinski, Wolfgang; Jainta, Stephanie; Schürer, Michael

    2006-08-01

    The visual direction of a continuously presented monocular object is captured by the visual direction of a closely adjacent binocular object, which questions the reliability of nonius lines for measuring vergence. This was shown by Erkelens, C. J., and van Ee, R. (1997a,b) [Capture of the visual direction: An unexpected phenomenon in binocular vision. Vision Research, 37, 1193-1196; Capture of the visual direction of monocular objects by adjacent binocular objects. Vision Research, 37, 1735-1745], who stimulated dynamic vergence by a counter-phase oscillation of two square random-dot patterns (one to each eye) that contained a smaller central dot-free gap (of variable width), with a vertical monocular line oscillating in phase with the random-dot pattern of the respective eye; subjects adjusted the motion-amplitude of the line until it was perceived as (nearly) stationary. With a continuously presented monocular line, we replicated capture of visual direction provided the dot-free gap was narrow: the adjusted motion-amplitude of the line was similar to the motion-amplitude of the random-dot pattern, although large vergence errors occurred. However, when we flashed the line for 67 ms at the moments of maximal and minimal disparity of the vergence stimulus, we found that the adjusted motion-amplitude of the line was smaller; thus, the capture effect appeared to be reduced with flashed nonius lines. Accordingly, we found that the objectively measured vergence gain was significantly correlated (r=0.8) with the motion-amplitude of the flashed monocular line when the separation between the line and the fusion contour was at least 32 min arc. In conclusion, if one wishes to estimate the dynamic vergence response with psychophysical methods, effects of capture of visual direction can be reduced by using flashed nonius lines.

  6. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches, by up to 16.96 and 10.13%, for the facial attributes of gender and facial hair, respectively.

  7. Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study

    PubMed Central

    Shen, Hui; Chau, Desmond K. P.; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen

    2016-01-01

    Brain responses to facial attractiveness induced by facial proportions are investigated by using functional magnetic resonance imaging (fMRI), in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated, yet realistic face images, which had varying facial proportions, but the same neutral facial expression, baldhead and skin tone, as stimuli. Statistical parametric mapping with parametric modulation was used to explore the brain regions with the response modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for only the male subjects. Furthermore, canonical correlation analysis was used to learn the most relevant facial ratios that were best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predictive ARs. Overall, this study provided, for the first time, direct neurophysiologic evidence of the effects of facial ratios on facial attractiveness and suggested that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions. PMID:27779211

  8. Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study.

    PubMed

    Shen, Hui; Chau, Desmond K P; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen

    2016-10-25

    Brain responses to facial attractiveness induced by facial proportions are investigated by using functional magnetic resonance imaging (fMRI), in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated, yet realistic face images, which had varying facial proportions, but the same neutral facial expression, baldhead and skin tone, as stimuli. Statistical parametric mapping with parametric modulation was used to explore the brain regions with the response modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for only the male subjects. Furthermore, canonical correlation analysis was used to learn the most relevant facial ratios that were best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predictive ARs. Overall, this study provided, for the first time, direct neurophysiologic evidence of the effects of facial ratios on facial attractiveness and suggested that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions.

  9. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    PubMed

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation.
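
    RTMocap itself is MATLAB code; as a language-neutral illustration of its event-detection idea (triggering an external reinforcer when a marker's speed crosses a threshold), here is a minimal Python sketch operating on hypothetical marker data:

```python
import numpy as np

def detect_onset(positions, dt, speed_thresh=0.05):
    """Return the index of the first frame whose instantaneous speed
    exceeds speed_thresh (m/s), or None if it never does.
    positions: (n_frames, 3) array of one marker's 3-D coordinates."""
    pos = np.asarray(positions, dtype=float)
    # frame-to-frame displacement magnitude divided by the sample period
    speed = np.linalg.norm(np.diff(pos, axis=0), axis=1) / dt
    above = np.flatnonzero(speed > speed_thresh)
    return int(above[0]) + 1 if above.size else None

# 200 Hz recording: marker still for 10 frames, then moving steadily
dt = 1 / 200
still = np.zeros((10, 3))
moving = np.cumsum(np.full((10, 3), 0.2 * dt), axis=0)
track = np.vstack([still, still[-1] + moving])
print(detect_onset(track, dt))  # -> 10 (first moving frame)
```

    In a real-time setting the same test would run per incoming frame, with the returned index used as a trigger for lights, sounds, or odors as in the toolbox.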

  10. [Peripheral facial nerve lesion induced long-term dendritic retraction in pyramidal cortico-facial neurons].

    PubMed

    Urrego, Diana; Múnera, Alejandro; Troncoso, Julieta

    2011-01-01

    Little evidence is available concerning the morphological modifications of motor cortex neurons associated with peripheral nerve injuries, and the consequences of those injuries on post-lesion functional recovery. Dendritic branching of cortico-facial neurons was characterized with respect to the effects of irreversible facial nerve injury. Twenty-four adult male rats were distributed into four groups: sham (no lesion surgery), and dendritic assessment at 1, 3 and 5 weeks post surgery. Eighteen lesion animals underwent surgical transection of the mandibular and buccal branches of the facial nerve. Dendritic branching was examined in slices of the contralateral primary motor cortex stained with the Golgi-Cox technique. Layer V pyramidal (cortico-facial) neurons from sham and injured animals were reconstructed and their dendritic branching was compared using Sholl analysis. Animals with facial nerve lesions displayed persistent vibrissal paralysis throughout the five week observation period. Compared with control animal neurons, cortico-facial pyramidal neurons of surgically injured animals displayed statistically significant shrinkage of their dendritic branches. This shrinkage persisted for at least five weeks after facial nerve injury. Irreversible facial motoneuron axonal damage induced persistent dendritic arborization shrinkage in contralateral cortico-facial neurons. This morphological reorganization may be the physiological basis of functional sequelae observed in peripheral facial palsy patients.

  11. Cholinergic modulation of stimulus-driven attentional capture.

    PubMed

    Boucart, Muriel; Michael, George Andrew; Bubicco, Giovanna; Ponchel, Amelie; Waucquier, Nawal; Deplanque, Dominique; Deguil, Julie; Bordet, Régis

    2015-04-15

    Distraction is one of the main problems encountered by people with degenerative diseases that are associated with reduced cortical cholinergic innervations. We examined the effects of donepezil, a cholinesterase inhibitor, on stimulus-driven attentional capture. Reflexive attention shifts to a distractor are usually elicited by abrupt peripheral changes. This bottom-up shift of attention to a salient item is thought to be the result of relatively inflexible hardwired mechanisms. Thirty young male participants were randomly allocated to one of two groups: placebo first/donepezil second session or the opposite. They were asked to locate a target appearing above and below fixation whilst a peripheral distractor moved abruptly (motion-jitter attentional capture condition) or not (baseline condition). A classical attentional capture effect was observed under placebo: moving distractors interfered with the task in slowing down response times as compared to the baseline condition with fixed distractors. Increased interference from moving distractors was found under donepezil. We suggest that attentional capture in our paradigm likely involved low level mechanisms such as automatic reflexive orienting. Peripheral motion-jitter elicited a rapid reflexive orienting response initiated by a cholinergic signal from the brainstem pedunculo-pontine nucleus that activates nicotinic receptors in the superior colliculus. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Repeated short presentations of morphed facial expressions change recognition and evaluation of facial expressions.

    PubMed

    Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori

    2013-11-01

    This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

  13. Effects of a small talking facial image on autonomic activity: the moderating influence of dispositional BIS and BAS sensitivities and emotions.

    PubMed

    Ravaja, Niklas

    2004-01-01

    We examined the moderating influence of dispositional behavioral inhibition system (BIS) and behavioral activation system (BAS) sensitivities, Negative Affect, and Positive Affect on the relationship between a small moving vs. static facial image and autonomic responses when viewing/listening to news messages read by a newscaster among 36 young adults. Autonomic parameters measured were respiratory sinus arrhythmia (RSA), low-frequency (LF) component of heart rate variability (HRV), electrodermal activity, and pulse transit time (PTT). The results showed that dispositional BAS sensitivity, particularly BAS Fun Seeking, and Negative Affect interacted with facial image motion in predicting autonomic nervous system activity. A moving facial image was related to lower RSA and LF component of HRV and shorter PTTs as compared to a static facial image among high BAS individuals. Even a small talking facial image may contribute to sustained attentional engagement among high BAS individuals, given that the BAS directs attention toward the positive cue and a moving social stimulus may act as a positive incentive for high BAS individuals.

  14. Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.

    PubMed

    Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P

    2009-07-01

    Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.

  15. Teasing Apart Complex Motions using VideoPoint

    NASA Astrophysics Data System (ADS)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.
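
    One way to "tease apart" such a motion, assuming the camera's in-plane translation and rotation are known (or estimated) per frame, is to undo the camera's rigid transform frame by frame. This is a minimal sketch of that idea, not VideoPoint's own method, with invented camera parameters:

```python
import math

def to_world(pt_frame, cam_pos, cam_angle):
    """Map an image-plane point into world coordinates, assuming the
    camera undergoes a known in-plane translation cam_pos and rotation
    cam_angle (radians) at this frame (a 2-D rigid transform)."""
    x, y = pt_frame
    c, s = math.cos(cam_angle), math.sin(cam_angle)
    # rotate by the camera angle, then translate by the camera position
    return (cam_pos[0] + c * x - s * y,
            cam_pos[1] + s * x + c * y)

# A point fixed in the world appears to move in frame coordinates as
# the camera pans; undoing the camera transform recovers a constant point.
world_pt = (2.0, 1.0)
for t in range(3):
    cam_pos, ang = (0.1 * t, 0.0), 0.05 * t
    # what the camera "sees": inverse transform of the world point
    dx, dy = world_pt[0] - cam_pos[0], world_pt[1] - cam_pos[1]
    c, s = math.cos(ang), math.sin(ang)
    seen = (c * dx + s * dy, -s * dx + c * dy)
    rec = to_world(seen, cam_pos, ang)
    print(round(rec[0], 6), round(rec[1], 6))  # -> 2.0 1.0 each frame
```

    In practice the camera's per-frame pose would itself be estimated from fixed background landmarks in the video.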

  16. Caricaturing facial expressions.

    PubMed

    Calder, A J; Rowland, D; Young, A W; Nimmo-Smith, I; Keane, J; Perrett, D I

    2000-08-14

    The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
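
    The caricaturing operation described (exaggerating a face's physical differences from a reference norm) can be sketched on landmark coordinates; the landmarks, norm, and exaggeration factor below are invented for illustration:

```python
import numpy as np

def caricature(face, norm, k):
    """Exaggerate a face's landmark coordinates by moving them away
    from a reference norm: norm + (1 + k) * (face - norm).
    k = 0 reproduces the face; k > 0 exaggerates; k < 0 anti-caricatures."""
    face, norm = np.asarray(face, float), np.asarray(norm, float)
    return norm + (1.0 + k) * (face - norm)

neutral = np.array([[0.0, 0.0], [1.0, 0.0]])   # two landmarks of a norm
fear    = np.array([[0.0, 0.2], [1.0, 0.1]])   # the expression displaces them
print(caricature(fear, neutral, 0.5))
# each displacement from the norm grows by 50%
```

    Changing `norm` to an average expression, or to a different emotion, reproduces the reference-norm manipulation of Experiments 3 and 4.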

  17. The KIT Motion-Language Dataset.

    PubMed

    Plappert, Matthias; Mandery, Christian; Asfour, Tamim

    2016-12-01

    Linking human motion and natural language is of great interest for the generation of semantic representations of human activities as well as for the generation of robot activities based on natural language input. However, although there have been years of research in this area, no standardized and openly available data set exists to support the development and evaluation of such systems. We, therefore, propose the Karlsruhe Institute of Technology (KIT) Motion-Language Dataset, which is large, open, and extensible. We aggregate data from multiple motion capture databases and include them in our data set using a unified representation that is independent of the capture system or marker set, making it easy to work with the data regardless of its origin. To obtain motion annotations in natural language, we apply a crowd-sourcing approach and a web-based tool that was specifically built for this purpose, the Motion Annotation Tool. We thoroughly document the annotation process itself and discuss gamification methods that we used to keep annotators motivated. We further propose a novel method, perplexity-based selection, which systematically selects motions for further annotation that are either under-represented in our data set or that have erroneous annotations. We show that our method mitigates the two aforementioned problems and ensures a systematic annotation process. We provide an in-depth analysis of the structure and contents of our resulting data set, which, as of October 10, 2016, contains 3911 motions with a total duration of 11.23 hours and 6278 annotations in natural language that contain 52,903 words. We believe this makes our data set an excellent choice that enables more transparent and comparable research in this important area.
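
    Perplexity-based selection can be illustrated with a deliberately simplified stand-in for the paper's language model: an add-one-smoothed unigram model scores each annotation, and the most surprising (highest-perplexity) annotations are flagged for further attention. The sentences below are invented:

```python
import math
from collections import Counter

def perplexity(sentence, unigram_counts, total, vocab_size):
    """Per-word perplexity under an add-one-smoothed unigram model."""
    words = sentence.lower().split()
    logp = sum(math.log((unigram_counts[w] + 1) / (total + vocab_size))
               for w in words)
    return math.exp(-logp / len(words))

annotations = [
    "a person walks forward",
    "a person walks forward slowly",
    "someone performs a cartwheel",   # rare wording -> high perplexity
]
counts = Counter(w for s in annotations for w in s.lower().split())
total, vocab = sum(counts.values()), len(counts)

# Rank annotations by perplexity; the most surprising one is a good
# candidate for re-annotation or for collecting more similar motions.
ranked = sorted(annotations,
                key=lambda s: perplexity(s, counts, total, vocab),
                reverse=True)
print(ranked[0])  # -> "someone performs a cartwheel"
```

    The dataset's actual selection procedure uses a stronger language model, but the ranking principle is the same.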

  18. Americans and Palestinians judge spontaneous facial expressions of emotion.

    PubMed

    Kayyal, Mary H; Russell, James A

    2013-10-01

    The claim that certain emotions are universally recognized from facial expressions is based primarily on the study of expressions that were posed. The current study was of spontaneous facial expressions shown by aborigines in Papua New Guinea (Ekman, 1980); 17 faces claimed to convey one (or, in the case of blends, two) basic emotions and five faces claimed to show other universal feelings. For each face, participants rated the degree to which each of the 12 predicted emotions or feelings was conveyed. The modal choice for English-speaking Americans (n = 60), English-speaking Palestinians (n = 60), and Arabic-speaking Palestinians (n = 44) was the predicted label for only 4, 5, and 4, respectively, of the 17 faces for basic emotions, and for only 2, 2, and 2, respectively, of the 5 faces for other feelings. Observers endorsed the predicted emotion or feeling moderately often (65%, 55%, and 44%), but also denied it moderately often (35%, 45%, and 56%). They also endorsed more than one (or, for blends, two) label(s) in each face-on average, 2.3, 2.3, and 1.5 of basic emotions and 2.6, 2.2, and 1.5 of other feelings. There were both similarities and differences across culture and language, but the emotional meaning of a facial expression is not well captured by the predicted label(s) or, indeed, by any single label.

  19. The adaptation of GDL motion recognition system to sport and rehabilitation techniques analysis.

    PubMed

    Hachaj, Tomasz; Ogiela, Marek R

    2016-06-01

    The main novelty of this paper is the adaptation of the Gesture Description Language (GDL) methodology to the analysis and classification of sport and rehabilitation data. We show that the Lua language can be successfully used to adapt the GDL classifier to those tasks. The newly applied scripting language allows easy extension and integration of the classifier with other software technologies and applications. The obtained execution speed allows the methodology to be used in real-time motion capture data processing, where the capture frequency ranges from 100 Hz to as much as 500 Hz depending on the number of features or classes to be calculated and recognized. The proposed methodology can therefore be used with high-end motion capture systems. We anticipate that this novel, efficient and effective method will greatly help both sport trainers and physiotherapists in their practice. The proposed approach can be directly applied to the kinematic analysis of motion capture data (evaluation of motion without regard to the forces that cause it). The ability to apply pattern recognition methods to GDL descriptions can be utilized in virtual reality environments and used for sport training or rehabilitation treatment.
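
    GDL expresses gestures as rules over key-frame joint configurations. As a rough illustration only (Python predicates stand in for GDL/Lua scripts, and the rule and joint names are invented):

```python
def classify_frame(joints, rules):
    """Return the names of all rules satisfied by one skeleton frame.
    joints: dict of joint name -> (x, y, z); rules: dict of rule
    name -> predicate over the joints dict."""
    return [name for name, pred in rules.items() if pred(joints)]

# Hypothetical GDL-style rules written as Python predicates:
rules = {
    "hands_above_head": lambda j: j["left_hand"][1] > j["head"][1]
                                  and j["right_hand"][1] > j["head"][1],
    "squat": lambda j: j["hips"][1] < 0.6 * j["standing_hip_height"][1],
}
frame = {
    "head": (0.0, 1.7, 0.0),
    "left_hand": (-0.3, 1.9, 0.1),
    "right_hand": (0.3, 1.8, 0.1),
    "hips": (0.0, 1.0, 0.0),
    "standing_hip_height": (0.0, 1.0, 0.0),
}
print(classify_frame(frame, rules))  # -> ['hands_above_head']
```

    Real GDL additionally sequences such key-frame rules over time to recognize whole gestures, which is what makes per-frame evaluation speed matter at 100-500 Hz.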

  20. Enhancing facial features by using clear facial features

    NASA Astrophysics Data System (ADS)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract the features of a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images contained 30 individuals equally divided among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to perform pre-processing on the images in order to align the features of the clear and blurred images. Features were extracted from a clear facial image, or from a template built from clear facial images, using the wavelet transform, and were imposed on the blurred image using the inverse wavelet transform. The results of this approach were not good, as the features did not all align together: in most cases the eyes were aligned but the nose or mouth was not. A second approach dealt with the features separately, but in some cases this produced a blocky effect on the features due to the absence of closely matching features. In general, the small available database was insufficient to achieve the intended results because of the limited number of individuals. Color information and feature similarity could be investigated further to achieve better results, and a larger database would improve the enhancement process through the availability of closer matches within each ethnicity.
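
    The wavelet-based feature transfer described can be sketched with a one-level 2-D Haar transform: keep the blurred image's approximation subband, substitute the clear image's detail subbands, and invert. This is a toy version under stated assumptions (random 8x8 arrays stand in for aligned face images):

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: approximation plus the three
    detail subbands (horizontal, vertical, diagonal)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4,    # approximation (low-low)
            (a - b + c - d) / 4,    # horizontal detail
            (a + b - c - d) / 4,    # vertical detail
            (a - b - c + d) / 4)    # diagonal detail

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2."""
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL - LH + HL - HH
    out[1::2, 0::2] = LL + LH - HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

rng = np.random.default_rng(0)
clear = rng.random((8, 8))
blurred = rng.random((8, 8))
# Keep the blurred image's coarse structure, impose the clear image's
# detail (feature) subbands, then invert the transform.
LLb, _, _, _ = haar2(blurred)
_, LHc, HLc, HHc = haar2(clear)
enhanced = ihaar2(LLb, LHc, HLc, HHc)
print(enhanced.shape)  # (8, 8)
```

    The project's alignment problems arise exactly here: if the two images' features are not registered, the substituted detail subbands land in the wrong places.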

  1. The interaction between embodiment and empathy in facial expression recognition

    PubMed Central

    Jospe, Karine; Flöel, Agnes; Lavidor, Michal

    2018-01-01

    Previous research has demonstrated that the Action-Observation Network (AON) is involved in both emotional-embodiment (empathy) and action-embodiment mechanisms. In this study, we hypothesized that interfering with the AON will impair action recognition and that this impairment will be modulated by empathy levels. In Experiment 1 (n = 90), participants were asked to recognize facial expressions while their facial motion was restricted. In Experiment 2 (n = 50), we interfered with the AON by applying transcranial Direct Current Stimulation to the motor cortex. In both experiments, we found that interfering with the AON impaired the performance of participants with high empathy levels; however, for the first time, we demonstrated that the interference enhanced the performance of participants with low empathy. This novel finding suggests that the embodiment module may be flexible, and that it can be enhanced in individuals with low empathy by simple manipulation of motor activation. PMID:29378022

  2. Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions.

    PubMed

    Yitzhak, Neta; Giladi, Nir; Gurevich, Tanya; Messinger, Daniel S; Prince, Emily B; Martin, Katherine; Aviezer, Hillel

    2017-12-01

    According to dominant theories of affect, humans innately and universally express a set of emotions using specific configurations of prototypical facial activity. Accordingly, thousands of studies have tested emotion recognition using sets of highly intense and stereotypical facial expressions, yet their incidence in real life is virtually unknown. In fact, a commonplace experience is that emotions are expressed in subtle and nonprototypical forms. Such facial expressions are at the focus of the current study. In Experiment 1, we present the development and validation of a novel stimulus set consisting of dynamic and subtle emotional facial displays conveyed without constraining expressers to using prototypical configurations. Although these subtle expressions were more challenging to recognize than prototypical dynamic expressions, they were still well recognized by human raters, and perhaps most importantly, they were rated as more ecological and naturalistic than the prototypical expressions. In Experiment 2, we examined the characteristics of subtle versus prototypical expressions by subjecting them to a software classifier, which used prototypical basic emotion criteria. Although the software was highly successful at classifying prototypical expressions, it performed very poorly at classifying the subtle expressions. Further validation was obtained from human expert face coders: Subtle stimuli did not contain many of the key facial movements present in prototypical expressions. Together, these findings suggest that emotions may be successfully conveyed to human viewers using subtle nonprototypical expressions. Although classic prototypical facial expressions are well recognized, they appear less naturalistic and may not capture the richness of everyday emotional communication. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Facial reanimation by muscle-nerve neurotization after facial nerve sacrifice. Case report.

    PubMed

    Taupin, A; Labbé, D; Babin, E; Fromager, G

    2016-12-01

    Recovering a certain degree of mimicry after sacrifice of the facial nerve is a clinically recognized finding. The authors report a case of hemifacial reanimation suggesting a phenomenon of muscle-to-nerve neurotization. A woman underwent a parotidectomy with sacrifice of the left facial nerve, indicated for a recurrent tumor in the gland. The distal branches of the facial nerve, isolated at the time of resection, were buried in the underlying masseter muscle. The patient recovered voluntary hemifacial motricity. Electromyographic analysis of the motor activity of the zygomaticus major before and after block of the masseter nerve showed a dependence between the mimic muscles and the masseter muscle. Several hypotheses have been advanced to explain the spontaneous reanimation of facial paralysis. This clinical case argues in favor of muscle-to-nerve neurotization from the masseter muscle to the distal branches of the facial nerve, and illustrates the quality of motricity that can be obtained with this procedure. The authors describe a simple technique of implanting the distal branches of the facial nerve into the masseter muscle during radical parotidectomy with facial nerve sacrifice, which yielded recovery of resting tone as well as quality voluntary mimicry. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  4. Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Fan, Rukun; Geng, Weidong

    We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music-to-motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music-to-motion matching quality rating function through learning the music-to-motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match with a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music-to-motion matching quality and visual smoothness of a resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results of several peer methods, using the motions captured from professional human dancers' performance as the gold standard. We also conducted several medium-scale user studies to explore how perceptually our dance motion synthesis method can outperform existing methods in synthesizing dance motions to match a piece of music. These user studies produced very positive results on our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.
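    The constraint-based dynamic programming step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the matching scores and smoothness costs are toy stand-ins for the learned rating function.

```python
# Hedged sketch of constraint-based dynamic programming for choosing one
# motion segment per music segment: maximize total matching quality minus
# transition (smoothness) cost. All scores here are illustrative stand-ins.
def synthesize(match_score, smooth_cost):
    """match_score[i][k]: quality of motion segment k for music segment i.
    smooth_cost[j][k]: visual cost of following segment j with segment k."""
    n_music, n_motion = len(match_score), len(match_score[0])
    best = [match_score[0][:]]   # best[i][k]: best total score ending in segment k
    back = []                    # backpointers for path recovery
    for i in range(1, n_music):
        row, brow = [], []
        for k in range(n_motion):
            j = max(range(n_motion),
                    key=lambda j: best[i - 1][j] - smooth_cost[j][k])
            row.append(best[i - 1][j] - smooth_cost[j][k] + match_score[i][k])
            brow.append(j)
        best.append(row)
        back.append(brow)
    k = max(range(n_motion), key=lambda k: best[-1][k])
    path = [k]
    for brow in reversed(back):  # walk the backpointers to the start
        k = brow[k]
        path.append(k)
    return path[::-1]
```

    The two nested maximizations make the cost of this sketch O(n_music * n_motion^2); the paper's GPU-based two-way evaluation parallelizes this step.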

  5. Toward an affordable and user-friendly visual motion capture system.

    PubMed

    Bonnet, V; Sylla, N; Cherubini, A; Gonzáles, A; Azevedo Coste, C; Fraisse, P; Venture, G

    2014-01-01

    The present study aims at designing and evaluating a low-cost, simple, and portable system for arm joint angle estimation during grasping-like motions. The system is based on a single RGB-D camera and three customized markers. The automatically detected and tracked marker positions were used as inputs to an offline inverse kinematic process based on biomechanical constraints to reduce noise effects and handle marker occlusion. The method was validated on 4 subjects performing different motions. The joint angles were estimated both with the proposed low-cost system and with a stereophotogrammetric system. Comparative analysis shows good accuracy, with a high correlation coefficient (r = 0.92) and a low average RMS error (3.8 deg).
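    Validation metrics like those reported above (Pearson correlation and average RMS error between estimated and reference joint-angle traces) can be computed as in this sketch; the sample traces are illustrative, not the study's data.

```python
import math

def pearson_r(a, b):
    """Pearson correlation between two equal-length joint-angle traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rms_error(a, b):
    """Root-mean-square difference between two traces (same units, e.g. deg)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Illustrative traces: estimated vs. reference elbow angles in degrees.
estimated = [10.0, 20.5, 29.0, 41.0]
reference = [11.0, 20.0, 30.0, 40.0]
r = pearson_r(estimated, reference)
err = rms_error(estimated, reference)
```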

  6. Changing perception: facial reanimation surgery improves attractiveness and decreases negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick J; Ishii, Lisa E

    2014-01-01

    Determine the effect of facial reanimation surgery on observer-graded attractiveness and negative facial perception of patients with facial paralysis. Randomized controlled experiment. Ninety observers viewed images of paralyzed faces, smiling and in repose, before and after reanimation surgery, as well as normal comparison faces. Observers rated the attractiveness of each face and characterized the paralyzed faces by rating severity, disfigured/bothersome, and importance to repair. Iterated factor analysis indicated these highly correlated variables measure a common domain, so they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score. Mixed effects linear regression determined the effect of facial reanimation surgery on attractiveness and DIBS score. Facial paralysis induces an attractiveness penalty of 2.51 on a 10-point scale for faces in repose and 3.38 for smiling faces. Mixed effects linear regression showed that reanimation surgery improved attractiveness for faces both in repose and smiling by 0.84 (95% confidence interval [CI]: 0.67, 1.01) and 1.24 (95% CI: 1.07, 1.42) respectively. Planned hypothesis tests confirmed statistically significant differences in attractiveness ratings between postoperative and normal faces, indicating attractiveness was not completely normalized. Regression analysis also showed that reanimation surgery decreased DIBS by 0.807 (95% CI: 0.704, 0.911) for faces in repose and 0.989 (95% CI: 0.886, 1.093), an entire standard deviation, for smiling faces. Facial reanimation surgery increases attractiveness and decreases negative facial perception of patients with facial paralysis. These data emphasize the need to optimize reanimation surgery to restore not only function, but also symmetry and cosmesis to improve facial perception and patient quality of life. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  7. Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications.

    PubMed

    Parks, Connie L; Monson, Keith L

    2017-04-01

    The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans can be objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61% of the cases a correct life photo match for a CT-derived facial image was the top-ranked image in the generated candidate lists, even when blind searching in excess of 100,000 images. In 31-91% of the cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. Results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.
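    The top-ranked and top-50 match rates quoted above are simple rank statistics. A hedged sketch of how such rates can be computed from a recognizer's ranked candidate lists (function and variable names are illustrative, not from the study):

```python
def match_rates(rank_lists, true_ids, k=50):
    """rank_lists[i]: gallery IDs for probe i, ordered best match first.
    true_ids[i]: probe i's actual identity.
    Returns (fraction with a top-1 match, fraction with a match in the top k)."""
    n = len(true_ids)
    top1 = sum(1 for ranks, t in zip(rank_lists, true_ids) if ranks[0] == t) / n
    topk = sum(1 for ranks, t in zip(rank_lists, true_ids) if t in ranks[:k]) / n
    return top1, topk
```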

  8. External facial features modify the representation of internal facial features in the fusiform face area.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2010-08-15

    Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  9. Satellite capture as a restricted 2 + 2 body problem

    NASA Astrophysics Data System (ADS)

    Kanaan, Wafaa; Farrelly, David; Lanchares, Víctor

    2018-04-01

    A restricted 2 + 2 body problem is proposed as a possible mechanism to explain the capture of small bodies by a planet. In particular, we consider two primaries revolving in a circular mutual orbit and two small bodies of equal mass, neither of which affects the motion of the primaries. If the small bodies are temporarily captured in the Hill sphere of the smaller primary, they may get close enough to each other to exchange energy in such a way that one of them becomes permanently captured. Numerical simulations show that capture is possible for both prograde and retrograde orbits.

  10. [Using infrared thermal asymmetry analysis for objective assessment of the lesion of facial nerve function].

    PubMed

    Liu, Xu-long; Hong, Wen-xue; Song, Jia-lin; Wu, Zhen-ying

    2012-03-01

    The skin temperature distribution of a healthy human body exhibits contralateral symmetry. Some lesions of facial nerve function are associated with an alteration of the thermal distribution of the human body. Since the dissipation of heat through the skin occurs for the most part in the form of infrared radiation, infrared thermography is the method of choice to capture the alteration of the infrared thermal distribution. This paper presents a new method of analysis of thermal asymmetry, named the effective thermal area ratio, which is the product of two variables. The first variable is the mean temperature difference between a specific facial region and its contralateral region. The second variable is a ratio equal to the area of the abnormal region divided by the total area. Using this new method, we performed a controlled trial to assess the facial nerve function of healthy subjects and of patients with Bell's palsy. The results show that the mean specificity and sensitivity of this method are 0.90 and 0.87 respectively, improved by 7% and 26% compared with conventional methods. The Spearman correlation coefficient between the effective thermal area ratio and the degree of facial nerve function averages 0.664. Hence, concerning the diagnosis and assessment of facial nerve function, infrared thermography is a powerful tool, and the effective thermal area ratio is an efficient clinical indicator.
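    As described, the effective thermal area ratio is the product of the mean temperature difference between a facial region and its contralateral mirror and the fraction of the region's area that is abnormal. A minimal sketch, assuming per-pixel temperatures and an illustrative abnormality threshold (the paper's actual threshold and region segmentation are not given here):

```python
def effective_thermal_area_ratio(region, mirror, threshold=0.5):
    """region, mirror: per-pixel skin temperatures (deg C) of a facial region
    and its contralateral counterpart. A pixel is counted as abnormal when its
    left-right difference exceeds `threshold` -- an assumed illustrative value."""
    diffs = [abs(a - b) for a, b in zip(region, mirror)]
    mean_diff = sum(diffs) / len(diffs)                   # first factor
    abnormal_fraction = sum(d > threshold for d in diffs) / len(diffs)  # second factor
    return mean_diff * abnormal_fraction
```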

  11. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    PubMed Central

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  12. A System for Delivering Mechanical Stimulation and Robot-Assisted Therapy to the Rat Whisker Pad during Facial Nerve Regeneration

    PubMed Central

    Heaton, James T.; Knox, Christopher; Malo, Juan; Kobler, James B.; Hadlock, Tessa A.

    2013-01-01

    Functional recovery is typically poor after facial nerve transection and surgical repair. In rats, whisking amplitude remains greatly diminished after facial nerve regeneration, but can recover more completely if the whiskers are periodically mechanically stimulated during recovery. Here we present a robotic “whisk assist” system for mechanically driving whisker movement after facial nerve injury. Movement patterns were either pre-programmed to reflect natural amplitudes and frequencies, or movements of the contralateral (healthy) side of the face were detected and used to control real-time mirror-like motion on the denervated side. In a pilot study, twenty rats were divided into nine groups and administered one of eight different whisk assist driving patterns (or control) for 5–20 minutes, five days per week, across eight weeks of recovery after unilateral facial nerve cut and suture repair. All rats tolerated the mechanical stimulation well. Seven of the eight treatment groups recovered average whisking amplitudes that exceeded controls, although small group sizes precluded statistical confirmation of group differences. The potential to substantially improve facial nerve recovery through mechanical stimulation has important clinical implications, and we have developed a system to control the pattern and dose of stimulation in the rat facial nerve model. PMID:23475376

  13. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    PubMed Central

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  15. [The application of facial liposuction and fat grafting in the remodeling of facial contour].

    PubMed

    Wen, Huicai; Ma, Li; Sui, Ynnpeng; Jian, Xueping

    2015-03-01

    To investigate the application of facial liposuction and fat grafting in the remodeling of the facial contour. From Nov. 2008 to Mar. 2014, 49 cases received facial liposuction and fat grafting to improve facial contours. Subcutaneous facial liposuction with the tumescent technique and chin fat grafting were performed in all cases; buccal fat pad excision was performed in 7 cases, masseter injection of botulinum toxin type A in 9 cases, temporal fat grafting in 25 cases, and forehead fat grafting in 15 cases. Marked improvement was achieved in all patients, with stable results during the follow-up period of 6-24 months. Complications such as asymmetry, unevenness, and sagging were retreated with acceptable results. The combined application of liposuction and fat grafting can effectively and easily improve the facial contour with low risk.

  16. Facial Orientation and Facial Shape in Extant Great Apes: A Geometric Morphometric Analysis of Covariation

    PubMed Central

    Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane

    2013-01-01

    The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squared on three-dimensional landmarks coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly-flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees. PMID:23441232

  17. Facial diplegia: a clinical dilemma.

    PubMed

    Chakrabarti, Debaprasad; Roy, Mukut; Bhattacharyya, Amrit K

    2013-06-01

    Bilateral facial paralysis is a rare clinical entity and presents a diagnostic challenge. Unlike its unilateral counterpart, facial diplegia is seldom secondary to Bell's palsy. Occurring in 0.3% to 2% of all facial palsies, it often indicates ominous medical conditions. Guillain-Barré syndrome needs to be considered as a differential in all cases of facial diplegia, where timely treatment would be rewarding. Here a case of bilateral facial palsy due to Guillain-Barré syndrome with atypical presentation is reported.

  18. Visual attention during the evaluation of facial attractiveness is influenced by facial angles and smile.

    PubMed

    Kim, Seol Hee; Hwang, Soonshin; Hong, Yeon-Ju; Kim, Jae-Jin; Kim, Kyung-Ho; Chung, Chooryung J

    2018-05-01

    To examine the changes in visual attention influenced by facial angles and smile during the evaluation of facial attractiveness. Thirty-three young adults were asked to rate overall facial attractiveness (tasks 1 and 3) or to select the most attractive face (task 2) by looking at multiple panel stimuli consisting of 0°, 15°, 30°, 45°, 60°, and 90° rotated facial photos, with or without a smile, for three model face photos and a self-photo (self-face). Eye gaze and fixation time (FT) were monitored by an eye-tracking device during the performance. Participants were asked to fill out a subjective questionnaire asking, "Which face was primarily looked at when evaluating facial attractiveness?" When rating overall facial attractiveness (task 1) for model faces, FT was highest for the 0° face and lowest for the 90° face regardless of the smile (P < .01). However, when the most attractive face was to be selected (task 2), the FT of the 0° face decreased, while it significantly increased for the 45° face (P < .001). When facial attractiveness was evaluated with the simplified panels combining facial angles and smile (task 3), the FT of the 0° smiling face was the highest (P < .01). While most participants reported that they looked mainly at the 0° smiling face when rating facial attractiveness, visual attention was broadly distributed across facial angles. Laterally rotated faces and the presence of a smile strongly influence visual attention during the evaluation of facial esthetics.

  19. Hypoglossal-facial nerve "side"-to-side neurorrhaphy for facial paralysis resulting from closed temporal bone fractures.

    PubMed

    Su, Diya; Li, Dezhi; Wang, Shiwei; Qiao, Hui; Li, Ping; Wang, Binbin; Wan, Hong; Schumacher, Michael; Liu, Song

    2018-06-06

    Closed temporal bone fractures due to cranial trauma often result in facial nerve injury, frequently inducing incomplete facial paralysis. Conventional hypoglossal-facial nerve end-to-end neurorrhaphy may not be suitable for these injuries because sacrifice of the lesioned facial nerve for neurorrhaphy destroys the remnant axons and/or potential spontaneous innervation. We modified the classical method by hypoglossal-facial nerve "side"-to-side neurorrhaphy using an interpositional predegenerated nerve graft to treat these injuries. Five patients who experienced facial paralysis resulting from closed temporal bone fractures due to cranial trauma were treated with the "side"-to-side neurorrhaphy. An additional 4 patients did not receive the neurorrhaphy and served as controls. Before treatment, all patients had suffered House-Brackmann (H-B) grade V or VI facial paralysis for a mean of 5 months. During the 12- to 30-month follow-up period, no further detectable deficits were observed, but an improvement in facial nerve function was evidenced over time in the 5 neurorrhaphy-treated patients. At the end of follow-up, the improved facial function reached H-B grade II in 3, grade III in 1, and grade IV in 1 of the 5 patients, consistent with the electrophysiological examinations. In the control group, two patients showed slight spontaneous innervation with facial function improved from H-B grade VI to V, and the other patients remained unchanged at H-B grade V or VI. We concluded that hypoglossal-facial nerve "side"-to-side neurorrhaphy can preserve the injured facial nerve and is suitable for treating significant incomplete facial paralysis resulting from closed temporal bone fractures, providing an evident beneficial effect. Moreover, this treatment may be performed earlier after the onset of facial paralysis in order to reduce the unfavorable changes to the injured facial nerve and atrophy of its target muscles due to long-term denervation and allow axonal

  20. The face is not an empty canvas: how facial expressions interact with facial appearance.

    PubMed

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  1. Capturing User Reading Behaviors for Personalized Document Summarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Jiang, Hao; Lau, Francis

    2011-01-01

    We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations, captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm produces personalized document summaries superior to those of all the other methods, in that the summaries generated by our algorithm better satisfy a user's personal preferences.

  2. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  3. L-Eye to Me: The Combined Role of Need for Cognition and Facial Trustworthiness in Mimetic Desires

    ERIC Educational Resources Information Center

    Treinen, Evelyne; Corneille, Olivier; Luypaert, Gaylord

    2012-01-01

    Recent studies showed that stimuli are evaluated more favourably when they are perceived to capture others' attention, an effect coined "mimetic desire". The aim of the present research was to examine the combined role of Need for Cognition and target's facial trustworthiness in this effect. Participants saw movie excerpts of trustworthy and…

  4. Large Intratemporal Facial Nerve Schwannoma without Facial Palsy: Surgical Strategy of Tumor Removal and Functional Reconstruction.

    PubMed

    Yetiser, Sertac

    2018-06-08

    Three patients with large intratemporal facial schwannomas underwent tumor removal and facial nerve reconstruction with hypoglossal anastomosis. The surgical strategy for each case was tailored to the location of the mass and its extension along the facial nerve. The aim was to provide data on the different clinical aspects of facial nerve schwannoma, the appropriate planning for management, and the predictive outcomes of facial function. Three patients with facial schwannomas (two men and one woman, ages 45, 36, and 52 years, respectively) who presented to the clinic between 2009 and 2015 were reviewed. They all had hearing loss but normal facial function. All patients were operated on with radical tumor removal via mastoidectomy and subtotal petrosectomy and simultaneous cranial nerve (CN) 7-CN 12 anastomosis. Multiple segments of the facial nerve were involved, ranging in size from 3 to 7 cm. In the follow-up period of 9 to 24 months, there was no tumor recurrence. Facial function was scored House-Brackmann grades II and III, but two patients are still in the process of functional recovery. Conservative treatment with sparing of the nerve is considered in patients with small tumors. Excision of a large facial schwannoma with immediate hypoglossal nerve grafting as a primary procedure can provide satisfactory facial nerve function. One of the disadvantages of performing anastomosis is that there is not enough neural tissue just before the bifurcation of the main stump to provide neural suturing without tension, because middle fossa extension of the facial schwannoma frequently involves the main facial nerve at the stylomastoid foramen. Reanimation should proceed with extensive backward mobilization of the hypoglossal nerve. Georg Thieme Verlag KG Stuttgart · New York.

  5. Computer Recognition of Facial Profiles

    DTIC Science & Technology

    1974-08-01

A system for the recognition of human faces from facial profiles… The work of Goldstein, Harmon, and Lesk [8] indicates, however, that for facial recognition, a ten class…

  6. Two-character motion analysis and synthesis.

    PubMed

    Kwon, Taesoo; Cho, Young-Sang; Park, Sang Il; Shin, Sung Yong

    2008-01-01

    In this paper, we deal with the problem of synthesizing novel motions of standing-up martial arts such as Kickboxing, Karate, and Taekwondo performed by a pair of human-like characters while reflecting their interactions. Adopting an example-based paradigm, we address three non-trivial issues embedded in this problem: motion modeling, interaction modeling, and motion synthesis. For the first issue, we present a semi-automatic motion labeling scheme based on force-based motion segmentation and learning-based action classification. We also construct a pair of motion transition graphs each of which represents an individual motion stream. For the second issue, we propose a scheme for capturing the interactions between two players. A dynamic Bayesian network is adopted to build a motion transition model on top of the coupled motion transition graph that is constructed from an example motion stream. For the last issue, we provide a scheme for synthesizing a novel sequence of coupled motions, guided by the motion transition model. Although the focus of the present work is on martial arts, we believe that the framework of the proposed approach can be conveyed to other two-player motions as well.
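The motion transition graphs described in this abstract can be illustrated with a minimal sketch: motion clips become graph nodes, and synthesis is a weighted random walk over the transitions. All clip names and probabilities below are invented for illustration; the paper learns its transition model from segmented, classified example motion streams via a dynamic Bayesian network.

```python
import random

# Hypothetical motion transition graph: nodes are labeled motion clips,
# edges carry transition probabilities (invented for illustration).
TRANSITIONS = {
    "idle": [("jab", 0.5), ("kick", 0.3), ("idle", 0.2)],
    "jab":  [("idle", 0.6), ("kick", 0.4)],
    "kick": [("idle", 1.0)],
}

def synthesize(start, length, rng=random.Random(0)):
    """Random walk over the graph, returning a sequence of clip labels."""
    seq = [start]
    for _ in range(length - 1):
        clips, weights = zip(*TRANSITIONS[seq[-1]])
        seq.append(rng.choices(clips, weights=weights)[0])
    return seq

print(synthesize("idle", 6))
```

In the paper's coupled setting, the transition chosen for one character is further conditioned on the other character's current state; the sketch above shows only a single character's graph.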

  7. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  8. How components of facial width to height ratio differently contribute to the perception of social traits

    PubMed Central

    Lio, Guillaume; Gomez, Alice; Sirigu, Angela

    2017-01-01

Facial width-to-height ratio (fWHR) is a morphological cue that correlates with sexual dimorphism and social traits. Currently, it is unclear how the vertical and horizontal components of fWHR distinctly capture faces' social information. Using a new methodology, we orthogonally manipulated the upper facial height and the bizygomatic width to test their selective effects on the formation of impressions. Subjects (n = 90) saw pairs of faces and had to select the face that better expressed each of several social traits (trustworthiness, aggressiveness, and femininity). We further investigated how sex and the fWHR components interact in the formation of these judgements. Across experiments, changes along the vertical component predicted participants' ratings better than changes along the horizontal component. Faces with smaller height were perceived as less trustworthy, less feminine, and more aggressive. By dissociating fWHR and testing the contribution of its components independently, we obtained a powerful and discriminative measure of how facial morphology guides social judgements. PMID:28235081
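The two fWHR components the study manipulates can be computed directly from facial landmarks. A minimal sketch follows; the landmark names and pixel coordinates are invented for illustration, and in practice they would come from a landmark detector.

```python
import numpy as np

# Hypothetical landmark coordinates (x, y) in pixels.
landmarks = {
    "left_zygion":   np.array([40.0, 120.0]),
    "right_zygion":  np.array([180.0, 120.0]),
    "brow_midpoint": np.array([110.0, 80.0]),
    "upper_lip":     np.array([110.0, 175.0]),
}

def fwhr_components(lm):
    """Bizygomatic width, upper facial height, and their ratio (fWHR)."""
    width = np.linalg.norm(lm["right_zygion"] - lm["left_zygion"])
    height = np.linalg.norm(lm["upper_lip"] - lm["brow_midpoint"])
    return width, height, width / height

w, h, ratio = fwhr_components(landmarks)
print(f"width={w:.1f}px height={h:.1f}px fWHR={ratio:.2f}")
```

Manipulating the two measurements independently, as the study does, amounts to rescaling the horizontal and vertical landmark spans separately before recomputing the ratio.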

  9. Video repairing under variable illumination using cyclic motions.

    PubMed

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  10. Outcome of a graduated minimally invasive facial reanimation in patients with facial paralysis.

    PubMed

    Holtmann, Laura C; Eckstein, Anja; Stähr, Kerstin; Xing, Minzhi; Lang, Stephan; Mattheis, Stefan

    2017-08-01

    Peripheral paralysis of the facial nerve is the most frequent of all cranial nerve disorders. Despite advances in facial surgery, the functional and aesthetic reconstruction of a paralyzed face remains a challenge. Graduated minimally invasive facial reanimation is based on a modular principle. According to the patients' needs, precondition, and expectations, the following modules can be performed: temporalis muscle transposition and facelift, nasal valve suspension, endoscopic brow lift, and eyelid reconstruction. Applying a concept of a graduated minimally invasive facial reanimation may help minimize surgical trauma and reduce morbidity. Twenty patients underwent a graduated minimally invasive facial reanimation. A retrospective chart review was performed with a follow-up examination between 1 and 8 months after surgery. The FACEgram software was used to calculate pre- and postoperative eyelid closure, the level of brows, nasal, and philtral symmetry as well as oral commissure position at rest and oral commissure excursion with smile. As a patient-oriented outcome parameter, the Glasgow Benefit Inventory questionnaire was applied. There was a statistically significant improvement in the postoperative score of eyelid closure, brow asymmetry, nasal asymmetry, philtral asymmetry as well as oral commissure symmetry at rest (p < 0.05). Smile evaluation revealed no significant change of oral commissure excursion. The mean Glasgow Benefit Inventory score indicated substantial improvement in patients' overall quality of life. If a primary facial nerve repair or microneurovascular tissue transfer cannot be applied, graduated minimally invasive facial reanimation is a promising option to restore facial function and symmetry at rest.

  11. Mime therapy improves facial symmetry in people with long-term facial nerve paresis: a randomised controlled trial.

    PubMed

    Beurskens, Carien H G; Heymans, Peter G

    2006-01-01

    What is the effect of mime therapy on facial symmetry and severity of paresis in people with facial nerve paresis? Randomised controlled trial. 50 people recruited from the Outpatient department of two metropolitan hospitals with facial nerve paresis for more than nine months. The experimental group received three months of mime therapy consisting of massage, relaxation, inhibition of synkinesis, and co-ordination and emotional expression exercises. The control group was placed on a waiting list. Assessments were made on admission to the trial and three months later by a measurer blinded to group allocation. Facial symmetry was measured using the Sunnybrook Facial Grading System. Severity of paresis was measured using the House-Brackmann Facial Grading System. After three months of mime therapy, the experimental group had improved their facial symmetry by 20.4 points (95% CI 10.4 to 30.4) on the Sunnybrook Facial Grading System compared with the control group. In addition, the experimental group had reduced the severity of their paresis by 0.6 grade (95% CI 0.1 to 1.1) on the House-Brackmann Facial Grading System compared with the control group. These effects were independent of age, sex, and duration of paresis. Mime therapy improves facial symmetry and reduces the severity of paresis in people with facial nerve paresis.

  12. Guide to Understanding Facial Palsy

    MedlinePlus

    ... to many different facial muscles. These muscles control facial expression. The coordinated activity of this nerve and these ... involves a weakness of the muscles responsible for facial expression and side-to-side eye movement. Moebius syndrome ...

  13. Emotional facial and vocal expressions during story retelling by children and adolescents with high-functioning autism.

    PubMed

    Grossman, Ruth B; Edelson, Lisa R; Tager-Flusberg, Helen

    2013-06-01

    People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Participants were 22 male children and adolescents with HFA and 18 typically developing (TD) controls (17 males, 1 female). The authors used a story retelling task to elicit emotionally laden narratives, which were analyzed through the use of acoustic measures and perceptual codes. Naïve listeners coded all productions for emotion type, degree of expressiveness, and awkwardness. The group with HFA was not significantly different in accuracy or expressiveness of facial productions, but was significantly more awkward than the TD group. Participants with HFA were significantly more expressive in their vocal productions, with a trend for greater awkwardness. Severity of social communication impairment, as captured by the Autism Diagnostic Observation Schedule (ADOS; Lord, Rutter, DiLavore, & Risi, 1999), was correlated with greater vocal and facial awkwardness. Facial and vocal expressions of participants with HFA were as recognizable as those of their TD peers but were qualitatively different, particularly when listeners coded samples with intact dynamic properties. These preliminary data show qualitative differences in nonverbal communication that may have significant negative impact on the social communication success of children and adolescents with HFA.

  14. Managing the Pediatric Facial Fracture

    PubMed Central

    Cole, Patrick; Kaufman, Yoav; Hollier, Larry H.

    2009-01-01

    Facial fracture management is often complex and demanding, particularly within the pediatric population. Although facial fractures in this group are uncommon relative to their incidence in adult counterparts, a thorough understanding of issues relevant to pediatric facial fracture management is critical to optimal long-term success. Here, we discuss several issues germane to pediatric facial fractures and review significant factors in their evaluation, diagnosis, and management. PMID:22110800

  15. Computational simulation of extravehicular activity dynamics during a satellite capture attempt.

    PubMed

    Schaffner, G; Newman, D J; Robinson, S K

    2000-01-01

    A more quantitative approach to the analysis of astronaut extravehicular activity (EVA) tasks is needed because of their increasing complexity, particularly in preparation for the on-orbit assembly of the International Space Station. Existing useful EVA computer analyses produce either high-resolution three-dimensional computer images based on anthropometric representations or empirically derived predictions of astronaut strength based on lean body mass and the position and velocity of body joints but do not provide multibody dynamic analysis of EVA tasks. Our physics-based methodology helps fill the current gap in quantitative analysis of astronaut EVA by providing a multisegment human model and solving the equations of motion in a high-fidelity simulation of the system dynamics. The simulation work described here improves on the realism of previous efforts by including three-dimensional astronaut motion, incorporating joint stops to account for the physiological limits of range of motion, and incorporating use of constraint forces to model interaction with objects. To demonstrate the utility of this approach, the simulation is modeled on an actual EVA task, namely, the attempted capture of a spinning Intelsat VI satellite during STS-49 in May 1992. Repeated capture attempts by an EVA crewmember were unsuccessful because the capture bar could not be held in contact with the satellite long enough for the capture latches to fire and successfully retrieve the satellite.

  16. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
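The LBP-TOP idea described above (local binary patterns collected from three orthogonal planes of a video volume, with the histograms concatenated) can be sketched as follows. This simplified version samples only the three central planes and uses plain 8-neighbour LBP; the published method accumulates codes over the whole volume and uses circular neighbourhoods.

```python
import numpy as np

def lbp_plane(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2-D plane."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit  # one bit per neighbour
    return codes

def lbp_top(volume):
    """Concatenated LBP histograms from the three orthogonal planes
    (XY, XT, YT) through the centre of a (T, Y, X) video volume."""
    t, y, x = (s // 2 for s in volume.shape)
    planes = [volume[t], volume[:, y, :], volume[:, :, x]]
    return np.concatenate([np.bincount(lbp_plane(p).ravel(), minlength=256)
                           for p in planes])

rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(12, 16, 16)).astype(np.int16)
feat = lbp_top(video)
print(feat.shape)  # 3 planes x 256 bins
```

The XY plane captures appearance while the XT and YT planes capture horizontal and vertical motion, which is why the concatenated descriptor combines both, as the abstract notes.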

  17. Neural correlates of the perception of dynamic versus static facial expressions of emotion.

    PubMed

    Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C; Abler, Birgit

    2011-04-20

    This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. A group of 30 healthy subjects was measured with fMRI when passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Irrespective of a specific emotion, dynamic stimuli selectively activated bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.

  18. [Facial paralysis in children].

    PubMed

    Muler, H; Paquelin, F; Cotin, G; Luboinski, B; Henin, J M

    1975-01-01

Facial paralyses in children may be grouped under headings displaying a certain amount of individuality. Chronologically, the first to be described are neonatal facial paralyses. These are common and are nearly always cured within a few days. Some of these cases are due to the mastoid being crushed at birth, with or without the use of forceps. The intra-osseous pathway of the facial nerve is then affected throughout its length. However, a cure is often spontaneous. When this desirable development does not take place within three months, the nerve should be freed by decompressive surgery. The special anatomy of the facial nerve in the new-born baby makes this a delicate operation. Later, in all stages of acute otitis, acute mastoiditis or chronic otitis, facial paralysis can be seen. Treatment depends on the stage reached by the otitis: paracentesis, mastoidectomy, various scraping procedures, and, of course, antibiotic therapy. The other causes of facial paralysis in children are very much less common: a frigore or viral, traumatic, or occurring in the course of acute poliomyelitis, shingles or tumours of the middle ear. To these must be added exceptional causes such as vitamin D intoxication, idiopathic hypercalcaemia and certain haemopathies.

  19. [Facial tics and spasms].

    PubMed

    Potgieser, Adriaan R E; van Dijk, J Marc C; Elting, Jan Willem J; de Koning-Tijssen, Marina A J

    2014-01-01

Facial tics and spasms are socially incapacitating, but effective treatment is often available. The clinical picture is sufficient for distinguishing between the different diseases that cause this affliction. We describe three cases of patients with facial tics or spasms: one case of tics, which are familiar to many physicians; one case of blepharospasm; and one case of hemifacial spasm. We discuss the differential diagnosis and the treatment possibilities for facial tics and spasms. Early diagnosis and treatment are important because of the associated social incapacitation. Botulinum toxin should be considered as a treatment option for facial tics, and a curative neurosurgical intervention should be considered for hemifacial spasms.

  20. Outcome of different facial nerve reconstruction techniques.

    PubMed

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

There is no technique of facial nerve reconstruction that guarantees recovery of facial function up to grade III. To evaluate the efficacy and safety of different facial nerve reconstruction techniques, facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in the other 11 patients). All patients had facial function House-Brackmann (HB) grade VI, caused either by trauma or by resection of a tumor. All patients underwent primary nerve reconstruction except seven, in whom late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. With the facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. With hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best results, without any neurological deficit. Among the various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique, with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  1. Motion Tracker: Camera-Based Monitoring of Bodily Movements Using Motion Silhouettes

    PubMed Central

    Westlund, Jacqueline Kory; D’Mello, Sidney K.; Olney, Andrew M.

    2015-01-01

Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in the bodily response systems including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy to calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker's estimates of movement and body movements recorded from the seat (r = .720) and back (r = .695, for participants with higher back movement) of a chair affixed with pressure sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r = .606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker's movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker's estimates were correlated with the movement of a person's head as tracked with a Kinect while the person was seated at a desk. Best-practice recommendations, limitations, and planned extensions of the system are discussed. PMID:26086771
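The core of such a camera-based movement estimate can be sketched with simple frame differencing: count how many pixels change appreciably between consecutive frames. This is a simplified illustration of the motion-silhouette idea, not the published Motion Tracker implementation.

```python
import numpy as np

def motion_score(frames, threshold=15):
    """Proportion of pixels changing by more than `threshold` between
    consecutive grayscale frames -- a crude motion-silhouette estimate."""
    frames = np.asarray(frames, dtype=np.int16)   # avoid uint8 wraparound
    diffs = np.abs(np.diff(frames, axis=0))
    return (diffs > threshold).mean(axis=(1, 2))  # one score per frame pair

# Synthetic 3-frame clip: a bright square jumps between frames.
f = np.zeros((3, 64, 64))
f[1, 10:20, 10:20] = 255
f[2, 30:40, 30:40] = 255
print(motion_score(f))
```

A per-frame score like this, summed or smoothed over a recording window, gives the kind of scalar movement estimate that the paper correlates against pressure-sensor, accelerometer, and Kinect ground truth.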

  2. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

Facial expressions, resulting from movements of the facial muscles, are the changes in the face that occur in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22] was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  3. Facial transplantation for massive traumatic injuries.

    PubMed

    Alam, Daniel S; Chi, John J

    2013-10-01

    This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Quality of life assessment in facial palsy: validation of the Dutch Facial Clinimetric Evaluation Scale.

    PubMed

    Kleiss, Ingrid J; Beurskens, Carien H G; Stalmeier, Peep F M; Ingels, Koen J A O; Marres, Henri A M

    2015-08-01

This study aimed at validating an existing health-related quality of life questionnaire for patients with facial palsy for implementation in the Dutch language and culture. The Facial Clinimetric Evaluation Scale was translated into the Dutch language using a forward-backward translation method. A pilot test with the translated questionnaire was performed in 10 patients with facial palsy and 10 normal subjects. Finally, cross-cultural adaptation was accomplished at our outpatient clinic for facial palsy. Analyses for internal consistency, test-retest reliability, construct validity and responsiveness were performed. Ninety-three patients completed the Dutch Facial Clinimetric Evaluation Scale, the Dutch Facial Disability Index, and the Dutch Short Form (36) Health Survey. Cronbach's α, representing internal consistency, was 0.800. Test-retest reliability was shown by an intraclass correlation coefficient of 0.737. Correlations with the House-Brackmann score, Sunnybrook score, Facial Disability Index physical function, and social/well-being function were -0.292, 0.570, 0.713, and 0.575, respectively. The SF-36 domains correlate best with the FaCE social function domain, with the strongest correlation between the two social function domains (r = 0.576). The FaCE score increased statistically significantly in 35 patients receiving botulinum toxin type A (P = 0.042, Student's t-test). The domains 'facial comfort' and 'social function' improved statistically significantly as well (P = 0.022 and P = 0.046, respectively, Student's t-test). The Dutch Facial Clinimetric Evaluation Scale shows good psychometric values and can be implemented in the management of Dutch-speaking patients with facial palsy in the Netherlands. Translation of the instrument into other languages may lead to widespread use, making evaluation and comparison possible among different providers.

  5. Facial nerve conduction after sclerotherapy in children with facial lymphatic malformations: report of two cases.

    PubMed

    Lin, Pei-Jung; Guo, Yuh-Cherng; Lin, Jan-You; Chang, Yu-Tang

    2007-04-01

Surgical excision is thought to be the standard treatment of choice for lymphatic malformations. However, when the lesions are limited to the face only, surgical scarring and facial nerve injury may impair cosmesis and facial expression. Sclerotherapy, an injection of a sclerosing agent directly through the skin into a lesion, is an alternative method. By evaluating facial nerve conduction, we observed the long-term effect of facial lymphatic malformations after intralesional injection of OK-432 and correlated the findings with anatomic outcomes. A 12-year-old boy with a lesion over the right preauricular area adjacent to the main trunk of the facial nerve and a 5-year-old boy with a lesion in the left cheek involving the buccinator muscle were enrolled. Follow-up data covering more than one year, including clinical appearance, computed tomography (CT) scans and facial nerve evaluation, were collected. The facial nerve conduction study was normal in both cases. Blink reflex in both children revealed normal results as well. Complete resolution was noted on outward appearance and CT scan. The neurophysiologic data were compatible with good anatomic and functional outcomes. Our report suggests that the inflammatory reaction of OK-432 did not interfere with adjacent facial nerve conduction.

  6. Selective attention to a facial feature with and without facial context: an ERP-study.

    PubMed

    Wijers, A A; Van Besouw, N J P; Mulder, G

    2002-04-01

    The present experiment addressed the question whether selectively attending to a facial feature (mouth shape) would benefit from the presence of a correct facial context. Subjects attended selectively to one of two possible mouth shapes belonging to photographs of a face with a happy or sad expression, respectively. These mouths were presented randomly either in isolation, embedded in the original photos, or in an exchanged facial context. The ERP effect of attending mouth shape was a lateral posterior negativity, anterior positivity with an onset latency of 160-200 ms; this effect was completely unaffected by the type of facial context. When the mouth shape and the facial context conflicted, this resulted in a medial parieto-occipital positivity with an onset latency of 180 ms, independent of the relevance of the mouth shape. Finally, there was a late (onset at approx. 400 ms) expression (happy vs. sad) effect, which was strongly lateralized to the right posterior hemisphere and was most prominent for attended stimuli in the correct facial context. For the isolated mouth stimuli, a similarly distributed expression effect was observed at an earlier latency range (180-240 ms). These data suggest the existence of separate, independent and neuroanatomically segregated processors engaged in the selective processing of facial features and the detection of contextual congruence and emotional expression of face stimuli. The data do not support that early selective attention processes benefit from top-down constraints provided by the correct facial context.

  7. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks

    PubMed Central

    Vakanski, A; Ferguson, JM; Lee, S

    2016-01-01

    Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publically available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach

  8. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks.

    PubMed

    Vakanski, A; Ferguson, J M; Lee, S

    2016-12-01

    The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publically available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. 
The described approach employs the recent progress in the field of
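
    The consistency metric named in this record, the mean log-likelihood of an observed motion sequence under a mixture of Gaussians, can be sketched as follows. This is a minimal illustration of the metric only; the mixture parameters here are hypothetical stand-ins for what the paper's mixture density subnet would produce.

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of one motion frame x under a diagonal Gaussian mixture."""
    comp = []
    for w, mu, var in zip(weights, means, variances):
        log_norm = -0.5 * np.sum(np.log(2.0 * np.pi * var))  # Gaussian normaliser
        log_expo = -0.5 * np.sum((x - mu) ** 2 / var)        # Gaussian exponent
        comp.append(np.log(w) + log_norm + log_expo)
    comp = np.array(comp)
    m = comp.max()  # log-sum-exp trick for numerical stability
    return float(m + np.log(np.sum(np.exp(comp - m))))

def mean_log_likelihood(sequence, weights, means, variances):
    """Mean per-frame log-likelihood: the consistency score for one performance."""
    return float(np.mean([gmm_log_likelihood(f, weights, means, variances)
                          for f in sequence]))
```

    A higher mean log-likelihood indicates that a subject's recorded exercise is more consistent with the reference model of prescribed motions.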

  9. Power estimation of martial arts movement using 3D motion capture camera

    NASA Astrophysics Data System (ADS)

    Azraai, Nur Zaidi; Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir

    2017-06-01

    Motion capture camera (MOCAP) systems have been widely used in many areas such as biomechanics, physiology, animation, arts, etc. This project approaches the problem through physics mechanics and extends the application of MOCAP to sports. Most researchers use a force plate, but that can only measure the force of impact; here we are keen to observe the kinematics of the movement. Martial arts is one of the sports that uses more than one part of the human body. For this project, the martial art `Silat' was chosen because of its wide practice in Malaysia. Two performers were selected, one experienced in `Silat' practice and one with no experience at all, so that the energy and force generated by the performers could be compared. Each performer delivered punches with the same posture; two types of punching moves were selected for this project. Before measurement started, a calibration was performed using a T-stick fitted with markers, so that the software knew the area covered by the cameras and errors during analysis were reduced. A punching bag with a mass of 60 kg was hung on an iron bar as a target; it was used to determine the impact force of a performer's punch. Optical markers were also attached to the punching bag so that its movement after impact could be observed. Eight cameras were used, two on each wall at different angles, in a rectangular room of 270 ft², with the cameras covering approximately 50 ft². Only a small area was covered so that less noise would be detected, making the measurement more accurate. Markers were attached along the entire arm to be observed and measured. The passive markers used in this project reflect the infrared light generated by the cameras; the reflected infrared reaches the camera sensors so that marker positions can be detected and shown in the software. Many cameras were used to increase the
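
    The kinematic quantities this record pursues can be estimated from marker trajectories by finite differences. The sketch below is an illustration, not the authors' pipeline: the fist mass and contact time are hypothetical inputs, and the impulse-momentum relation F ≈ m·Δv/Δt assumes the fist decelerates to rest during contact.

```python
import numpy as np

def punch_metrics(positions, timestamps, fist_mass_kg):
    """Peak speed (m/s) and kinetic energy (J) from one marker trajectory.
    positions: (N, 3) array in metres; timestamps: (N,) array in seconds."""
    p = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    v = np.diff(p, axis=0) / np.diff(t)[:, None]  # finite-difference velocities
    speed = np.linalg.norm(v, axis=1)
    peak = float(speed.max())
    return peak, 0.5 * fist_mass_kg * peak ** 2

def mean_impact_force(fist_mass_kg, peak_speed, contact_time_s):
    """Impulse-momentum estimate: F ~ m * dv / dt, fist decelerating to rest."""
    return fist_mass_kg * peak_speed / contact_time_s
```

    For example, a hypothetical 0.6 kg fist arriving at 5 m/s and stopping over a 20 ms contact would give a mean impact force of roughly 150 N.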

  10. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma

    PubMed Central

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M.; Ginsberg, Lawrence E.; Gidley, Paul W.

    2014-01-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy. PMID:25083397

  11. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma.

    PubMed

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M; Ginsberg, Lawrence E; Gidley, Paul W

    2014-08-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy.

  12. Advances in facial reanimation.

    PubMed

    Tate, James R; Tollefson, Travis T

    2006-08-01

    Facial paralysis often has a significant emotional impact on patients. Along with the myriad of new surgical techniques in managing facial paralysis comes the challenge of selecting the most effective procedure for the patient. This review delineates common surgical techniques and reviews state-of-the-art techniques. The options for dynamic reanimation of the paralyzed face must be examined in the context of several patient factors, including age, overall health, and patient desires. The best functional results are obtained with direct facial nerve anastomosis and interpositional nerve grafts. In long-standing facial paralysis, temporalis muscle transfer gives a dependable and quick result. Microvascular free tissue transfer is a reliable technique with reanimation potential whose results continue to improve as microsurgical expertise increases. Postoperative results can be improved with ancillary soft tissue procedures, as well as botulinum toxin. The paper provides an overview of recent advances in facial reanimation, including preoperative assessment, surgical reconstruction options, and postoperative management.

  13. Facial paralysis for the plastic surgeon.

    PubMed

    Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory Rd; Wirth, Garrett A

    2007-01-01

    Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or 'hands-on', aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper.

  14. Facial paralysis for the plastic surgeon

    PubMed Central

    Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory RD; Wirth, Garrett A

    2007-01-01

    Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or ‘hands-on’, aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper. PMID:19554190

  15. Augmentation of linear facial anthropometrics through modern morphometrics: a facial convexity example.

    PubMed

    Wei, R; Claes, P; Walters, M; Wholley, C; Clement, J G

    2011-06-01

    The facial region has traditionally been quantified using linear anthropometrics. These are well established in dentistry, but require expertise to be used effectively. The aim of this study was to augment the utility of linear anthropometrics by applying them in conjunction with modern 3-D morphometrics. Facial images of 75 males and 94 females aged 18-25 years with self-reported Caucasian ancestry were used. An anthropometric mask was applied to establish corresponding quasi-landmarks on the images in the dataset. A statistical face-space, encoding shape covariation, was established. The facial median plane was extracted facilitating both manual and automated indication of commonly used midline landmarks. From both indications, facial convexity angles were calculated and compared. The angles were related to the face-space using a regression based pathway enabling the visualization of facial form associated with convexity variation. Good agreement between the manual and automated angles was found (Pearson correlation: 0.9478-0.9474, Dahlberg root mean squared error: 1.15°-1.24°). The population mean angle was 166.59°-166.29° (SD 5.09°-5.2°) for males-females. The angle-pathway provided valuable feedback. Linear facial anthropometrics can be extended when used in combination with a face-space derived from 3-D scans and the exploration of property pathways inferred in a statistically verifiable way. © 2011 Australian Dental Association.
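
    A facial convexity angle of the kind measured in this record can be computed from three midline landmarks. The sketch below assumes the common glabella-subnasale-pogonion convention; the record does not state which midline landmarks were used, so treat the choice as illustrative.

```python
import numpy as np

def convexity_angle(glabella, subnasale, pogonion):
    """Facial convexity angle (degrees) at subnasale, formed by the
    glabella-subnasale and subnasale-pogonion vectors."""
    g, s, p = (np.asarray(v, dtype=float) for v in (glabella, subnasale, pogonion))
    u, w = g - s, p - s
    cos_a = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
    # clip guards against rounding error just outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```

    Perfectly collinear landmarks give 180°; smaller angles indicate a more convex profile, consistent with the population means near 166° reported above.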

  16. Motion dazzle and camouflage as distinct anti-predator defenses.

    PubMed

    Stevens, Martin; Searle, W Tom L; Seymour, Jenny E; Marshall, Kate L A; Ruxton, Graeme D

    2011-11-25

    Camouflage patterns that hinder detection and/or recognition by antagonists are widely studied in both human and animal contexts. Patterns of contrasting stripes that purportedly degrade an observer's ability to judge the speed and direction of moving prey ('motion dazzle') are, however, rarely investigated. This is despite motion dazzle having been fundamental to the appearance of warships in both world wars and often postulated as the selective agent leading to repeated patterns on many animals (such as zebra and many fish, snake, and invertebrate species). Such patterns often appear conspicuous, suggesting that protection while moving by motion dazzle might impair camouflage when stationary. However, the relationship between motion dazzle and camouflage is unclear because disruptive camouflage relies on high-contrast markings. In this study, we used a computer game with human subjects detecting and capturing either moving or stationary targets with different patterns, in order to provide the first empirical exploration of the interaction of these two protective coloration mechanisms. Moving targets with stripes were caught significantly less often and missed more often than targets with camouflage patterns. However, when stationary, targets with camouflage markings were captured less often and caused more false detections than those with striped patterns, which were readily detected. Our study provides the clearest evidence to date that some patterns inhibit the capture of moving targets, but that camouflage and motion dazzle are not complementary strategies. Therefore, the specific coloration that evolves in animals will depend on how the life history and ontogeny of each species influence the trade-off between the costs and benefits of motion dazzle and camouflage.

  17. Emotion Recognition in Face and Body Motion in Bulimia Nervosa.

    PubMed

    Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate

    2017-11-01

    Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  18. Marquardt’s Facial Golden Decagon Mask and Its Fitness with South Indian Facial Traits

    PubMed Central

    Gandikota, Chandra Sekhar; Yadagiri, Poornima K; Manne, Ranjit; Juvvadi, Shubhaker Rao; Farah, Tamkeen; Vattipelli, Shilpa; Gumbelli, Sangeetha

    2016-01-01

    Introduction The mathematical ratio of 1:1.618, famously known as the golden ratio, seems to appear recurrently in beautiful things in nature as well as in other things that are seen as beautiful. Dr. Marquardt developed a facial golden mask that incorporates all of the one-dimensional and two-dimensional geometric golden elements formed from the golden ratio, and he claimed that beauty is universal: beautiful faces conform to the facial golden mask regardless of sex and race. Aim The purpose of this study was to evaluate the goodness of fit of the golden facial mask with South Indian facial traits. Materials and Methods A total of 150 subjects (75 males & 75 females) with attractive faces were selected according to cephalometric orthodontic standards of a skeletal Class I relation. The facial aesthetics were confirmed by the aesthetic evaluation of the frontal photographs of the subjects by a panel of ten evaluators including five orthodontists and five maxillofacial surgeons. The well-proportioned photographs were superimposed with the golden mask along the reference lines to evaluate the goodness of fit. Results South Indian males and females invariably show a wider inter-zygomatic and inter-gonial width than the golden mask. Most of the South Indian females and males show decreased mid-facial height compared to the golden mask, while the total facial height is more or less equal to the golden mask. Conclusion Ethnic or individual discrepancies cannot be totally ignored, as in our study the mask did not fit exactly with the South Indian facial traits, but the beauty ratios came closer to those of the mask. To overcome this difficulty, there is a need to develop variants of the golden facial mask for different ethnic groups. PMID:27190951

  19. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    PubMed Central

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions

  20. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.

    PubMed

    de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal

    2018-06-01

    Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results are well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also proceed on visual information alone.

  1. Facial Soft Tissue Trauma

    PubMed Central

    Kretlow, James D.; McKnight, Aisha J.; Izaddoost, Shayan A.

    2010-01-01

    Traumatic facial soft tissue injuries are commonly encountered in the emergency department by plastic surgeons and other providers. Although rarely life-threatening, the treatment of these injuries can be complex and may have significant impact on the patient's facial function and aesthetics. This article provides a review of the relevant literature related to this topic and describes the authors' approach to the evaluation and management of the patient with facial soft tissue injuries. PMID:22550459

  2. Quantitative Magnetic Resonance Imaging Volumetry of Facial Muscles in Healthy Patients with Facial Palsy

    PubMed Central

    Volk, Gerd F.; Karamyan, Inna; Klingner, Carsten M.; Reichenbach, Jürgen R.

    2014-01-01

    Background: Magnetic resonance imaging (MRI) has not yet been established systematically to detect structural muscular changes after facial nerve lesion. The purpose of this pilot study was to investigate quantitative assessment of MRI muscle volume data for facial muscles. Methods: Ten healthy subjects and 5 patients with facial palsy were recruited. Using manual or semiautomatic segmentation of 3T MRI, volume measurements were performed for the frontal, procerus, risorius, corrugator supercilii, orbicularis oculi, nasalis, zygomaticus major, zygomaticus minor, levator labii superioris, orbicularis oris, depressor anguli oris, depressor labii inferioris, and mentalis, as well as for the masseter and temporalis as masticatory muscles for control. Results: All muscles except the frontal (identification in 4/10 volunteers), procerus (4/10), risorius (6/10), and zygomaticus minor (8/10) were identified in all volunteers. Sex or age effects were not seen (all P > 0.05). There was no facial asymmetry with exception of the zygomaticus major (larger on the left side; P = 0.012). The exploratory examination of 5 patients revealed considerably smaller muscle volumes on the palsy side 2 months after facial injury. One patient with chronic palsy showed substantial muscle volume decrease, which also occurred in another patient with incomplete chronic palsy restricted to the involved facial area. Facial nerve reconstruction led to mixed results of decreased but also increased muscle volumes on the palsy side compared with the healthy side. Conclusions: First systematic quantitative MRI volume measures of 5 different clinical presentations of facial paralysis are provided. PMID:25289366

  3. Motion of the Ca2+-pump captured.

    PubMed

    Yokokawa, Masatoshi; Takeyasu, Kunio

    2011-09-01

    Studies of ion pumps, such as ATP synthetase and Ca(2+)-ATPase, have a long history. The crystal structures of several kinds of ion pump have been resolved, and provide static pictures of mechanisms of ion transport. In this study, using fast-scanning atomic force microscopy, we have visualized conformational changes in the sarcoplasmic reticulum Ca(2+)-ATPase (SERCA) in real time at the single-molecule level. The analyses of individual SERCA molecules in the presence of both ATP and free Ca(2+) revealed up-down structural changes corresponding to the Albers-Post scheme. This fluctuation was strongly affected by the ATP and Ca(2+) concentrations, and was prevented by an inhibitor, thapsigargin. Interestingly, at physiological ATP concentrations, the up-down motion disappeared completely. These results indicate that SERCA does not transit through the shortest structure, and has a catalytic pathway different from the ordinary Albers-Post scheme under physiological conditions. © 2011 The Authors Journal compilation © 2011 FEBS.

  4. The effects of a daily facial lotion containing vitamins B3 and E and provitamin B5 on the facial skin of Indian women: a randomized, double-blind trial.

    PubMed

    Jerajani, Hemangi R; Mizoguchi, Haruko; Li, James; Whittenbarger, Debora J; Marmor, Michael J

    2010-01-01

    The B vitamins niacinamide and panthenol have been shown to reduce many signs of skin aging, including hyperpigmentation and redness. To measure the facial skin effects in Indian women of the daily use of a lotion containing niacinamide, panthenol, and tocopheryl acetate using quantitative image analysis. Adult women 30-60 years of age with epidermal hyperpigmentation were recruited in Mumbai and randomly assigned to apply a test or control lotion to the face daily for 10 weeks. Effects on skin tone were measured using an image capturing system and associated software. Skin texture was assessed by expert graders. Barrier function was evaluated by transepithelial water loss measurements. Subjects and evaluators were blinded to the product assignment. Of 246 women randomized to treatment, 207 (84%) completed the study. Women who used the test lotion experienced significantly reduced appearance of hyperpigmentation, improved skin tone evenness, appearance of lightening of skin, and positive effects on skin texture. Improvements versus control were seen as early as 6 weeks. The test lotion was well tolerated. The most common adverse event was a transient, mild burning sensation. Daily use of a facial lotion containing niacinamide, panthenol, and tocopheryl acetate improved skin tone and texture and was well tolerated in Indian women with facial signs of aging.

  5. The relationship between action-effect monitoring and attention capture.

    PubMed

    Kumar, Neeraj; Manjaly, Jaison A; Sunny, Meera Mary

    2015-02-01

    Many recent findings suggest that stimuli that are perceived to be the consequence of one's own actions are processed with priority. According to the preactivation account of intentional binding, predicted consequences are preactivated and hence receive a temporal advantage in processing. The implications of the preactivation account are important for theories of attention capture, as temporal advantage often translates to attention capture. Hence, action might modulate attention capture by feature singletons. Experiment 1 showed that a motion onset and color change captured attention only when it was preceded by an action. Experiment 2 showed that the capture occurs only with predictable, but not with unpredictable, consequences of action. Experiment 3 showed that even when half the display changed color at display transition, they were all prioritized. The results suggest that action modulates attentional control.

  6. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscle movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
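
    A rule-based action-unit check of the kind described in this record can be reduced to simple geometric tests on AAM feature points. The toy rule below is a hypothetical sketch: the point names, the distance cue, and the threshold are invented for illustration and do not reproduce the authors' FACS rules.

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def detect_brow_lower(shape, neutral, threshold=0.1):
    """Toy brow-lowering cue (AU4-like): fire when the inner-brow to
    eye-corner distance shrinks by more than `threshold` relative to the
    neutral frame. Point names and threshold are hypothetical; a real AAM
    shape model indexes its mesh vertices differently."""
    d = distance(shape["inner_brow"], shape["eye_corner"])
    d0 = distance(neutral["inner_brow"], neutral["eye_corner"])
    return (d0 - d) / d0 > threshold
```

    Comparing each frame against a neutral reference frame, rather than training a classifier, is what makes such rules workable when the target actions are rare in the data.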

  7. [Prosopagnosia and facial expression recognition].

    PubMed

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  8. Management of Chronic Facial Pain

    PubMed Central

    Williams, Christopher G.; Dellon, A. Lee; Rosson, Gedge D.

    2009-01-01

    Pain persisting for at least 6 months is defined as chronic. Chronic facial pain conditions often take on lives of their own deleteriously changing the lives of the sufferer. Although much is known about facial pain, it is clear that those physicians who treat these conditions should continue elucidating the mechanisms and defining successful treatment strategies for these life-changing conditions. This article will review many of the classic causes of chronic facial pain due to the trigeminal nerve and its branches that are amenable to surgical therapies. Testing of facial sensibility is described and its utility introduced. We will also introduce some of the current hypotheses of atypical facial pain and headaches secondary to chronic nerve compressions and will suggest possible treatment strategies. PMID:22110799

  9. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    PubMed Central

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been documented (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  10. Facial mimicry in its social setting

    PubMed Central

    Seibt, Beate; Mühlberger, Andreas; Likowski, Katja U.; Weyers, Peter

    2015-01-01

    In interpersonal encounters, individuals often exhibit changes in their own facial expressions in response to the emotional expressions of another person. Such changes are often called facial mimicry. While this first appeared to be an automatic tendency of the perceiver to show the same emotional expression as the sender, evidence is now accumulating that situation, person, and relationship jointly determine whether, and for which emotions, such congruent facial behavior is shown. We review the evidence regarding the moderating influence of such factors on facial mimicry, with a focus on understanding the meaning of facial responses to emotional expressions in a particular constellation. From this, we derive recommendations for a research agenda with a stronger focus on the most common forms of encounters, actual interactions with known others, and on assessing potential mediators of facial mimicry. We conclude that facial mimicry is modulated by many factors: attention deployment and sensitivity, detection of valence, emotional feelings, and social motivations. We posit that these are the more proximal causes of changes in facial mimicry due to changes in its social setting. PMID:26321970

  11. Motion prediction of a non-cooperative space target

    NASA Astrophysics Data System (ADS)

    Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan

    2018-01-01

    Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of the target's motion information is a prerequisite for capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. To predict the motion of the free-floating non-cooperative target, its dynamic parameters, such as inertia, angular momentum, and kinetic energy, must first be identified (estimated); the predicted motion can then be obtained by substituting these identified parameters into Euler's equations for the target. Accurate prediction requires precise identification. This paper presents an effective two-step method to identify these dynamic parameters of a free-floating non-cooperative target: (1) a rough estimate of the parameters is computed from motion observations of the target, and (2) the best estimate is found by optimization. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm; it starts at the rough estimate obtained in the first step and, guided by the objective function's gradient, converges quickly to a minimum, so an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
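
    The identification-then-prediction pipeline can be sketched as follows. This is a hypothetical simplification, not the authors' code: Euler's equations are written in inertia-ratio form, and a coarse grid search stands in for the paper's two-step rough-estimate-plus-interior-point scheme; all data and parameter values are illustrative.

```python
# Hypothetical sketch: simulate a torque-free rigid body with Euler's
# equations in inertia-ratio form, then identify the ratios by minimising
# the squared difference between observed and predicted motion (a coarse
# grid search stands in for the rough estimate + interior-point refinement).

def euler_step(w, p, q, dt):
    """One explicit-Euler step of the torque-free Euler equations.

    w1' = p*w2*w3, w2' = q*w3*w1, w3' = r*w1*w2, where
    p = (I2-I3)/I1 and q = (I3-I1)/I2; for principal inertias the third
    ratio is fixed by the identity r = -(p + q) / (1 + p*q).
    """
    r = -(p + q) / (1.0 + p * q)
    w1, w2, w3 = w
    return (w1 + dt * p * w2 * w3,
            w2 + dt * q * w3 * w1,
            w3 + dt * r * w1 * w2)

def simulate(w0, p, q, dt, n):
    """Predict n steps of angular velocity from initial state w0."""
    ws, w = [w0], w0
    for _ in range(n):
        w = euler_step(w, p, q, dt)
        ws.append(w)
    return ws

def identify(observed, dt, grid):
    """Objective: squared difference between observed and predicted motion."""
    best = None
    for p in grid:
        for q in grid:
            pred = simulate(observed[0], p, q, dt, len(observed) - 1)
            err = sum((a - b) ** 2
                      for wo, wp in zip(observed, pred)
                      for a, b in zip(wo, wp))
            if best is None or err < best[0]:
                best = (err, p, q)
    return best[1], best[2]
```

    With the ratios identified, `simulate` itself serves as the motion predictor, mirroring the paper's structure of identification followed by prediction.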

  12. Facial neuroma masquerading as acoustic neuroma.

    PubMed

    Sayegh, Eli T; Kaur, Gurvinder; Ivan, Michael E; Bloch, Orin; Cheung, Steven W; Parsa, Andrew T

    2014-10-01

    Facial nerve neuromas are rare benign tumors that may be initially misdiagnosed as acoustic neuromas when situated near the auditory apparatus. We describe a patient with a large cystic tumor with associated trigeminal, facial, audiovestibular, and brainstem dysfunction, which was suspicious for acoustic neuroma on preoperative neuroimaging. Intraoperative investigation revealed a facial nerve neuroma located in the cerebellopontine angle and internal acoustic canal. Gross total resection of the tumor via retrosigmoid craniotomy was curative. Transection of the facial nerve necessitated facial reanimation 4 months later via hypoglossal-facial cross-anastomosis. Clinicians should recognize the natural history, diagnostic approach, and management of this unusual and mimetic lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Characterisation of dynamic couplings at lower limb residuum/socket interface using 3D motion capture.

    PubMed

    Tang, Jinghua; McGrath, Michael; Laszczak, Piotr; Jiang, Liudi; Bader, Dan L; Moser, David; Zahedi, Saeed

    2015-12-01

    Design and fitting of artificial limbs to lower limb amputees are largely based on the subjective judgement of the prosthetist. Understanding the science of three-dimensional (3D) dynamic coupling at the residuum/socket interface could potentially aid the design and fitting of the socket. A new method has been developed to characterise the 3D dynamic coupling at the residuum/socket interface using 3D motion capture based on a single case study of a trans-femoral amputee. The new model incorporated a Virtual Residuum Segment (VRS) and a Socket Segment (SS) which combined to form the residuum/socket interface. Angular and axial couplings between the two segments were subsequently determined. Results indicated a non-rigid angular coupling in excess of 10° in the quasi-sagittal plane and an axial coupling of between 21 and 35 mm. The corresponding angular couplings of less than 4° and 2° were estimated in the quasi-coronal and quasi-transverse plane, respectively. We propose that the combined experimental and analytical approach adopted in this case study could aid the iterative socket fitting process and could potentially lead to a new socket design. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
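
    The angular and axial couplings described above can be illustrated with a toy computation; the marker geometry and function names below are invented for the example, and the paper's VRS/SS segment definitions are more elaborate.

```python
# Illustrative sketch (hypothetical marker data): quantify angular and axial
# coupling between a socket segment and a (virtual) residuum segment, each
# defined here by two markers along its long axis.
import numpy as np

def coupling(socket_top, socket_bottom, residuum_top, residuum_bottom):
    s_axis = socket_top - socket_bottom
    r_axis = residuum_top - residuum_bottom
    # Angular coupling: angle between the two segment axes, in degrees.
    cosang = np.dot(s_axis, r_axis) / (np.linalg.norm(s_axis)
                                       * np.linalg.norm(r_axis))
    angular = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    # Axial coupling: displacement of the residuum base along the socket's
    # long axis (same units as the marker coordinates, e.g. mm).
    axial = np.dot(residuum_bottom - socket_bottom,
                   s_axis / np.linalg.norm(s_axis))
    return angular, axial
```

    Tracking these two quantities frame by frame over a gait cycle would give the kind of angular (degrees) and axial (mm) coupling ranges reported in the study.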

  14. Joint PET-MR respiratory motion models for clinical PET motion correction

    NASA Astrophysics Data System (ADS)

    Manber, Richard; Thielemans, Kris; Hutton, Brian F.; Wan, Simon; McClelland, Jamie; Barnes, Anna; Arridge, Simon; Ourselin, Sébastien; Atkinson, David

    2016-09-01

    Patient motion due to respiration can lead to artefacts and blurring in positron emission tomography (PET) images, in addition to quantification errors. The integration of PET with magnetic resonance (MR) imaging in PET-MR scanners provides complementary clinical information, and allows the use of high spatial resolution and high contrast MR images to monitor and correct motion-corrupted PET data. In this paper we build on previous work to form a methodology for respiratory motion correction of PET data, and show it can improve PET image quality whilst having minimal impact on clinical PET-MR protocols. We introduce a joint PET-MR motion model, using only 1 min per PET bed position of simultaneously acquired PET and MR data to provide a respiratory motion correspondence model that captures inter-cycle and intra-cycle breathing variations. In the model setup, 2D multi-slice MR provides the dynamic imaging component, and PET data, via low spatial resolution framing and principal component analysis, provides the model surrogate. We evaluate different motion models (1D and 2D linear, and 1D and 2D polynomial) by computing model-fit and model-prediction errors on dynamic MR images on a data set of 45 patients. Finally we apply the motion model methodology to 5 clinical PET-MR oncology patient datasets. Qualitative PET reconstruction improvements and artefact reduction are assessed with visual analysis, and quantitative improvements are calculated using standardised uptake value (SUVpeak and SUVmax) changes in avid lesions. We demonstrate the capability of a joint PET-MR motion model to predict respiratory motion by showing significantly improved image quality of PET data acquired before the motion model data. The method can be used to incorporate motion into the reconstruction of any length of PET acquisition, with only 1 min of extra scan time, and with no external hardware required.
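
    A minimal sketch of the surrogate-plus-correspondence idea, with synthetic data standing in for the PET frames and MR-derived motion (all names and numbers are illustrative, and only the 1D linear model variant is shown):

```python
# Sketch: derive a respiratory surrogate from low-resolution PET frames via
# PCA, then fit a linear correspondence model predicting MR-derived motion
# from the surrogate. Data are simulated; this is not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 240)                # 1 min of data, 0.25 s frames
resp = np.sin(2 * np.pi * t / 5)           # idealised breathing trace

# Low-resolution PET frames: each "voxel" modulated by respiration + noise.
frames = np.outer(resp, rng.normal(1, 0.3, 50)) \
    + rng.normal(0, 0.1, (240, 50))

# PCA surrogate: first principal component of the mean-centred frame series.
X = frames - frames.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
surrogate = X @ vt[0]

# MR-derived motion (e.g. diaphragm displacement in mm) to be modelled.
motion = 8.0 * resp + rng.normal(0, 0.2, 240)

# 1D linear motion model: motion ~ a * surrogate + b, fit by least squares.
A = np.column_stack([surrogate, np.ones_like(surrogate)])
(a, b), *_ = np.linalg.lstsq(A, motion, rcond=None)
predicted = a * surrogate + b
rmse = np.sqrt(np.mean((predicted - motion) ** 2))  # model-fit error
```

    The paper's 2D and polynomial variants simply enlarge the design matrix `A`; model-fit and model-prediction errors as used in the evaluation correspond to computing `rmse` on the training and held-out portions of the data, respectively.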

  15. Self-recognition of avatar motion: how do I know it's me?

    PubMed

    Cook, Richard; Johnston, Alan; Heyes, Cecilia

    2012-02-22

    When motion is isolated from form cues and viewed from third-person perspectives, individuals are able to recognize their own whole body movements better than those of friends. Because we rarely see our own bodies in motion from third-person viewpoints, this self-recognition advantage may indicate a contribution to perception from the motor system. Our first experiment provides evidence that recognition of self-produced and friends' motion dissociate, with only the latter showing sensitivity to orientation. Through the use of selectively disrupted avatar motion, our second experiment shows that self-recognition of facial motion is mediated by knowledge of the local temporal characteristics of one's own actions. Specifically, inverted self-recognition was unaffected by disruption of feature configurations and trajectories, but eliminated by temporal distortion. While actors lack third-person visual experience of their actions, they have a lifetime of proprioceptive, somatosensory, vestibular and first-person-visual experience. These sources of contingent feedback may provide actors with knowledge about the temporal properties of their actions, potentially supporting recognition of characteristic rhythmic variation when viewing self-produced motion. In contrast, the ability to recognize the motion signatures of familiar others may be dependent on configural topographic cues.

  16. Facial Transplantation Surgery Introduction

    PubMed Central

    2015-01-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotransplantation permits optimal anatomical reconstruction and provides desired functional, esthetic, and psychosocial benefits that are far superior to those achieved with conventional methods. Along with dramatic improvements in their functional statuses, patients regain the ability to make facial expressions such as smiling and to perform various functions such as smelling, eating, drinking, and speaking. The ideas in the 1997 movie "Face/Off" have now been realized in the clinical field. The objective of this article is to introduce this new surgical field, provide a basis for examining the status of the field of face transplantation, and stimulate and enhance facial transplantation studies in Korea. PMID:26028914

  17. Facial transplantation surgery introduction.

    PubMed

    Eun, Seok-Chan

    2015-06-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotransplantation permits optimal anatomical reconstruction and provides desired functional, esthetic, and psychosocial benefits that are far superior to those achieved with conventional methods. Along with dramatic improvements in their functional statuses, patients regain the ability to make facial expressions such as smiling and to perform various functions such as smelling, eating, drinking, and speaking. The ideas in the 1997 movie "Face/Off" have now been realized in the clinical field. The objective of this article is to introduce this new surgical field, provide a basis for examining the status of the field of face transplantation, and stimulate and enhance facial transplantation studies in Korea.

  18. Development of a universal measure of quadrupedal forelimb-hindlimb coordination using digital motion capture and computerised analysis.

    PubMed

    Hamilton, Lindsay; Franklin, Robin J M; Jeffery, Nick D

    2007-09-18

    Clinical spinal cord injury in domestic dogs provides a model population in which to test the efficacy of putative therapeutic interventions for human spinal cord injury. To achieve this potential a robust method of functional analysis is required so that statistical comparison of numerical data derived from treated and control animals can be achieved. In this study we describe the use of digital motion capture equipment combined with mathematical analysis to derive a simple quantitative parameter - 'the mean diagonal coupling interval' - to describe coordination between forelimb and hindlimb movement. In normal dogs this parameter is independent of size, conformation, speed of walking or gait pattern. We show here that mean diagonal coupling interval is highly sensitive to alterations in forelimb-hindlimb coordination in dogs that have suffered spinal cord injury, and can be accurately quantified, but is unaffected by orthopaedic perturbations of gait. Mean diagonal coupling interval is an easily derived, highly robust measurement that provides an ideal method to compare the functional effect of therapeutic interventions after spinal cord injury in quadrupeds.
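
    A toy version of the coupling measure might look like the following; the exact definition used by the authors (e.g., normalisation and gait-event detection) is assumed here for illustration, not taken from the paper.

```python
# Illustrative sketch (assumed definitions): compute a "mean diagonal
# coupling interval" as the average delay between a hindlimb foot-strike
# and the next foot-strike of its diagonal forelimb, normalised by stride
# duration so the measure is independent of walking speed.

def mean_diagonal_coupling_interval(fore_strikes, hind_strikes):
    # fore_strikes / hind_strikes: sorted foot-strike times (s) of a
    # diagonal limb pair, e.g. right forelimb and left hindlimb.
    stride = (fore_strikes[-1] - fore_strikes[0]) / (len(fore_strikes) - 1)
    delays = []
    for h in hind_strikes:
        # Delay from this hindlimb strike to the next forelimb strike.
        nxt = min((f for f in fore_strikes if f >= h), default=None)
        if nxt is not None:
            delays.append((nxt - h) / stride)
    return sum(delays) / len(delays)
```

    In an intact dog this value stays near a constant fraction of the stride regardless of speed or gait; disrupted forelimb-hindlimb coordination after spinal cord injury would show up as a shifted or more variable value.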

  19. Facial reanimation with gracilis muscle transfer neurotized to cross-facial nerve graft versus masseteric nerve: a comparative study using the FACIAL CLIMA evaluating system.

    PubMed

    Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro

    2013-06-01

    Longstanding unilateral facial paralysis is best addressed with microneurovascular muscle transplantation. Neurotization can be obtained from the cross-facial or the masseter nerve. The authors present a quantitative comparison of both procedures using the FACIAL CLIMA system. Forty-seven patients with complete unilateral facial paralysis underwent reanimation with a free gracilis transplant neurotized to either a cross-facial nerve graft (group I, n=20) or to the ipsilateral masseteric nerve (group II, n=27). Commissural displacement and commissural contraction velocity were measured using the FACIAL CLIMA system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using the independent samples t test. Mean percentage of recovery of both parameters were compared between the groups using the independent samples t test. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I (p=0.001 and p=0.014, respectively) but not in group II. Intergroup comparisons showed that both commissural displacement and commissural contraction velocity were higher in group II, with significant differences for commissural displacement (p=0.048). Mean percentage of recovery of both parameters was higher in group II, with significant differences for commissural displacement (p=0.042). Free gracilis muscle transfer neurotized by the masseteric nerve is a reliable technique for reanimation of longstanding facial paralysis. Compared with cross-facial nerve graft neurotization, this technique provides better symmetry and a higher degree of recovery. Therapeutic, III.

  20. Are facial injuries really different? An observational cohort study comparing appearance concern and psychological distress in facial trauma and non-facial trauma patients.

    PubMed

    Rahtz, Emmylou; Bhui, Kamaldeep; Hutchison, Iain; Korszun, Ania

    2018-01-01

    Facial injuries are widely assumed to lead to stigma and significant psychosocial burden. Experimental studies of face perception support this idea, but there is very little empirical evidence to guide treatment. This study sought to address the gap. Data were collected from 193 patients admitted to hospital following facial or other trauma. Ninety (90) participants were successfully followed up 8 months later. Participants completed measures of appearance concern and psychological distress (post-traumatic stress symptoms (PTSS), depressive symptoms, anxiety symptoms). Participants were classified by site of injury (facial or non-facial injury). The overall levels of appearance concern were comparable to those of the general population, and there was no evidence of more appearance concern among people with facial injuries. Women and younger people were significantly more likely to experience appearance concern at baseline. Baseline and 8-month psychological distress, although common in the sample, did not differ according to the site of injury. Changes in appearance concern were, however, strongly associated with psychological distress at follow-up. We conclude that although appearance concern is severe among some people with facial injury, it is not especially different to those with non-facial injuries or the general public; changes in appearance concern, however, appear to correlate with psychological distress. We therefore suggest that interventions might focus on those with heightened appearance concern and should target cognitive bias and psychological distress. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  1. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  2. Intratemporal facial nerve ultrastructure in patients with idiopathic facial paralysis: viral infection evidence study.

    PubMed

    Florez, Rosangela Aló Maluza; Lang, Raquel; Veridiano, Adriano Mora; Zanini, Renato de Oliveira; Calió, Pedro Luiz; Simões, Ricardo Dos Santos; Testa, José Ricardo Gurgel

    2010-01-01

    The etiology of idiopathic peripheral facial palsy (IPFP) is still uncertain; however, some authors suggest the possibility of a viral infection. Our aim was to analyze the ultrastructure of the facial nerve, seeking viral evidence that might provide etiological data. We studied 20 patients with peripheral facial palsy (PFP), with moderate to severe FP, of both genders, between 18 and 60 years of age, from the Clinic of Facial Nerve Disorders. The patients were divided into two groups - Study: eleven patients with IPFP; and Control: nine patients with trauma- or tumor-related PFP. The fragments were obtained from the facial nerve sheath or from fragments of its stumps - material that would otherwise be discarded or sent for pathology examination during facial nerve repair surgery. The removed tissue was fixed in 2% glutaraldehyde and studied under transmission electron microscopy. In the study group we observed intense cellular repair activity, with increased collagen fibers and fibroblasts containing well-developed organelles, but free of viral particles. In the control group this repair activity was not evident, and no viral particles were observed either. Thus, no viral particles were found; the ultrastructural findings indicated intense repair activity rather than viral infection.

  3. Establishment of a new relationship between posed smile width and lower facial height: A cross-sectional study

    PubMed Central

    Abraham, Aby; George, Jinu; Peter, Elbe; Philip, Koshi; Chankramath, Rajesh; Johns, Dexton Antony; Bhaskar, Anitha

    2015-01-01

    Objective: The present study is intended to add a new parameter that would be useful in orthodontic clinical evaluation, treatment planning, and determination of vertical dimension (at occlusion). Materials and Methods: Standardized videographic recordings of 79 subjects during posed smile were captured. Each video was then cut into 30 photos using the Free Studio software. The widest commissure-to-commissure posed smile frame (posed smile width [SW]) was selected as one of 10 or more frames showing an identical smile. The lower third of the face was measured from subnasale to soft tissue menton using a digital vernier caliper. The two values were then compared, and the ratio between lower facial height and posed SW was calculated. Results: The correlation between smiling width and lower facial height was found to be statistically significant (P < 0.01). The ratio of lower facial height to smiling width was 1.0016 with a standard deviation (SD) = 0.04 in males and 1.0301 with an SD = 0.07 in females. The difference between the mean lower facial height in males and females was statistically significant with t = 10.231 and P = 0.000. The difference between the mean smiling width in males and females was also statistically significant with t = 5.653 and P = 0.000. Conclusion: In class I subjects with pleasing appearance, normal facial proportions, normal overjet and overbite, and average Frankfort mandibular angle, the lower facial height (subnasale to soft tissue menton) is equal to the posed SW. PMID:26430369

  4. Combining facial dynamics with appearance for age estimation.

    PubMed

    Dibeklioglu, Hamdi; Alnajar, Fares; Ali Salah, Albert; Gevers, Theo

    2015-06-01

    Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with an age range between 8 and 76. In addition, we introduce a new database on posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baseline. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation.

  5. Peripheral facial palsy in children.

    PubMed

    Yılmaz, Unsal; Cubukçu, Duygu; Yılmaz, Tuba Sevim; Akıncı, Gülçin; Ozcan, Muazzez; Güzel, Orkide

    2014-11-01

    The aim of this study is to evaluate the types and clinical characteristics of peripheral facial palsy in children. The hospital charts of children diagnosed with peripheral facial palsy were reviewed retrospectively. A total of 81 children (42 female and 39 male) with a mean age of 9.2 ± 4.3 years were included in the study. Causes of facial palsy were 65 (80.2%) idiopathic (Bell palsy) facial palsy, 9 (11.1%) otitis media/mastoiditis, and tumor, trauma, congenital facial palsy, chickenpox, Melkersson-Rosenthal syndrome, enlarged lymph nodes, and familial Mediterranean fever (each 1; 1.2%). Five (6.1%) patients had recurrent attacks. In patients with Bell palsy, female/male and right/left ratios were 36/29 and 35/30, respectively. Of them, 31 (47.7%) had a history of preceding infection. The overall rate of complete recovery was 98.4%. A wide variety of disorders can present with peripheral facial palsy in children. Therefore, careful investigation and differential diagnosis is essential. © The Author(s) 2013.

  6. Facial expressions and pair bonds in hylobatids.

    PubMed

    Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H

    2018-06-06

    Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested at a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony [FES]. We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. 
    Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony…

  7. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    NASA Astrophysics Data System (ADS)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. The accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.
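
    The generative-discriminative fusion can be illustrated at score level; the stand-ins below (a Gaussian class model in place of the HMM, a fixed linear discriminant in place of the SVM) and all parameter values are hypothetical.

```python
# Sketch of generative + discriminative score fusion. A 1D Gaussian class
# model stands in for the HMM and a fixed linear discriminant for the SVM;
# the feature x, model parameters, and fusion weights are illustrative.
import math

def gaussian_loglik(x, mean, std):
    return (-0.5 * math.log(2 * math.pi * std * std)
            - (x - mean) ** 2 / (2 * std * std))

def fused_score(x, model):
    # Generative part: log-likelihood ratio of "stress" vs "calm".
    gen = (gaussian_loglik(x, model["stress_mean"], model["stress_std"])
           - gaussian_loglik(x, model["calm_mean"], model["calm_std"]))
    # Discriminative part: signed distance from a linear decision boundary.
    disc = model["w"] * x + model["b"]
    # Fusion: weighted sum of the two scores; positive means "stress".
    return 0.5 * gen + 0.5 * disc

model = {"stress_mean": 2.0, "stress_std": 1.0,
         "calm_mean": -2.0, "calm_std": 1.0,
         "w": 1.0, "b": 0.0}
```

    The appeal of this arrangement, as in the paper, is that the generative score models how the data were produced while the discriminative score focuses on the class boundary, and combining them can beat either alone.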

  8. Impact of facial defect reconstruction on attractiveness and negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick; Ishii, Lisa E

    2015-06-01

    Measure the impact of facial defect reconstruction on observer-graded attractiveness and negative facial perception. Prospective, randomized, controlled experiment. One hundred twenty casual observers viewed images of faces with defects of varying sizes and locations before and after reconstruction as well as normal comparison faces. Observers rated attractiveness, defect severity, and how disfiguring, bothersome, and important to repair they considered each face. Facial defects decreased attractiveness -2.26 (95% confidence interval [CI]: -2.45, -2.08) on a 10-point scale. Mixed effects linear regression showed this attractiveness penalty varied with defect size and location, with large and central defects generating the greatest penalty. Reconstructive surgery increased attractiveness 1.33 (95% CI: 1.18, 1.47), an improvement dependent upon size and location, restoring some defect categories to near normal ranges of attractiveness. Iterated principal factor analysis indicated the disfiguring, important to repair, bothersome, and severity variables were highly correlated and measured a common domain; thus, they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score, representing negative facial perception. The DIBS regression showed defect faces have a 1.5 standard deviation increase in negative perception (DIBS: 1.69, 95% CI: 1.61, 1.77) compared to normal faces, which decreased by a similar magnitude after surgery (DIBS: -1.44, 95% CI: -1.49, -1.38). These findings varied with defect size and location. Surgical reconstruction of facial defects increased attractiveness and decreased negative social facial perception, an impact that varied with defect size and location. These new social perception data add to the evidence base demonstrating the value of high-quality reconstructive surgery. NA. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
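
    The construction of the combined DIBS score can be sketched with a first principal component as a simple stand-in for iterated principal factor analysis; the data are simulated and the variable names are illustrative.

```python
# Hypothetical sketch: build a single "DIBS" score from four correlated
# ratings (disfiguring, important-to-repair, bothersome, severity) by taking
# the first principal component of the standardised items. This is a
# simplified stand-in for the paper's iterated principal factor analysis.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(0, 1, 200)            # shared negative-perception factor
items = np.column_stack(
    [latent + rng.normal(0, 0.3, 200) for _ in range(4)])

# Standardise each item, then take the first principal component.
Z = (items - items.mean(axis=0)) / items.std(axis=0)
_, _, vt = np.linalg.svd(Z, full_matrices=False)
loadings = vt[0]                           # item loadings on the factor
dibs = Z @ loadings                        # factor score per observer rating

# How well the score recovers the simulated shared factor (sign-agnostic).
corr = abs(float(np.corrcoef(dibs, latent)[0, 1]))
```

    Because the four items measure a common domain, their loadings come out nearly equal and the resulting score tracks the shared factor closely, which is the justification for collapsing them into one DIBS measure.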

  9. Imaging the Facial Nerve: A Contemporary Review

    PubMed Central

    Gupta, Sachin; Mends, Francine; Hagiwara, Mari; Fatterpekar, Girish; Roehm, Pamela C.

    2013-01-01

    Imaging plays a critical role in the evaluation of a number of facial nerve disorders. The facial nerve has a complex anatomical course; thus, a thorough understanding of the course of the facial nerve is essential to localize the sites of pathology. Facial nerve dysfunction can occur from a variety of causes, which can often be identified on imaging. Computed tomography and magnetic resonance imaging are helpful for identifying bony facial canal and soft tissue abnormalities, respectively. Ultrasound of the facial nerve has been used to predict functional outcomes in patients with Bell's palsy. More recently, diffusion tensor tractography has appeared as a new modality which allows three-dimensional display of facial nerve fibers. PMID:23766904

  10. Local Dynamic Stability Assessment of Motion Impaired Elderly Using Electronic Textile Pants.

    PubMed

    Liu, Jian; Lockhart, Thurmon E; Jones, Mark; Martin, Tom

    2008-10-01

    A clear association has been demonstrated between gait stability and falls in the elderly. Integration of wearable computing and human dynamic stability measures into home automation systems may help differentiate fall-prone individuals in a residential environment. The objective of the current study was to evaluate the capability of a pair of electronic textile (e-textile) pants system to assess local dynamic stability and to differentiate motion-impaired elderly from their healthy counterparts. A pair of e-textile pants comprised of numerous e-TAGs at locations corresponding to lower extremity joints was developed to collect acceleration, angular velocity and piezoelectric data. Four motion-impaired elderly together with nine healthy individuals (both young and old) participated in treadmill walking with a motion capture system simultaneously collecting kinematic data. Local dynamic stability, characterized by maximum Lyapunov exponent, was computed based on vertical acceleration and angular velocity at lower extremity joints for the measurements from both e-textile and motion capture systems. Results indicated that the motion-impaired elderly had significantly higher maximum Lyapunov exponents (computed from vertical acceleration data) than healthy individuals at the right ankle and hip joints. In addition, maximum Lyapunov exponents assessed by the motion capture system were found to be significantly higher than those assessed by the e-textile system. Despite the difference between these measurement techniques, attaching accelerometers at the ankle and hip joints was shown to be an effective sensor configuration. It was concluded that the e-textile pants system, via dynamic stability assessment, has the potential to identify motion-impaired elderly.
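
    A simplified, assumption-laden sketch of estimating the maximum Lyapunov exponent from a single signal (a Rosenstein-style procedure; the authors' exact parameters are not given here):

```python
# Rough sketch: estimate the maximum Lyapunov exponent of a time series by
# delay-embedding it, pairing each point with its nearest temporally distant
# neighbour, and taking the slope of the mean log-divergence over time.
# Embedding dimension, delay, and window sizes are assumed, not the paper's.
import numpy as np

def max_lyapunov(x, m=2, tau=1, steps=20, exclude=5):
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    # Delay-embed the series into m dimensions.
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    usable = n - steps
    # Pairwise distances between embedded points.
    d = np.linalg.norm(emb[:usable, None] - emb[None, :usable], axis=2)
    # Exclude temporally close points when picking nearest neighbours.
    for i in range(usable):
        d[i, max(0, i - exclude):min(usable, i + exclude + 1)] = np.inf
    nn = d.argmin(axis=1)
    # Mean log-separation of neighbour pairs, tracked k steps forward.
    idx = np.arange(usable)
    div = [np.mean(np.log(np.linalg.norm(emb[idx + k] - emb[nn + k],
                                         axis=1) + 1e-12))
           for k in range(steps)]
    # Slope of the divergence curve approximates the largest exponent.
    return np.polyfit(np.arange(steps), div, 1)[0]
```

    For a chaotic signal such as the logistic map the estimate comes out clearly positive; in the study's setting, the same embedding would be applied to each joint's vertical acceleration or angular velocity signal, with larger exponents indicating less stable gait.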

  11. Facial nerve palsy due to birth trauma

    MedlinePlus

    Seventh cranial nerve palsy due to birth trauma; Facial palsy - birth trauma; Facial palsy - neonate; Facial palsy - infant ... An infant's facial nerve is also called the seventh cranial nerve. It can be damaged just before or at the time of delivery. ...

  12. Facial Nerve Paralysis due to Chronic Otitis Media: Prognosis in Restoration of Facial Function after Surgical Intervention

    PubMed Central

    Kim, Jin; Jung, Gu-Hyun; Park, See-Young

    2012-01-01

Purpose: Facial paralysis is an uncommon but significant complication of chronic otitis media (COM). Surgical eradication of the disease is the most viable way to resolve the resulting facial paralysis. In an effort to guide treatment of this rare complication, we analyzed the prognosis of facial function after surgical treatment. Materials and Methods: A total of 3435 patients with COM, who underwent various otologic surgeries over a period of 20 years, were analyzed retrospectively. Forty-six patients (1.33%) had facial nerve paralysis caused by COM. We analyzed prognostic factors affecting surgical outcomes, including delay of surgery, the extent of disease, the presence or absence of cholesteatoma, and the type of surgery. Results: Surgical intervention restored facial function most effectively in cases with a shorter interval between onset of facial paralysis and surgery, and in cases of sudden onset without cholesteatoma. No previous ear surgery and a healthy bony labyrinth indicated a good postoperative prognosis. Conclusion: Facial paralysis in COM is most frequently due to cholesteatoma, whose presence decreased the effectiveness of surgical treatment and indicated a poor prognosis after surgery. In our experience, early surgical intervention can be crucial to recovery of facial function. To prevent recurrent cholesteatoma, which leads to local destruction of the facial nerve, the importance of complete eradication of the disease in a single procedure cannot be overemphasized in the treatment of patients with COM. PMID:22477011

  13. Thermophoretic motion behavior of submicron particles in boundary-layer-separation flow around a droplet.

    PubMed

    Wang, Ao; Song, Qiang; Ji, Bingqiang; Yao, Qiang

    2015-12-01

As a key mechanism of submicron particle capture in wet deposition and wet scrubbing processes, thermophoresis is influenced by the flow and temperature fields. Three-dimensional direct numerical simulations were conducted to quantify the characteristics of the flow and temperature fields around a droplet at three droplet Reynolds numbers (Re) corresponding to three typical boundary-layer-separation flows (steady axisymmetric, steady plane-symmetric, and unsteady plane-symmetric flows). The thermophoretic motion of submicron particles was simulated in these cases. Numerical results show that the motion of submicron particles around the droplet and the deposition distribution exhibit different characteristics under the three typical flow forms. The motion patterns of particles depend on their initial upstream positions and on the flow form. The patterns of particle motion and deposition become more diverse as Re increases. The particle motion pattern, initial position of captured particles, and capture efficiency change periodically, especially during periodic vortex shedding. The key effects of flow form on particle motion are the shape and stability of the wake behind the droplet. The fluid drag force and the thermophoretic force in the wake jointly drive the deposition of submicron particles after boundary-layer separation around a droplet.
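
    A common closure for the thermophoretic drift in such simulations sets v_th = -K · ν · ∇T / T, with K a dimensionless coefficient of order 0.5 (Talbot-type models); combined with Stokes drag toward the local fluid velocity this yields a simple particle-tracking update. The explicit-Euler sketch below is purely illustrative of that force balance, not the authors' DNS solver, and the coefficient value and parameter names are assumptions:

```python
import numpy as np

def step_particle(x, v, u_fluid, grad_T, T, tau_p, nu, K=0.55, dt=1e-4):
    """One explicit-Euler step for a submicron particle relaxing toward the
    local fluid velocity (Stokes drag) plus a thermophoretic drift."""
    v_th = -K * nu * grad_T / T            # thermophoretic drift velocity
    a = (u_fluid + v_th - v) / tau_p       # drag relaxation, time scale tau_p
    return x + v * dt, v + a * dt          # updated position and velocity
```

    With a quiescent fluid and a positive temperature gradient, the particle drifts toward the colder region, which is the mechanism that deposits particles on the cool droplet surface.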

  14. Use of Facial Recognition Software to Identify Disaster Victims With Facial Injuries.

    PubMed

    Broach, John; Yong, Rothsovann; Manuell, Mary-Elise; Nichols, Constance

    2017-10-01

After large-scale disasters, victim identification frequently presents a challenge and a priority for responders attempting to reunite families and ensure proper identification of deceased persons. The purpose of this investigation was to determine whether currently commercially available facial recognition software can successfully identify disaster victims with facial injuries. Photos of 106 people were taken before and after application of moulage designed to simulate traumatic facial injuries. These photos, as well as photos from volunteers' personal photo collections, were analyzed using facial recognition software to determine whether this technology could accurately identify a person with facial injuries. The results suggest that a responder could expect a correct match between submitted photos and photos of injured patients 39% to 45% of the time, and a much higher rate of correct returns, exceeding 90% in most situations, if the submitted photos were of optimal quality. The present results suggest that the use of this software would provide significant benefit to responders. Although a correct result was returned only about 40% of the time, this would still likely benefit a responder trying to identify hundreds or thousands of victims. (Disaster Med Public Health Preparedness. 2017;11:568-572).

  15. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

We review recent research on the neural mechanisms of facial recognition in light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception itself. First, it has been demonstrated that the fusiform gyrus plays a central role in facial discrimination and identification. However, whether the FFA (fusiform face area) is truly specialized for facial processing remains controversial; some researchers argue that the FFA is related to 'becoming an expert' for certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are closely related to this issue. Second, the amygdala appears to be deeply involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate these cortical functions. The amygdala and the superior temporal sulcus are also related to gaze recognition, which explains why a patient with bilateral amygdala damage failed to recognize fear expressions specifically: the information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may also underlie the covert recognition that prosopagnosic patients retain.

  16. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.

    PubMed

    Reinl, Maren; Bartels, Andreas

    2014-11-15

Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape-sensitive and temporal-sequence-sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which was played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content, and motion energy within each factor: emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in the FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreasing fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal-sequence-sensitive mechanisms that are responsive both to ecological meaning and to the prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Noninvasive Facial Rejuvenation. Part 1: Patient-Directed

    PubMed Central

    Commander, Sarah Jane; Chang, Daniel; Fakhro, Abdulla; Nigro, Marjory G.; Lee, Edward I.

    2016-01-01

    A proper knowledge of noninvasive facial rejuvenation is integral to the practice of a cosmetic surgeon. Noninvasive facial rejuvenation can be divided into patient- versus physician-directed modalities. Patient-directed facial rejuvenation combines the use of facial products such as sunscreen, moisturizers, retinoids, α-hydroxy acids, and various antioxidants to both maintain youthful skin and rejuvenate damaged skin. Physicians may recommend and often prescribe certain products, but the patients are in control of this type of facial rejuvenation. On the other hand, physician-directed facial rejuvenation entails modalities that require direct physician involvement, such as neuromodulators, filler injections, laser resurfacing, microdermabrasion, and chemical peels. With the successful integration of each of these modalities, a complete facial regimen can be established and patient satisfaction can be maximized. This article is the first in a three-part series describing noninvasive facial rejuvenation. The authors focus on patient-directed facial rejuvenation. It is important, however, to emphasize that even in a patient-directed modality, a physician's involvement through education and guidance is integral to its success. PMID:27478421

  18. Definition of anatomical zero positions for assessing shoulder pose with 3D motion capture during bilateral abduction of the arms.

    PubMed

    Rettig, Oliver; Krautwurst, Britta; Maier, Michael W; Wolf, Sebastian I

    2015-12-09

Surgical interventions at the shoulder may alter function of the shoulder complex. Clinically, the outcome can be assessed by universal goniometry. Marker-based motion capture may not reproduce these results because of differing angle definitions. The clinical inspection of bilateral arm abduction for assessing shoulder dysfunction is performed with a marker-based 3D optical measurement method. An anatomical zero position of shoulder pose is proposed to determine absolute angles according to the Neutral-0-Method as used in the orthopedic context. Static shoulder positions are documented simultaneously by 3D marker tracking and universal goniometry in 8 young and healthy volunteers. Repetitive bilateral arm abduction movements of at least 150° range of motion are monitored. Similarly, a subject with gleno-humeral osteoarthritis is monitored to demonstrate the feasibility of the method and to illustrate possible shoulder dysfunction effects. With mean differences of less than 2°, the proposed anatomical zero position results in good agreement between shoulder elevation/depression angles determined by 3D marker tracking and by universal goniometry in static positions. Lesser agreement is found for shoulder pro-/retraction, with systematic deviations of up to 6°. In the bilateral arm abduction movements, the volunteers perform a common and specific pattern in clavicula-thoracic and gleno-humeral motion, with maximum shoulder angles of 32° elevation, 5° depression and 45° protraction, respectively, whereas retraction is hardly reached. Further, they all show relevant out-of-(frontal-)plane motion, with anteversion angles of 30° in the overhead position (maximum abduction). With increasing arm anteversion the shoulder is increasingly retroverted, with a maximum of 20° retroversion. The subject with gleno-humeral osteoarthritis shows overall less shoulder abduction range of motion but increased out-of-plane movement during abduction. The proposed anatomical zero definition
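
    In the simplest case, an absolute angle of the kind discussed above is the angle between a segment vector reconstructed from two 3D marker positions and a fixed trunk reference axis, and the anatomical zero position is the pose in which that angle is zero (arm hanging along the trunk). The sketch below is a deliberate simplification of the paper's definition: the marker inputs and the fixed vertical trunk axis are illustrative assumptions, not the authors' marker set.

```python
import numpy as np

def abduction_angle(shoulder, elbow, trunk_up=(0.0, 0.0, 1.0)):
    """Arm abduction angle (degrees): angle between the upper-arm vector
    (shoulder -> elbow marker) and the downward trunk axis."""
    arm = np.asarray(elbow, float) - np.asarray(shoulder, float)
    down = -np.asarray(trunk_up, float)
    cos = arm @ down / (np.linalg.norm(arm) * np.linalg.norm(down))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

    With the arm hanging straight down this returns 0°, matching the Neutral-0 convention; a horizontal arm returns 90°.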

  19. Motion correction for improved estimation of heart rate using a visual spectrum camera

    NASA Astrophysics Data System (ADS)

    Tarbox, Elizabeth A.; Rios, Christian; Kaur, Balvinder; Meyer, Shaun; Hirt, Lauren; Tran, Vy; Scott, Kaitlyn; Ikonomidou, Vasiliki

    2017-05-01

Heart rate measurement using a visual spectrum recording of the face has drawn interest over the last few years as a technology that can have various health and security applications. In our previous work, we have shown that it is possible to estimate the heart beat timing accurately enough to perform heart rate variability analysis for contactless stress detection. However, a major confounding factor in this approach is the presence of movement, which can interfere with the measurements. To mitigate the effects of movement, in this work we propose the use of face detection and tracking based on the Karhunen-Loève algorithm to counteract measurement errors introduced by normal subject motion, as expected during a common seated conversation setting. We analyze the requirements on image acquisition for the algorithm to work, and its performance under different ranges of motion and changes of distance to the camera, as well as the effect on the acquired signal of illumination changes due to different positioning with respect to light sources. Our results suggest that the effect of face tracking on visual-spectrum-based cardiac signal estimation depends on the amplitude of the motion. While for larger-scale conversation-induced motion it can significantly improve estimation accuracy, with smaller-scale movements, such as those caused by breathing or talking without major movement, errors in facial tracking may interfere with signal estimation. Overall, employing facial tracking is a crucial step in adapting this technology to real-life situations with satisfactory results.
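
    Once the face region is tracked, the cardiac signal in visual-spectrum recordings is commonly recovered from the frame-by-frame mean intensity of the skin pixels, with the heart rate taken as the dominant spectral peak in a physiologically plausible band. A minimal sketch of that last stage follows; the function name and band limits are illustrative, and the paper's actual pipeline (including its tracking step) is not reproduced here:

```python
import numpy as np

def estimate_heart_rate(green_means, fs, lo=0.75, hi=3.0):
    """Estimate heart rate (bpm) from the per-frame mean green-channel
    intensity of a tracked face region, sampled at frame rate fs (Hz)."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                          # remove the DC level
    spec = np.abs(np.fft.rfft(x))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)      # cardiac band: 45-180 bpm
    peak = freqs[band][spec[band].argmax()]   # dominant in-band frequency
    return peak * 60.0
```

    Motion errors from imperfect tracking add broadband power to this spectrum, which is why, as the abstract notes, small tracking errors can swamp the weak cardiac peak.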

  20. Pediatric facial injuries: Its management.

    PubMed

    Singh, Geeta; Mohammad, Shadab; Pal, U S; Hariram; Malkunje, Laxman R; Singh, Nimisha

    2011-07-01

Facial injuries in children always present a challenge in respect of their diagnosis and management. Since these children are of a growing age, every care should be taken so that the overall growth pattern of the facial skeleton is not later jeopardized. The aim was to assess the most feasible method for the management of facial injuries in children without hampering facial growth. Sixty child patients with facial trauma were selected randomly for this study. On the basis of examination and investigations, a suitable management approach involving rest and observation, open or closed reduction and immobilization, trans-osseous (TO) wiring, mini bone plate fixation, splinting and replantation, elevation and fixation of the zygoma, etc. was carried out. In our study, falls were the predominant cause of facial injuries in children. There was a 1.09% incidence of facial injuries in children up to 16 years of age amongst the total patients. The age-wise distribution of fractures amongst groups (I, II and III) was 26.67%, 51.67% and 21.67%, respectively. The male to female patient ratio was 3:1. The majority of facial injuries were seen in Group II patients (6-11 years), i.e. 51.67%. Mandibular fracture was the most common fracture (0.60%), followed by dentoalveolar (0.27%), mandibular + midface (0.07%) and midface (0.02%) fractures. Most of the mandibular fractures were found in the parasymphysis region. Simple fractures were commonest in the mandible. Most of the mandibular and midface fractures in children were amenable to conservative therapies, except a few which required surgical intervention.

  1. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time consuming and subjective in nature. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach for the quantitative assessment of facial paralysis that tackles the classification problem of FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of the LAC model, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments show that the proposed method is efficient. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity.
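
    The symmetry-score and regularized-logistic-regression ideas above can be sketched as follows. The score here is a simple min/max ratio per paired left/right feature and the solver is plain gradient descent; the feature choices, regularization strength, and labels are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def symmetry_scores(left, right):
    """Per-feature symmetry score: ratio of the smaller to the larger side,
    so 1.0 = perfectly symmetric, values near 0 = strongly asymmetric."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return np.minimum(left, right) / np.maximum(left, right)

def fit_logistic_l2(X, y, lam=0.01, lr=0.1, steps=2000):
    """L2-regularized logistic regression fitted by gradient descent."""
    X = np.hstack([np.ones((len(X), 1)), X])     # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * np.r_[0.0, w[1:]]
        w -= lr * grad
    return w

def predict(w, X):
    """Hard 0/1 predictions at the 0.5 probability threshold."""
    X = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)
```

    Healthy faces yield scores near 1 on every paired feature, so even this tiny model separates them cleanly from strongly asymmetric (palsy-like) score vectors.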

  2. Complications in Pediatric Facial Fractures

    PubMed Central

    Chao, Mimi T.; Losee, Joseph E.

    2009-01-01

Despite recent advances in the diagnosis, treatment, and prevention of pediatric facial fractures, little has been published on the complications of these fractures. The existing literature is highly variable regarding both the definition and the reporting of adverse events. Although the incidence of pediatric facial fractures is relatively low, they are strongly associated with other serious injuries. Both the fractures and their treatment may have long-term consequences for the growth and development of the immature face. This article is a selective review of the literature on facial fracture complications, with special emphasis on the complications unique to pediatric patients. We also present our classification system to evaluate adverse outcomes associated with pediatric facial fractures. Prospective, long-term studies are needed to fully understand and appreciate the complexity of treating children with facial fractures and to determine the true incidence, subsequent growth, and nature of their complications. PMID:22110803

  3. Evaluation of facial attractiveness in black people according to the subjective facial analysis criteria.

    PubMed

    Melo, Andréa Reis de; Conti, Ana Cláudia de Castro Ferreira; Almeida-Pedrin, Renata Rodrigues; Didier, Victor; Valarelli, Danilo Pinelli; Capelozza Filho, Leopoldino

    2017-02-01

The objective of this study was to evaluate the facial attractiveness in 30 black individuals, according to the Subjective Facial Analysis criteria. Frontal and profile view photographs of 30 black individuals were evaluated for facial attractiveness and classified as esthetically unpleasant, acceptable, or pleasant by 50 evaluators: the 30 individuals from the sample, 10 orthodontists, and 10 laymen. Besides assessing the facial attractiveness, the evaluators had to identify the structures responsible for the classification as unpleasant and pleasant. Intraexaminer agreement was assessed using Spearman's correlation, correlation within each category using the Kendall concordance coefficient, and correlation between the 3 categories using the chi-square test and proportions. Most of the frontal (53.5%) and profile view (54.9%) photographs were classified as esthetically acceptable. The structures most identified as esthetically unpleasant were the mouth, lips, and face, in the frontal view; and nose and chin in the profile view. The structures most identified as esthetically pleasant were harmony, face, and mouth, in the frontal view; and harmony and nose in the profile view. The ratings by the examiners in the sample and laymen groups showed statistically significant correlation in both views. The orthodontists agreed with the laymen on the evaluation of the frontal view and disagreed on the profile view, especially regarding whether the images were esthetically unpleasant or acceptable. Based on these results, the evaluation of facial attractiveness according to the Subjective Facial Analysis criteria proved to be applicable and to have a subjective influence; therefore, it is suggested that the patient's opinion regarding facial esthetics should be considered in orthodontic treatment planning.

  4. Operant conditioning of facial displays of pain.

    PubMed

    Kunz, Miriam; Rainville, Pierre; Lautenbacher, Stefan

    2011-06-01

    The operant model of chronic pain posits that nonverbal pain behavior, such as facial expressions, is sensitive to reinforcement, but experimental evidence supporting this assumption is sparse. The aim of the present study was to investigate in a healthy population a) whether facial pain behavior can indeed be operantly conditioned using a discriminative reinforcement schedule to increase and decrease facial pain behavior and b) to what extent these changes affect pain experience indexed by self-ratings. In the experimental group (n = 29), the participants were reinforced every time that they showed pain-indicative facial behavior (up-conditioning) or a neutral expression (down-conditioning) in response to painful heat stimulation. Once facial pain behavior was successfully up- or down-conditioned, respectively (which occurred in 72% of participants), facial pain displays and self-report ratings were assessed. In addition, a control group (n = 11) was used that was yoked to the reinforcement plans of the experimental group. During the conditioning phases, reinforcement led to significant changes in facial pain behavior in the majority of the experimental group (p < .001) but not in the yoked control group (p > .136). Fine-grained analyses of facial muscle movements revealed a similar picture. Furthermore, the decline in facial pain displays (as observed during down-conditioning) strongly predicted changes in pain ratings (R(2) = 0.329). These results suggest that a) facial pain displays are sensitive to reinforcement and b) that changes in facial pain displays can affect self-report ratings.

  5. Facial Displays Are Tools for Social Influence.

    PubMed

    Crivelli, Carlos; Fridlund, Alan J

    2018-05-01

Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion and optical blurring, facial expressions, and gender. Motion blur is usually introduced into face images by movement of the camera sensor and/or movement of the face during image acquisition. Therefore, the facial features in captured images can be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more effective at enhancing age estimation performance than systems that do not employ it. PMID:26334282

  7. Oral motor and electromyographic characterization of adults with facial fractures: a comparison between different fracture severities.

    PubMed

    da Silva, Amanda Pagliotto; Sassi, Fernanda Chiarion; Bastos, Endrigo; Alonso, Nivaldo; de Andrade, Claudia Regina Furquim

    2017-05-01

    To characterize the oral motor system of adults with facial injuries and to compare the oral motor performance/function between two different groups. An observational, descriptive, cross-sectional study was conducted in 38 patients presenting with facial trauma who were assigned to the Division of Orofacial Myology of a Brazilian School Hospital. Patients were divided into two groups: Group 1 (G1) consisted of 19 patients who were submitted to open reduction of at least one facial fracture, and Group 2 (G2) consisted of 19 individuals who were submitted to closed fracture reduction with maxillomandibular fixation. For comparison purposes, a group of 19 healthy volunteers was recruited. All participants underwent a clinical assessment that included an oral motor evaluation, assessment of the mandibular range of motions, and electromyographic assessment of the masticatory muscles. Clinical assessment of the oral motor organs indicated that G1 and G2 presented deficits related to the posture, position, and mobility of the oral motor organs. Patients also presented limited mandibular ranges of movement. Deficits were greater for individuals in G1, especially for maximal incisor opening. Additionally, patients in G1 and G2 presented a similar electromyographic profile of the masticatory muscles (i.e., patients with facial fractures presented lower overall muscle activity and significant asymmetrical activity of the masseter muscle during maximum voluntary teeth clenching). Patients in G1 and G2 presented similar functional deficits after fracture treatment. The severity of facial fractures did not influence muscle function/performance 4 months after the correction of fractures.

  8. Oral motor and electromyographic characterization of adults with facial fractures: a comparison between different fracture severities

    PubMed Central

    da Silva, Amanda Pagliotto; Sassi, Fernanda Chiarion; Bastos, Endrigo; Alonso, Nivaldo; de Andrade, Claudia Regina Furquim

    2017-01-01

    OBJECTIVES: To characterize the oral motor system of adults with facial injuries and to compare the oral motor performance/function between two different groups. METHODS: An observational, descriptive, cross-sectional study was conducted in 38 patients presenting with facial trauma who were assigned to the Division of Orofacial Myology of a Brazilian School Hospital. Patients were divided into two groups: Group 1 (G1) consisted of 19 patients who were submitted to open reduction of at least one facial fracture, and Group 2 (G2) consisted of 19 individuals who were submitted to closed fracture reduction with maxillomandibular fixation. For comparison purposes, a group of 19 healthy volunteers was recruited. All participants underwent a clinical assessment that included an oral motor evaluation, assessment of the mandibular range of motions, and electromyographic assessment of the masticatory muscles. RESULTS: Clinical assessment of the oral motor organs indicated that G1 and G2 presented deficits related to the posture, position, and mobility of the oral motor organs. Patients also presented limited mandibular ranges of movement. Deficits were greater for individuals in G1, especially for maximal incisor opening. Additionally, patients in G1 and G2 presented a similar electromyographic profile of the masticatory muscles (i.e., patients with facial fractures presented lower overall muscle activity and significant asymmetrical activity of the masseter muscle during maximum voluntary teeth clenching). CONCLUSION: Patients in G1 and G2 presented similar functional deficits after fracture treatment. The severity of facial fractures did not influence muscle function/performance 4 months after the correction of fractures. PMID:28591339

  9. Does Facial Resemblance Enhance Cooperation?

    PubMed Central

    Giang, Trang; Bell, Raoul; Buchner, Axel

    2012-01-01

    Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher for the participants and the self-resemblant composite faces than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, and on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system. PMID:23094095

  10. Facial nerve paralysis secondary to occult malignant neoplasms.

    PubMed

    Boahene, Derek O; Olsen, Kerry D; Driscoll, Colin; Lewis, Jean E; McDonald, Thomas J

    2004-04-01

This study reviewed patients with unilateral facial paralysis and normal clinical and imaging findings who underwent diagnostic facial nerve exploration. Study Design and Setting: Fifteen patients with facial paralysis and normal findings were seen in the Mayo Clinic Department of Otorhinolaryngology. Eleven patients were misdiagnosed as having Bell palsy or idiopathic paralysis. Progressive facial paralysis with sequential involvement of adjacent facial nerve branches occurred in all 15 patients. Seven patients had a history of regional skin squamous cell carcinoma, 13 patients had surgical exploration to rule out a neoplastic process, and 2 patients had negative explorations. At last follow-up, 5 patients were alive. Patients with facial paralysis and normal clinical and imaging findings should be considered for facial nerve exploration when the patient has a history of pain or regional skin cancer, involvement of other cranial nerves, and prolonged facial paralysis. Occult malignancy of the facial nerve may cause unilateral facial paralysis in patients with normal clinical and imaging findings.

  11. Kinesthetic information disambiguates visual motion signals.

    PubMed

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Pediatric facial injuries: Its management

    PubMed Central

    Singh, Geeta; Mohammad, Shadab; Pal, U. S.; Hariram; Malkunje, Laxman R.; Singh, Nimisha

    2011-01-01

    Background: Facial injuries in children always present a challenge in respect of their diagnosis and management. Since these children are of a growing age, every care should be taken so that the overall growth pattern of the facial skeleton is not later jeopardized. Purpose: To assess the most feasible method for the management of facial injuries in children without hampering facial growth. Materials and Methods: Sixty child patients with facial trauma were selected randomly for this study. On the basis of examination and investigations, a suitable management approach involving rest and observation, open or closed reduction and immobilization, trans-osseous (TO) wiring, mini bone plate fixation, splinting and replantation, elevation and fixation of the zygoma, etc. was carried out. Results and Conclusion: In our study, falls were the predominant cause of facial injuries in children. There was a 1.09% incidence of facial injuries in children up to 16 years of age amongst the total patients. The age-wise distribution of fractures amongst groups (I, II and III) was found to be 26.67%, 51.67% and 21.67% respectively. The male to female patient ratio was 3:1. The majority of the cases of facial injuries were seen in Group II patients (6-11 years), i.e. 51.67%. Mandibular fracture was found to be the most common fracture (0.60%), followed by dentoalveolar (0.27%), mandibular + midface (0.07%) and midface (0.02%) fractures. Most of the mandibular fractures were found in the parasymphysis region. Simple fractures were the most common type in the mandible. Most of the mandibular and midface fractures in children were amenable to conservative therapies, except a few which required surgical intervention. PMID:22639504

  13. Robotics-based synthesis of human motion.

    PubMed

    Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization was introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
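The marker-tracking control idea above, driving a simulated model so that its body points follow captured marker trajectories, can be sketched with a resolved-rate controller on a planar two-link arm. This is a drastic simplification of the paper's operational-space, musculoskeletal setting; the link lengths, gain, and marker target below are invented for illustration.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector position."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of the planar arm (end-effector velocity vs joint velocity)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def track_marker(q, target, gain=0.5, steps=200):
    """Resolved-rate control: iteratively move the joints so the
    end effector converges to a captured marker position."""
    for _ in range(steps):
        err = target - fk(q)
        dq = np.linalg.pinv(jacobian(q)) @ (gain * err)
        q = q + dq
    return q

q = track_marker(np.array([0.3, 0.5]), np.array([1.2, 0.8]))
print(np.round(fk(q), 3))  # end effector converges to the marker position
```

Repeating this for every frame of a marker sequence yields a joint-space reconstruction of the motion; the paper does the analogous tracking on a full musculoskeletal model in real time.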

  14. To Capture a Face: A Novel Technique for the Analysis and Quantification of Facial Expressions in American Sign Language

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Kegl, Judy

    2006-01-01

    American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial…

  15. Effect of facial neuromuscular re-education on facial symmetry in patients with Bell's palsy: a randomized controlled trial.

    PubMed

    Manikandan, N

    2007-04-01

    To determine the effect of facial neuromuscular re-education over conventional therapeutic measures in improving facial symmetry in patients with Bell's palsy. Randomized controlled trial. Neurorehabilitation unit. Fifty-nine patients diagnosed with Bell's palsy were included in the study after they met the inclusion criteria. Patients were randomly divided into two groups: control (n = 30) and experimental (n = 29). Control group patients received conventional therapeutic measures, while the facial neuromuscular re-education group patients received techniques that were tailored to each patient in three sessions per day, six days per week, for a period of two weeks. All the patients were evaluated using a Facial Grading Scale before treatment and after three months. The Facial Grading Scale scores showed significant improvement in both the control (mean 32 (range 9.7-54) to 54.5 (42.2-71.7)) and the experimental (33 (18-43.5) to 66 (54-76.7)) groups. Facial Grading Scale change scores showed that the experimental group (27.5 (20-43.77)) improved significantly more than the control group (16.5 (12.2-24.7)). Analysis of Facial Grading Scale subcomponents did not show statistical significance, except in the movement score (12 (8-16) to 24 (12-18)). Individualized facial neuromuscular re-education is more effective in improving facial symmetry in patients with Bell's palsy than conventional therapeutic measures.

  16. Use of a Y-tube conduit after facial nerve injury reduces collateral axonal branching at the lesion site but neither reduces polyinnervation of motor endplates nor improves functional recovery.

    PubMed

    Hizay, Arzu; Ozsoy, Umut; Demirel, Bahadir Murat; Ozsoy, Ozlem; Angelova, Srebrina K; Ankerne, Janina; Sarikcioglu, Sureyya Bilmen; Dunlop, Sarah A; Angelov, Doychin N; Sarikcioglu, Levent

    2012-06-01

    Despite increased understanding of peripheral nerve regeneration, functional recovery after surgical repair remains disappointing. A major contributing factor is the extensive collateral branching at the lesion site, which leads to inaccurate axonal navigation and aberrant reinnervation of targets. Our aim was to determine whether Y-tube reconstruction improves axonal regrowth and whether this is associated with improved function. We used a Y-tube conduit with the aim of improving navigation of regenerating axons after facial nerve transection in rats. Retrograde labeling from the zygomatic and buccal branches showed a halving in the number of double-labeled facial motor neurons (15% vs 8%; P < .05) after Y-tube reconstruction compared with facial-facial anastomosis coaptation. However, in both surgical groups, the proportion of polyinnervated motor endplates was similar (≈ 30%; P > .05), and video-based motion analysis of whisking revealed similarly poor function. Although Y-tube reconstruction decreases axonal branching at the lesion site and improves axonal navigation compared with facial-facial anastomosis coaptation, it fails to promote monoinnervation of motor endplates and confers no functional benefit.

  17. Forensic Facial Reconstruction: The Final Frontier.

    PubMed

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques of facial reconstruction, which vary from two-dimensional drawings to three-dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed, which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction, but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down, and a positive identification may be given by the more conventional methods of forensic medicine. Facial reconstruction makes visual identification by the individual's family and associates easier and more definite.

  18. Quantitative evaluation of toothbrush and arm-joint motion during tooth brushing.

    PubMed

    Inada, Emi; Saitoh, Issei; Yu, Yong; Tomiyama, Daisuke; Murakami, Daisuke; Takemoto, Yoshihiko; Morizono, Ken; Iwasaki, Tomonori; Iwase, Yoko; Yamasaki, Youichi

    2015-07-01

    It is very difficult for dental professionals to objectively assess the tooth brushing skill of patients, because an obvious index to assess the brushing motion of patients has not been established. The purpose of this study was to quantitatively evaluate toothbrush and arm-joint motion during tooth brushing. Tooth brushing motion, performed by dental hygienists for 15 s, was captured using a motion-capture system that continuously calculates the three-dimensional coordinates of an object's motion relative to the floor. The dental hygienists performed the tooth brushing on the buccal and palatal sides of their right and left upper molars. The frequencies and power spectra of toothbrush motion and of the joint angles of the shoulder, elbow, and wrist were calculated and analyzed statistically. The frequency of toothbrush motion was higher on the left side (both buccal and palatal areas) than on the right side. There were no significant differences among joint angle frequencies within each brushing area. The inter- and intra-individual variations of the power spectrum of the elbow flexion angle when brushing were smaller than for any of the other angles. This study quantitatively confirmed that dental hygienists have individual distinctive rhythms during tooth brushing. All arm joints moved synchronously during brushing, and tooth brushing motion was controlled by coordinated movement of the joints. The elbow generated an individual's frequency through a stabilizing movement. The shoulder and wrist control the hand motion, and the elbow generates the cyclic rhythm during tooth brushing.
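The frequency and power-spectrum analysis described above can be sketched with NumPy's FFT on a synthetic 15 s brushing-displacement signal. The 100 Hz sampling rate and the 4 Hz stroke rhythm below are assumptions for the example, not values from the study.

```python
import numpy as np

fs = 100.0                      # sampling rate of the capture system (Hz), assumed
t = np.arange(0, 15, 1 / fs)    # one 15 s trial, as in the study
# synthetic brushing displacement: a 4 Hz cyclic stroke plus measurement noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 4.0 * t) + 0.1 * rng.standard_normal(t.size)

# power spectrum via the FFT of the demeaned signal; the peak bin
# gives the dominant brushing frequency
spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[spectrum.argmax()]
print(round(peak_hz, 2))  # prints 4.0
```

The same peak-picking on the joint-angle signals would give the shoulder, elbow, and wrist frequencies that the study compares across brushing areas.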

  19. Perceived functional impact of abnormal facial appearance.

    PubMed

    Rankin, Marlene; Borah, Gregory L

    2003-06-01

    Functional facial deformities are usually described as those that impair respiration, eating, hearing, or speech. Yet facial scars and cutaneous deformities have a significant negative effect on social functionality that has been poorly documented in the scientific literature. Insurance companies are declining payments for reconstructive surgical procedures for facial deformities caused by congenital disabilities and after cancer or trauma operations that do not affect mechanical facial activity. The purpose of this study was to establish a large, sample-based evaluation of the perceived social functioning, interpersonal characteristics, and employability indices for a range of facial appearances (normal and abnormal). Adult volunteer evaluators (n = 210) provided their subjective perceptions based on facial physical appearance, and an analysis of the consequences of facial deformity on parameters of preferential treatment was performed. A two-group comparative research design rated the differences among 10 examples of digitally altered facial photographs of actual patients among various age and ethnic groups with "normal" and "abnormal" congenital deformities or posttrauma scars. Photographs of adult patients with observable congenital and posttraumatic deformities (abnormal) were digitally retouched to eliminate the stigmatic defects (normal). The normal and abnormal photographs of identical patients were evaluated by the large sample study group on nine parameters of social functioning, such as honesty, employability, attractiveness, and effectiveness, using a visual analogue rating scale. Patients with abnormal facial characteristics were rated as significantly less honest (p = 0.007), less employable (p = 0.001), less trustworthy (p = 0.01), less optimistic (p = 0.001), less effective (p = 0.02), less capable (p = 0.002), less intelligent (p = 0.03), less popular (p = 0.001), and less attractive (p = 0.001) than were the same patients with normal facial appearances.

  20. Method for measuring tri-axial lumbar motion angles using wearable sheet stretch sensors

    PubMed Central

    Nakamoto, Hiroyuki; Yamaji, Tokiya; Ootaka, Hideo; Bessho, Yusuke; Nakamura, Ryo; Ono, Rei

    2017-01-01

    Background Body movements, such as trunk flexion and rotation, are risk factors for low back pain in occupational settings, especially in healthcare workers. Wearable motion capture systems are potentially useful to monitor lower back movement in healthcare workers to help avoid the risk factors. In this study, we propose a novel system using sheet stretch sensors and investigate the system validity for estimating lower back movement. Methods Six volunteers (female:male = 1:1, mean age: 24.8 ± 4.0 years, height 166.7 ± 5.6 cm, weight 56.3 ± 7.6 kg) participated in test protocols that involved executing seven types of movements. The movements were three uniaxial trunk movements (i.e., trunk flexion-extension, trunk side-bending, and trunk rotation) and four multiaxial trunk movements (i.e., flexion + rotation, flexion + side-bending, side-bending + rotation, and moving around the cranial–caudal axis). Each trial lasted for approximately 30 s. Four stretch sensors were attached to each participant’s lower back. The lumbar motion angles were estimated using simple linear regression analysis based on the stretch sensor outputs and compared with those obtained by the optical motion capture system. Results The estimated lumbar motion angles showed a good correlation with the actual angles, with correlation values of r = 0.68 (SD = 0.35), r = 0.60 (SD = 0.19), and r = 0.72 (SD = 0.18) for the flexion-extension, side bending, and rotation movements, respectively (all P < 0.05). The estimation errors in all three directions were less than 3°. Conclusion The stretch sensors mounted on the back provided reasonable estimates of the lumbar motion angles. The novel motion capture system provided three directional angles without capture space limits. The wearable system possessed great potential to monitor the lower back movement in healthcare workers and helping prevent low back pain. PMID:29020053
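The angle-estimation step can be sketched as follows: fit a simple linear model from the four sensor outputs to a reference lumbar angle, then report the correlation between estimated and reference angles, as the paper does. The calibration data here are synthetic; the sensor-to-angle weights and noise level are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic calibration data: 4 stretch-sensor outputs per sample, one lumbar angle
true_w = np.array([8.0, -3.0, 5.0, 1.5])      # hypothetical sensor-to-angle weights
S = rng.uniform(0, 1, size=(200, 4))           # normalized sensor stretches
angle = S @ true_w + rng.normal(0, 1.0, 200)   # reference angle (deg), with noise

# fit a linear model (least squares with an intercept),
# standing in for the paper's simple linear regression
A = np.column_stack([S, np.ones(len(S))])
coef, *_ = np.linalg.lstsq(A, angle, rcond=None)

est = A @ coef
r = np.corrcoef(est, angle)[0, 1]
print(round(r, 2))  # correlation between estimated and reference angles
```

In the study the reference angles come from the optical motion-capture system, and a separate regression is fitted per movement direction.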

  1. Cranio-facial clefts in pre-hispanic America.

    PubMed

    Marius-Nunez, A L; Wasiak, D T

    2015-10-01

    Among the representations of congenital malformations in Moche ceramic art, cranio-facial clefts have been portrayed in pottery found in Moche burials. These pottery vessels were used as domestic items during lifetime and as funerary offerings upon death. The aim of this study was to examine archeological evidence for representations of cranio-facial cleft malformations in Moche vessels. Pottery depicting malformations of the midface in Moche collections in Lima, Peru was studied. The malformations portrayed on pottery were analyzed using the Tessier classification. Photographs were authorized by the Larco Museo. Three vessels were observed to have median cranio-facial dysraphia in association with midline cleft of the lower lip with cleft of the mandible. ML001489 portrays a median cranio-facial dysraphia with an orbital cleft and a midline cleft of the lower lip extending to the mandible. ML001514 represents a median facial dysraphia in association with an orbital facial cleft and a vertical orbital dystopia. ML001491 illustrates a median facial cleft with a soft tissue cleft. Three cases of midline, orbital and lateral facial clefts have been portrayed in Moche full-figure portrait vessels. They represent the earliest registries of congenital cranio-facial malformations in ancient Peru. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210

  3. [Surgical treatment in otogenic facial nerve palsy].

    PubMed

    Feng, Guo-Dong; Gao, Zhi-Qiang; Zhai, Meng-Yao; Lü, Wei; Qi, Fang; Jiang, Hong; Zha, Yang; Shen, Peng

    2008-06-01

    To study the character of facial nerve palsy due to four different auris diseases, including chronic otitis media, Hunt syndrome, tumor and physical or chemical factors, and to discuss the principles of the surgical management of otogenic facial nerve palsy. The clinical characters of 24 patients with otogenic facial nerve palsy due to the four different auris diseases were retrospectively analyzed; all the cases underwent surgical management from October 1991 to March 2007. Facial nerve function was evaluated with the House-Brackmann (HB) grading system. The 24 patients (10 males and 14 females) were analyzed; the palsy was due to cholesteatoma in 12 cases, chronic otitis media in 3 cases, Hunt syndrome in 3 cases, acute otitis media in 2 cases, physical or chemical factors in 2 cases and tumor in 2 cases. All cases were treated with operations, including facial nerve decompression, lesion resection with facial nerve decompression and lesion resection without facial nerve decompression; 1 patient's facial nerve was resected because of the tumor. According to the HB grading system, grade I recovery was attained in 4 cases, grade II in 10 cases, grade III in 6 cases, grade IV in 2 cases, grade V in 2 cases and grade VI in 1 case. Removing the lesions completely was the basic factor in the surgery of otogenic facial palsy; moreover, it was important to perform facial nerve decompression soon after lesion removal.

  4. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Human motion retrieval from hand-drawn sketch.

    PubMed

    Chao, Min-Wen; Lin, Chao-Hung; Assa, Jackie; Lee, Tong-Yee

    2012-05-01

    The rapid growth of motion capture data increases the importance of motion retrieval. The majority of existing motion retrieval approaches are based on a labor-intensive step in which the user browses and selects a desired query motion clip from the large motion clip database. In this work, a novel sketching interface for defining the query is presented. This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users’ expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation and avoids the CPU-intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach retrieves motions well, and is capable of retrieving logically and numerically similar motions, which is superior to previous approaches. The user study shows that the proposed system can be a useful tool for inputting motion queries once users are familiar with it. Finally, an application of generating a 3D animation from a hand-drawn comic strip is demonstrated.
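The encoding idea, projecting a motion onto a fixed orthonormal basis so that clips can be compared by coefficient distance without temporal alignment, can be illustrated with a one-dimensional cosine basis standing in for the paper's spherical harmonics. This is a simplification: the hierarchy and the SH-specific details are omitted, and the signals below are invented.

```python
import numpy as np

def encode(traj, n_coeffs=8):
    """Project a 1-D motion trajectory onto an orthonormal cosine basis,
    keeping only the first few coefficients as a compact descriptor.
    (Spherical harmonics play this role on the sphere in the paper.)"""
    n = traj.size
    k = np.arange(n_coeffs)[:, None]
    t = np.arange(n)[None, :]
    basis = np.cos(np.pi * k * (t + 0.5) / n) * np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)  # orthonormal DCT-II scaling for the DC row
    return basis @ traj

t = np.linspace(0, 1, 100)
walk_a = np.sin(2 * np.pi * 2 * t)           # a cyclic motion
walk_b = np.sin(2 * np.pi * 2 * t + 0.05)    # a numerically similar motion
jump = np.abs(np.sin(np.pi * t)) ** 4        # a different motion

d_similar = np.linalg.norm(encode(walk_a) - encode(walk_b))
d_different = np.linalg.norm(encode(walk_a) - encode(jump))
print(d_similar < d_different)  # similar motions are close in coefficient space
```

Because the basis is orthonormal, distances between coefficient vectors lower-bound distances between the signals themselves, so nearest-neighbour retrieval can run on the short descriptors alone.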

  6. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  7. An Assessment of How Facial Mimicry Can Change Facial Morphology: Implications for Identification.

    PubMed

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    The assessment of facial mimicry is important in forensic anthropology; in addition, the application of modern 3D image acquisition systems may help with the analysis of facial surfaces. This study aimed at presenting a novel method for comparing 3D profiles in different facial expressions. Ten male adults, aged between 30 and 40 years, underwent acquisitions by stereophotogrammetry (VECTRA-3D®) with different expressions (neutral, happy, sad, angry, surprised). The acquisition of each individual was then superimposed on the neutral one according to nine landmarks, and the root mean square (RMS) value between the two expressions was calculated. The highest difference in comparison with the neutral standard was shown by the happy expression (RMS 4.11 mm), followed by the surprised (RMS 2.74 mm), sad (RMS 1.3 mm), and angry ones (RMS 1.21 mm). This pilot study shows that 3D-3D superimposition may provide reliable results concerning facial alteration due to mimicry. © 2016 American Academy of Forensic Sciences.
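Once the two scans are superimposed on the nine landmarks, the RMS comparison reduces to a per-vertex distance computation. A minimal sketch, assuming the scans are already registered and share vertex correspondence; the point cloud and displacement magnitudes below are invented, not data from the study.

```python
import numpy as np

def rms_distance(neutral, expression):
    """Root mean square of per-vertex distances between two registered scans."""
    d = np.linalg.norm(neutral - expression, axis=1)
    return np.sqrt(np.mean(d ** 2))

# toy example: a neutral "surface" and the same points under an expression change
rng = np.random.default_rng(0)
neutral = rng.uniform(-50, 50, size=(1000, 3))       # mm, hypothetical vertex cloud
displacement = rng.normal(0, 2.0, size=(1000, 3))    # per-vertex expression change
expression = neutral + displacement

print(round(rms_distance(neutral, expression), 1))   # a few mm, as in the paper's range
```

A larger RMS means the expression deviates more from the neutral scan, which is how the study ranks happy above surprised, sad, and angry.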

  8. Acneiform facial eruptions

    PubMed Central

    Cheung, Melody J.; Taher, Muba; Lauzon, Gilles J.

    2005-01-01

    OBJECTIVE To summarize clinical recognition and current management strategies for four types of acneiform facial eruptions common in young women: acne vulgaris, rosacea, folliculitis, and perioral dermatitis. QUALITY OF EVIDENCE Many randomized controlled trials (level I evidence) have studied treatments for acne vulgaris over the years. Treatment recommendations for rosacea, folliculitis, and perioral dermatitis are based predominantly on comparison and open-label studies (level II evidence) as well as expert opinion and consensus statements (level III evidence). MAIN MESSAGE Young women with acneiform facial eruptions often present in primary care. Differentiating between morphologically similar conditions is often difficult. Accurate diagnosis is important because treatment approaches are different for each disease. CONCLUSION Careful visual assessment with an appreciation for subtle morphologic differences and associated clinical factors will help with diagnosis of these common acneiform facial eruptions and lead to appropriate management. PMID:15856972

  9. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative region representation for the expression analysis task. The proposed approach relies on interesting studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in the proposition of a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region whilst reducing the feature vector dimension.
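A basic 3x3 LBP operator of the kind applied here can be sketched as follows. The patch values are hypothetical, and the sketch takes the gradient image as given (the paper computes LBP on gradient magnitudes rather than raw intensities).

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 Local Binary Patterns: threshold the 8 neighbours of each
    interior pixel against its centre and pack the bits into a code in [0, 255]."""
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes

# a tiny gradient-magnitude patch (hypothetical); LBP encodes its micro-pattern
patch = np.array([[10, 20, 30],
                  [10, 25, 30],
                  [10, 20, 30]], dtype=np.int32)
print(lbp_image(patch))  # → [[28]]
```

Histograms of these codes over the MI-selected regions would then form the feature vector for expression classification.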

  10. Social Use of Facial Expressions in Hylobatids

    PubMed Central

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual than in non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when the gibbons were facing one another than when they were not. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  11. Facial Anthropometric Norms among Kosovo - Albanian Adults.

    PubMed

    Staka, Gloria; Asllani-Hoxha, Flurije; Bimbashi, Venera

    2017-09-01

    The development of an anthropometric craniofacial database is a necessary multidisciplinary proposal. The aim of this study was to establish facial anthropometric norms and to investigate sexual dimorphism in facial variables among Kosovo Albanian adults. The sample included 204 students of the Dental School, Faculty of Medicine, University of Pristina. Using direct anthropometry, a series of 8 standard facial measurements was taken on each subject with a digital caliper with an accuracy of 0.01 mm (Boss, Hamburg, Germany). The normative data and percentile rankings were calculated. Gender differences in facial variables were analyzed using the t-test for independent samples (p<0.05). The index of sexual dimorphism (ISD) and percentage of sexual dimorphism were calculated for each facial measurement. Normative data for all facial anthropometric measurements were higher in males than in females, and the male average norms differed significantly from the female average norms (p<0.05). The highest index of sexual dimorphism (ISD) was found for the lower facial height, 1.120, for which the highest percentage of sexual dimorphism, 12.01%, was also found. The lowest ISD was found for intercanthal width, 1.022, accompanied by the lowest percentage of sexual dimorphism, 2.23%. The obtained results establish facial anthropometric norms among Kosovo Albanian adults. Sexual dimorphism was confirmed for each facial measurement.
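The dimorphism indices reduce to simple ratios. A sketch using one common definition (ISD = male mean / female mean, percentage of dimorphism = (ISD - 1) x 100, assumed here since the paper does not spell out its formula), with hypothetical group means chosen to reproduce the paper's pattern of values:

```python
# Index of sexual dimorphism (ISD) and percentage of dimorphism
# for one facial measurement.
male_lower_face_height = 72.8    # mm, hypothetical group mean
female_lower_face_height = 65.0  # mm, hypothetical group mean

isd = male_lower_face_height / female_lower_face_height
pct = (isd - 1.0) * 100.0
print(round(isd, 3), round(pct, 2))  # prints 1.12 12.0
```

An ISD of 1.120 with roughly 12% dimorphism matches the magnitude reported for lower facial height; an ISD near 1.0, as for intercanthal width, indicates little male-female difference.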

  12. Facial Specialty. Teacher Edition. Cosmetology Series.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.

    This publication is one of a series of curriculum guides designed to direct and support instruction in vocational cosmetology programs in the State of Oklahoma. It contains seven units for the facial specialty: identifying enemies of the skin, using aromatherapy on the skin, giving facials without the aid of machines, giving facials with the aid…

  13. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face with poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited to handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an Adaboost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.

  14. [Idiopathic facial paralysis in children].

    PubMed

    Achour, I; Chakroun, A; Ayedi, S; Ben Rhaiem, Z; Mnejja, M; Charfeddine, I; Hammami, B; Ghorbel, A

    2015-05-01

    Idiopathic facial palsy is the most common cause of facial nerve palsy in children. Controversy exists regarding treatment options. The objectives of this study were to review the epidemiological and clinical characteristics as well as the outcome of idiopathic facial palsy in children to suggest appropriate treatment. A retrospective study was conducted on children with a diagnosis of idiopathic facial palsy from 2007 to 2012. A total of 37 cases (13 males, 24 females) with a mean age of 13.9 years were included in this analysis. The mean duration between onset of Bell's palsy and consultation was 3 days. Of these patients, 78.3% had moderately severe (grade IV) or severe paralysis (grade V on the House and Brackmann grading). Twenty-seven patients were treated in an outpatient context, three patients were hospitalized, and seven patients were treated as outpatients and subsequently hospitalized. All patients received corticosteroids. Eight of them also received antiviral treatment. The complete recovery rate was 94.6% (35/37). The duration of complete recovery was 7.4 weeks. Children with idiopathic facial palsy have a very good prognosis. The complete recovery rate exceeds 90%. However, controversy exists regarding treatment options. High-quality studies have been conducted on adult populations. Medical treatment based on corticosteroids alone or combined with antiviral treatment is certainly effective in improving facial function outcomes in adults. In children, the recommendation for prescription of steroids and antiviral drugs based on adult treatment appears to be justified. Randomized controlled trials in the pediatric population are recommended to define a strategy for management of idiopathic facial paralysis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  15. Reconstruction of facial nerve injuries in children.

    PubMed

    Fattah, Adel; Borschel, Gregory H; Zuker, Ron M

    2011-05-01

    Facial nerve trauma is uncommon in children, and many spontaneously recover some function; nonetheless, loss of facial nerve activity leads to functional impairment of ocular and oral sphincters and nasal orifice. In many cases, the impediment posed by facial asymmetry and reduced mimetic function more significantly affects the child's psychosocial interactions. As such, reconstruction of the facial nerve affords great benefits in quality of life. The therapeutic strategy is dependent on numerous factors, including the cause of facial nerve injury, the deficit, the prognosis for recovery, and the time elapsed since the injury. The options for treatment include a diverse range of surgical techniques including static lifts and slings, nerve repairs, nerve grafts and nerve transfers, regional, and microvascular free muscle transfer. We review our strategies for addressing facial nerve injuries in children.

  16. Contemporary solutions for the treatment of facial nerve paralysis.

    PubMed

    Garcia, Ryan M; Hadlock, Tessa A; Klebuc, Michael J; Simpson, Roger L; Zenn, Michael R; Marcus, Jeffrey R

    2015-06-01

    After reviewing this article, the participant should be able to: 1. Understand the most modern indications and technique for neurotization, including masseter-to-facial nerve transfer (fifth-to-seventh cranial nerve transfer). 2. Contrast the advantages and limitations associated with contiguous muscle transfers and free-muscle transfers for facial reanimation. 3. Understand the indications for two-stage and one-stage free gracilis muscle transfer for facial reanimation. 4. Apply nonsurgical adjuvant treatments for acute facial nerve paralysis. Facial expression is a complex neuromotor and psychomotor process that is disrupted in patients with facial paralysis, breaking the link between emotion and physical expression. Contemporary reconstructive options are being implemented in patients with facial paralysis. While static procedures provide facial symmetry at rest, true 'facial reanimation' requires restoration of facial movement. Contemporary treatment options include neurotization procedures (a new motor nerve is used to restore innervation to a viable muscle), contiguous regional muscle transfer (most commonly temporalis muscle transfer), microsurgical free muscle transfer, and nonsurgical adjuvants used to balance facial symmetry. Each approach has advantages and disadvantages along with ongoing controversies and should be individualized for each patient. Treatments for patients with facial paralysis continue to evolve in order to restore the complex psychomotor process of facial expression.

  17. Dense mesh sampling for video-based facial animation

    NASA Astrophysics Data System (ADS)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    The paper describes an approach for selecting feature points on a three-dimensional triangle mesh obtained using various techniques from several video footages. This approach has a dual purpose. First, it minimizes the data stored for facial animation: instead of storing the position of each vertex in each frame, one can store only a small subset of vertices per frame and calculate the positions of the others from that subset. Second, it selects feature points that can be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond that achievable with marker-based performance capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured light scanner, and models constructed from video footage using stereophotogrammetry.

  18. Improving posttraumatic facial scars.

    PubMed

    Ardeshirpour, Farhad; Shaye, David A; Hilger, Peter A

    2013-10-01

    Posttraumatic soft-tissue injuries of the face are often the most lasting sequelae of facial trauma. The disfigurement of posttraumatic scarring lies in both their physical deformity and psychosocial ramifications. This review outlines a variety of techniques to improve facial scars and limit their lasting effects. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Facial Animations: Future Research Directions & Challenges

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Nowadays, computer facial animation is used in a multitude of fields, from computer games and films to interactive multimedia. Authoring computer facial animation with complex and subtle expressions is challenging and fraught with problems, and most facial animation is currently authored with general-purpose computer animation techniques, which often limit the quality and quantity of the facial animation produced. Although growing computing power, better understanding of the face, software sophistication, and new face-centric methods are emerging, many of these methods are still immature. This paper therefore surveys facial animation experts in order to define the recent state of the field, the observed bottlenecks, and the developing techniques, and to categorize current and emerging approaches. The paper further presents a real-time simulation model of human worry and howling, with a detailed discussion of the perception of astonishment, sorrow, annoyance and panic.

  20. Full-face motorcycle helmet protection from facial impacts: an investigation using THOR dummy impacts and SIMon finite element head model.

    PubMed

    Whyte, Thomas; Gibson, Tom; Eager, David; Milthorpe, Bruce

    2017-06-01

    Facial impacts are both common and injurious for helmeted motorcyclists who crash; however, there is no facial impact requirement in major motorcycle helmet standards. This study examined the effect of full-face motorcycle helmet protection on brain injury risk in facial impacts using a test device with biofidelic head and neck motion. A preliminary investigation of energy-absorbing foam in the helmet chin bar was carried out. Flat-faced rigid pendulum impacts were performed on a THOR dummy in an unprotected (no helmet) and a protected mode (two full-face helmet conditions). The head responses of the dummy were input into the Simulated Injury Monitor (SIMon) finite element head model to analyse the risk of brain injury in these impacts. Full-face helmet protection provides a significant reduction in brain injury risk in facial impacts at increasing impact speeds compared with an unprotected rider (p<0.05). The effect of low-density crushable foam added to the chin bar could not be distinguished from an unpadded chin bar impact. Despite the lack of an impact attenuation requirement for the face, full-face helmets do provide a reduction in head injury risk to the wearer in facial impacts. The specific helmet design factors that influence head injury risk in facial impacts need further investigation if improved protection for helmeted motorcyclists is to be achieved. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  1. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture (OMC) system. Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSE) of the moment/force time series and the intraclass correlation coefficient (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSE remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMSE remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. Copyright © 2015 Elsevier Ltd. All rights reserved.
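    The two agreement statistics used in this record, RMSE over time series and the ICC of peak values, can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the two-way random, single-measure form ICC(2,1) is assumed here, and the peak forces below are hypothetical.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two equal-length time series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean((a - b) ** 2))

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    x has shape (n_subjects, k_systems)."""
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subject SS
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-system SS
    sse = ((x - grand) ** 2).sum() - ssr - ssc        # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical peak vertical GRFs (N): force-plate vs IMC estimate per subject.
peaks = np.array([[812., 806.], [920., 927.], [874., 869.],
                  [951., 958.], [890., 884.]])
print(icc_2_1(peaks))  # close to 1 when the two systems agree
```

    High ICC here reflects that between-subject variation dominates the small between-system disagreement, which is exactly the situation the abstract reports for the peak extension moment and vertical GRF.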

  2. Facial animation on an anatomy-based hierarchical face model

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like the real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. The presence of the skull model gives our facial model the advantage of both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible, realistic facial expressions.

  3. Reconstruction of facial nerve after radical parotidectomy.

    PubMed

    Renkonen, Suvi; Sayed, Farid; Keski-Säntti, Harri; Ylä-Kotola, Tuija; Bäck, Leif; Suominen, Sinikka; Kanerva, Mervi; Mäkitie, Antti A

    2015-01-01

    Most patients benefitted from immediate facial nerve grafting after radical parotidectomy. Even weak movement is valuable and can be augmented with secondary static operations. Post-operative radiotherapy does not seem to affect the final outcome of facial function. During radical parotidectomy, the sacrifice of the facial nerve results in severe disfigurement of the face. Data on the principles and outcome of facial nerve reconstruction and reanimation after radical parotidectomy are limited and no consensus exists on the best practice. This study retrospectively reviewed all patients having undergone radical parotidectomy and immediate facial nerve reconstruction with a free, non-vascularized nerve graft at the Helsinki University Hospital, Helsinki, Finland during the years 1990-2010. There were 31 patients (18 male; mean age = 54.7 years; range = 30-82) and 23 of them had a sufficient follow-up time. Facial nerve function recovery was seen in 18 (78%) of the 23 patients with a minimum of 2-year follow-up and adequate reporting available. Only slight facial movement was observed in five (22%), moderate or good movement in nine (39%), and excellent movement in four (17%) patients. Twenty-two (74%) patients received post-operative radiotherapy and 16 (70%) of them had some recovery of facial nerve function. Nineteen (61%) patients needed secondary static reanimation of the face.

  4. Modeling 3D Facial Shape from DNA

    PubMed Central

    Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.

    2014-01-01

    Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127

  5. Geometric Brownian Motion with Tempered Stable Waiting Times

    NASA Astrophysics Data System (ADS)

    Gajda, Janusz; Wyłomańska, Agnieszka

    2012-08-01

    One of the earliest models used to describe asset prices is the Black-Scholes model. It is based on geometric Brownian motion and was used as a tool for pricing various financial instruments. However, when it comes to describing data, geometric Brownian motion is not capable of capturing many properties of present financial markets, for instance periods of constant values. Therefore we propose an alternative approach based on subordinated tempered stable geometric Brownian motion, which is a combination of the popular geometric Brownian motion and the inverse tempered stable subordinator. In this paper we introduce the mentioned process and present its main properties. We also propose an estimation procedure and calibrate the analyzed system to real data.
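    As a rough illustration of the process described above, the following sketch simulates a geometric Brownian motion time-changed by the inverse of a tempered stable subordinator. This is not the authors' code: the one-sided stable increments use Kanter's method (up to a scale normalization), the exponential tempering is applied by rejection sampling, and all parameter values are arbitrary. The flat stretches of the inverse subordinator are what reproduce the "periods of constant values" mentioned in the abstract.

```python
import numpy as np

def simulate_subordinated_gbm(x0=1.0, mu=0.05, sigma=0.2, alpha=0.7,
                              lam=1.0, tau_max=1.0, n=2000, seed=0):
    """Sketch: GBM time-changed by the inverse of a tempered stable subordinator."""
    rng = np.random.default_rng(seed)
    dtau = tau_max / n  # operational-time step for the subordinator

    def ts_increment():
        # Tempered stable increment: one-sided alpha-stable draw (Kanter's
        # method, 0 < alpha < 1) accepted with probability exp(-lam * s).
        while True:
            u = rng.uniform(0.0, np.pi)
            e = rng.exponential()
            s = (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
                 * (np.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))
            s *= dtau ** (1.0 / alpha)           # self-similar scaling over dtau
            if rng.uniform() < np.exp(-lam * s):  # exponential tempering
                return s

    # Subordinator path T(tau), then its inverse E(t) on a physical-time grid.
    T = np.cumsum([ts_increment() for _ in range(n)])
    t_grid = np.linspace(0.0, T[-1], n)
    E = np.searchsorted(T, t_grid) * dtau        # E(t) = inf{tau : T(tau) > t}

    # GBM evaluated at the operational time E(t); during flat periods of E,
    # dE = 0, so the Brownian term and hence the price stay constant.
    dE = np.diff(E, prepend=0.0)
    B = np.cumsum(rng.normal(0.0, 1.0, n) * np.sqrt(dE))
    X = x0 * np.exp((mu - 0.5 * sigma ** 2) * E + sigma * B)
    return t_grid, X

t, X = simulate_subordinated_gbm()
print(X.min(), X.max())
```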

  6. Are recent empirical directivity models sufficient in capturing near-fault directivity effect?

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Shin; Cotton, Fabrice; Pagani, Marco; Weatherill, Graeme; Reshi, Owais; Mai, Martin

    2017-04-01

    It has been widely observed that the ground motion variability in the near field can be significantly higher than that commonly reported in published GMPEs, and this has been suggested to be a consequence of directivity. To capture the spatial variation in ground motion amplitude and frequency caused by the near-fault directivity effect, several models for engineering applications have been developed using empirical or, more recently, the combination of empirical and simulation data. Many research works have indicated that the large velocity pulses mainly observed in the near-field are primarily related to slip heterogeneity (i.e., asperities), suggesting that the slip heterogeneity is a more dominant controlling factor than the rupture velocity or source rise time function. The first generation of broadband directivity models for application in ground motion prediction do not account for heterogeneity of slip and rupture speed. With the increased availability of strong motion recordings (e.g., NGA-West 2 database) in the near-fault region, the directivity models moved from broadband to narrowband models to include the magnitude dependence of the period of the rupture directivity pulses, wherein the pulses are believed to be closely related to the heterogeneity of slip distribution. After decades of directivity models development, does the latest generation of models - i.e. the one including narrowband directivity models - better capture the near-fault directivity effects, particularly in presence of strong slip heterogeneity? To address this question, a set of simulated motions for an earthquake rupture scenario, with various kinematic slip models and hypocenter locations, are used as a basis for a comparison with the directivity models proposed by the NGA-West 2 project for application with ground motion prediction equations incorporating a narrowband directivity model. 
The aim of this research is to gain better insights on the accuracy of narrowband directivity

  7. How to Avoid Facial Nerve Injury in Mastoidectomy?

    PubMed Central

    Ryu, Nam-Gyu

    2016-01-01

    Unexpected iatrogenic facial nerve paralysis not only causes facial disfigurement, but also imposes a devastating effect on the social, psychological, and economic aspects of an affected person's life at once. The aims of this study were to postulate where surgeons had mistakenly drilled or where the nerve had been obscured by granulation tissue or fibrous bands, and to look for a surgical approach focused on the safety of the facial nerve in mastoid surgery. We found 14 cases of iatrogenic facial nerve injury (IFNI) during mastoid surgery over 5 years in Korea. The medical records of all the patients were obtained, and the injured facial nerve segment was analyzed together with the mastoidectomy technique. Eleven patients underwent facial nerve exploration and three patients had conservative management. 43% (6 cases) of the iatrogenic facial nerve injuries had occurred in the tympanic segment, 28.5% (4 cases) in the second genu combined with the tympanic segment, and 28.5% (4 cases) in the mastoid segment. Surgeons should try to identify the facial nerve using available landmarks and keep in mind the possible anomalies of the facial nerve. With the use of intraoperative facial nerve monitoring, IFNI could be avoided in more cases. Many authors have emphasized the importance of intraoperative facial nerve monitoring, even in primary otologic surgery. However, anatomical understanding of intratemporal landmarks, together with meticulous dissection, cannot be overemphasized as a means to prevent IFNI. PMID:27626078

  8. Preservation of Facial Nerve Function Repaired by Using Fibrin Glue-Coated Collagen Fleece for a Totally Transected Facial Nerve during Vestibular Schwannoma Surgery

    PubMed Central

    Choi, Kyung-Sik; Kim, Min-Su; Jang, Sung-Ho

    2014-01-01

    Recently, increasing rates of facial nerve preservation after vestibular schwannoma (VS) surgery have been achieved. However, the management of a partially or completely damaged facial nerve remains an important issue. The authors report a patient who had a good recovery after facial nerve reconstruction using fibrin glue-coated collagen fleece for a totally transected facial nerve during VS surgery. We verified the anatomical preservation and functional outcome of the facial nerve with postoperative diffusion tensor (DT) imaging facial nerve tractography, electroneurography (ENoG) and House-Brackmann (HB) grading. DT imaging tractography on the 3rd postoperative day revealed preservation of the facial nerve, and the facial nerve degeneration ratio was 94.1% on 7th postoperative day ENoG. At the postoperative 3-month and 1-year follow-up examinations with DT imaging facial nerve tractography and ENoG, good results for facial nerve function were observed. PMID:25024825

  9. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy.

    PubMed

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-10-01

    Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop scaling system versus the facial grading system (FGS). In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after treatment was performed by FGS and Photoshop measurements. The mean values of FGS before and after the treatment were 35 ± 25 and 67 ± 24, respectively (p < 0.001). In the Photoshop assessment, the mean changes of facial expression on the impaired side relative to the normal side, in the rest position and in three main movements of the face, were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after the treatment, respectively (p < 0.001). Spearman's correlation coefficient between the values from the two methods was 0.66 (p < 0.001). Evaluating facial nerve palsy using Photoshop was more objective than using FGS. Therefore, it may be recommended to use this method instead.
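    Spearman's correlation, the agreement measure used in this record, is simply the Pearson correlation computed on ranks. A minimal NumPy sketch (illustrative only; the paired scores below are hypothetical, and tied values would need average ranks, which this simple version does not handle):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation for tie-free data:
    rank each sample, then take the Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a))  # ranks 0..n-1 (assumes no ties)
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Hypothetical paired scores: FGS vs Photoshop-based measurement (mm).
fgs = np.array([20., 35., 50., 65., 80., 95.])
photoshop = np.array([2.9, 3.3, 3.1, 3.9, 4.2, 4.5])
print(spearman(fgs, photoshop))  # 1.0 would mean perfectly monotone agreement
```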

  10. Facial transplantation: A concise update

    PubMed Central

    Barrera-Pulido, Fernando; Gomez-Cia, Tomas; Sicilia-Castro, Domingo; Garcia-Perla-Garcia, Alberto; Gacto-Sanchez, Purificacion; Hernandez-Guisado, Jose-Maria; Lagares-Borrego, Araceli; Narros-Gimenez, Rocio; Gonzalez-Padilla, Juan D.

    2013-01-01

    Objectives: To update on the clinical results obtained by the first worldwide facial transplantation teams, and to review the literature concerning the main surgical, immunological, ethical, and follow-up aspects described in facial transplanted patients. Study design: MEDLINE search of articles published on “face transplantation” until March 2012. Results: Eighteen clinical cases were studied. The mean patient age was 37.5 years, with a higher prevalence of men. The main surgical indication was gunshot injury (6 patients). All patients had previously undergone multiple conventional surgical reconstructive procedures, which had failed. Altogether 8 transplant teams belonging to 4 countries participated. Thirteen partial face transplantations and 5 full face transplantations have been performed. Allografts varied according to the face's anatomical components and the amount of skin, muscle, bone, and other tissues included, though all were grafted successfully and remained viable without significant postoperative surgical complications. The patient with the longest follow-up was at 5 years. Two patients died 2 and 27 months after transplantation. Conclusions: Clinical experience has demonstrated the feasibility of facial transplantation as a valuable reconstructive option, but it is still considered an experimental procedure with unresolved issues to settle. Results show that from a clinical, technical, and immunological standpoint, facial transplantation has achieved functional, aesthetic, and social rehabilitation in severely disfigured patients. Key words: Face transplantation, composite tissue transplantation, face allograft, facial reconstruction, outcomes and complications of face transplantation. PMID:23229268

  11. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    ERIC Educational Resources Information Center

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  12. Fractional CO2 laser resurfacing of photoaged facial and non-facial skin: histologic and clinical results and side effects.

    PubMed

    Sasaki, Gordon H; Travis, Heather M; Tucker, Barbara

    2009-12-01

    CO(2) fractional ablation offers the potential for facial and non-facial skin resurfacing with minimal downtime and rapid recovery. The purpose of this study was (i) to document the average depths and density of adnexal structures in non-lasered facial and non-facial body skin; (ii) to determine injury in ex vivo human thigh skin with varying fractional laser modes; and (iii) to evaluate the clinical safety and efficacy of treatments. Histologies were obtained from non-lasered facial and non-facial skin from 121 patients and from 14 samples of excised lasered thigh skin. Seventy-one patients were evaluated after treatment at varying energy (mJ) and density settings by superficial ablation, deeper penetration, and combined treatment. Skin thickness and adnexal density in non-lasered skin exhibited variable ranges: epidermis (47-105 μm); papillary dermis (61-105 μm); reticular dermis (983-1986 μm); hair follicles (2-14/HPF); sebaceous glands (2-23/HPF); sweat glands (2-7/HPF). Histological studies of samples from human thigh skin demonstrated that increased fluences in the superficial, deep and combined modes resulted in predictably deeper levels of ablation and thermal injury. An increase in density settings resulted in total ablation of the epidermis. Clinical improvement of rhytids and pigmentation in facial and non-facial skin was proportional to increasing energy and density settings. Patient assessments and clinical gradings of outcomes by the Wilcoxon test correlated with more aggressive settings. Prior knowledge of normal skin depths and adnexal densities, as well as ex vivo skin laser-injury profiles at varying fluences and densities, improves the safety and efficiency of fractional CO(2) resurfacing for photorejuvenation of facial and non-facial skin.

  13. Facial nerve paralysis in children

    PubMed Central

    Ciorba, Andrea; Corazzi, Virginia; Conz, Veronica; Bianchini, Chiara; Aimoni, Claudia

    2015-01-01

    Facial nerve palsy is a condition with several implications, particularly when occurring in childhood. It represents a serious clinical problem as it causes significant concern in doctors because of its etiology, its treatment options and its outcome, as well as in young patients and their parents, because of the functional and aesthetic outcomes. There are several described causes of facial nerve paralysis in children, as it can be congenital (due to delivery traumas and genetic or malformative diseases) or acquired (due to infective, inflammatory, neoplastic, traumatic or iatrogenic causes). Nonetheless, in approximately 40%-75% of the cases, the cause of unilateral facial paralysis still remains idiopathic. A careful diagnostic work-up and differential diagnosis are particularly recommended in cases of pediatric facial nerve palsy, in order to establish the most appropriate treatment, as the therapeutic approach differs in relation to the etiology. PMID:26677445

  14. Enhanced Facial Symmetry Assessment in Orthodontists

    PubMed Central

    Jackson, Tate H.; Clark, Kait; Mitroff, Stephen R.

    2013-01-01

    Assessing facial symmetry is an evolutionarily important process, which suggests that individual differences in this ability should exist. As existing data are inconclusive, the current study explored whether a group trained in facial symmetry assessment, orthodontists, possessed enhanced abilities. Symmetry assessment was measured using face and non-face stimuli among orthodontic residents and two control groups: university participants with no symmetry training and airport security luggage screeners, a group previously shown to possess expert visual search skills unrelated to facial symmetry. Orthodontic residents were more accurate at assessing symmetry in both upright and inverted faces compared to both control groups, but not for non-face stimuli. These differences are not likely due to motivational biases or a speed-accuracy tradeoff—orthodontic residents were slower than the university participants but not the security screeners. Understanding such individual differences in facial symmetry assessment may inform the perception of facial attractiveness. PMID:24319342

  15. Association of Frontal and Lateral Facial Attractiveness.

    PubMed

    Gu, Jeffrey T; Avilla, David; Devcic, Zlatko; Karimi, Koohyar; Wong, Brian J F

    2018-01-01

    Despite the large number of studies focused on defining frontal or lateral facial attractiveness, no reports have examined whether a significant association between frontal and lateral facial attractiveness exists. To examine the association between frontal and lateral facial attractiveness and to identify anatomical features that may influence discordance between frontal and lateral facial beauty. Paired frontal and lateral facial synthetic images of 240 white women (age range, 18-25 years) were evaluated from September 30, 2004, to September 29, 2008, using an internet-based focus group (n = 600) on an attractiveness Likert scale of 1 to 10, with 1 being least attractive and 10 being most attractive. Data analysis was performed from December 6, 2016, to March 30, 2017. The association between frontal and lateral attractiveness scores was determined using linear regression. Outliers were defined as data outside the 95% individual prediction interval. To identify features contributing to score discordance between frontal and lateral attractiveness scores, each outlier image pair was scrutinized by an evaluator panel for facial features that were present in the frontal or lateral projection and absent in the other. Attractiveness scores were obtained from internet-based focus groups. For the 240 white women studied (mean [SD] age, 21.4 [2.2] years), attractiveness scores ranged from 3.4 to 9.5 for frontal images and 3.3 to 9.4 for lateral images. The mean (SD) frontal attractiveness score was 6.9 (1.4), whereas the mean (SD) lateral attractiveness score was 6.4 (1.3). Simple linear regression of frontal and lateral attractiveness scores yielded a coefficient of determination of r2 = 0.749. Eight outlier pairs were identified and analyzed by panel evaluation. Panel evaluation revealed no clinically applicable association between frontal and lateral images among outliers; however, contributory facial features were suggested
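
    The study's outlier rule (points falling outside the 95% individual prediction interval of a simple least-squares fit) can be sketched roughly as follows. This is a minimal pure-Python sketch: the data values and the tcrit ≈ 2 approximation to the t quantile are illustrative assumptions, not the study's scores.

```python
import math

def linreg_outliers(x, y, tcrit=2.0):
    """Ordinary least-squares fit of y on x, returning (r2, outlier_indices).

    Outliers are points outside an approximate 95% individual prediction
    interval; tcrit ~ 2 roughly approximates the t quantile for moderate n.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    sse = sum(r ** 2 for r in resid)
    sst = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - sse / sst               # coefficient of determination
    s = math.sqrt(sse / (n - 2))       # residual standard error
    outliers = []
    for i, (xi, ri) in enumerate(zip(x, resid)):
        # half-width of the individual prediction interval at xi
        half = tcrit * s * math.sqrt(1 + 1 / n + (xi - mx) ** 2 / sxx)
        if abs(ri) > half:
            outliers.append(i)
    return r2, outliers
```

    For perfectly collinear data, e.g. `linreg_outliers([1, 2, 3, 4], [3, 5, 7, 9])`, the fit explains all variance and no point leaves the interval.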

  16. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    DTIC Science & Technology

    2011-09-01

    be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the...to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of...combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better

  17. Compensation procedures for facial asymmetries.

    PubMed

    Kozol, F

    1995-01-01

    Why would a patient complain of "fuzzy and uncomfortable" vision with a variety of glasses? Perhaps because the practitioner has failed to take facial asymmetry into account. Methods of measuring facial asymmetry and optically correcting for it are discussed.

  18. The long- and short-term variability of breathing induced tumor motion in lung and liver over the course of a radiotherapy treatment.

    PubMed

    Dhont, Jennifer; Vandemeulebroucke, Jef; Burghelea, Manuela; Poels, Kenneth; Depuydt, Tom; Van Den Begin, Robbe; Jaudet, Cyril; Collen, Christine; Engels, Benedikt; Reynders, Truus; Boussaer, Marlies; Gevaert, Thierry; De Ridder, Mark; Verellen, Dirk

    2018-02-01

    To evaluate the short- and long-term variability of breathing-induced tumor motion. 3D tumor motion of 19 lung and 18 liver lesions captured over the course of an SBRT treatment was evaluated and compared to the motion on 4D-CT. An implanted fiducial could be used for unambiguous motion information. Fast orthogonal fluoroscopy (FF) sequences, included in the treatment workflow, were used to evaluate motion during treatment. Several motion parameters were compared between different FF sequences from the same fraction to evaluate the intrafraction variability. To assess interfraction variability, amplitude and hysteresis were compared between fractions and with the 3D tumor motion registered by 4D-CT. Population-based margins, necessary on top of the ITV to capture all motion variability, were calculated based on the motion captured during treatment. Baseline drift in the cranio-caudal (CC) or anterior-posterior (AP) direction is significant (i.e., >5 mm) for a large group of patients, in contrast to intrafraction amplitude and hysteresis variability. However, a correlation between intrafraction amplitude variability and mean motion amplitude was found (Pearson's correlation coefficient, r = 0.72, p < 10^-4). Interfraction variability in amplitude is significant for 46% of all lesions. As such, 4D-CT accurately captures the motion during treatment for some fractions but not for all. Accounting for motion variability during treatment increases the PTV margins in all directions, most significantly in CC, from 5 mm to 13.7 mm for lung and 8.0 mm for liver. Both short-term and day-to-day tumor motion variability can be significant, especially for lesions moving with amplitudes above 7 mm. Abandoning passive motion management strategies in favor of more active ones is advised. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Validity of clinical outcome measures to evaluate ankle range of motion during the weight-bearing lunge test.

    PubMed

    Hall, Emily A; Docherty, Carrie L

    2017-07-01

    To determine the concurrent validity of standard clinical outcome measures compared to a laboratory outcome measure while performing the weight-bearing lunge test (WBLT). Cross-sectional study. Fifty participants performed the WBLT to determine dorsiflexion ROM using four different measurement techniques: dorsiflexion angle with a digital inclinometer 15cm distal to the tibial tuberosity (°), dorsiflexion angle with the inclinometer at the tibial tuberosity (°), maximum lunge distance (cm), and dorsiflexion angle using a 2D motion capture system (°). Outcome measures were recorded concurrently during each trial. To establish concurrent validity, Pearson product-moment correlation coefficients (r) were computed, comparing each dependent variable to the 2D motion capture analysis (identified as the reference standard). A higher correlation indicates stronger concurrent validity. There was a high correlation between each measurement technique and the reference standard. Specifically, the correlation between the inclinometer placement 15cm below the tibial tuberosity (44.9°±5.5°) and the motion capture angle (27.0°±6.0°) was r=0.76 (p=0.001), between the inclinometer placement at the tibial tuberosity (39.0°±4.6°) and the motion capture angle was r=0.71 (p=0.001), and between the distance-from-the-wall clinical measure (10.3±3.0cm) and the motion capture angle was r=0.74 (p=0.001). This study determined that the clinical measures used during the WBLT have a high correlation with the reference standard for assessing dorsiflexion range of motion. Therefore, obtaining maximum lunge distance and inclinometer angles are both valid assessments during the weight-bearing lunge test. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
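
    The validity analysis above rests on the Pearson product-moment coefficient; a minimal pure-Python sketch of that statistic follows (the example values are illustrative, not the study's angle measurements):

```python
import math

def pearson_r(a, b):
    """Pearson product-moment correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))   # unnormalized covariance
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))          # sqrt of sum of squares
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)
```

    Perfectly collinear data give r = 1 (or -1 for an inverse relationship); the study's r values of 0.71-0.76 sit well toward the upper end of that range.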

  20. Facial gunshot wound debridement: debridement of facial soft tissue gunshot wounds.

    PubMed

    Shvyrkov, Michael B

    2013-01-01

    Over the period 1981-1985 the author treated 1486 patients with facial gunshot wounds sustained in combat in Afghanistan. In the last quarter of the 20th century, more powerful and destructive weapons, such as M-16 rifles and AK-47 and Kalashnikov submachine guns, became available, and a new approach to gunshot wound debridement is required. Modern surgeons have little experience in the treatment of such wounds because of rare contact with similar pathology. This article is intended to explore modern wound debridement. The management of 502 isolated soft tissue injuries is presented. Existing principles recommend the sparing of damaged tissues. The author's experience was that tissue sparing led to a high rate of complications (47.6%). Radical primary surgical debridement (RPSD) of wounds was then adopted, with radical excision of necrotic, non-viable wound margins containing infection to the point of active capillary bleeding and immediate primary wound closure. After radical debridement, wound infection and breakdown decreased by a factor of 10. Plastic operations with local and remote soft tissue were performed on 14.7% of the wounded. Only 0.7% of patients required discharge from the army due to facial muscle paralysis and/or facial skin impregnation with particles of gunpowder from mine explosions. Gunshot face wound; modern debridement. Copyright © 2012 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  1. Diagnosis and surgical outcomes of intraparotid facial nerve schwannoma showing normal facial nerve function.

    PubMed

    Lee, D W; Byeon, H K; Chung, H P; Choi, E C; Kim, S-H; Park, Y M

    2013-07-01

    The findings of intraparotid facial nerve schwannoma (FNS) using preoperative diagnostic tools, including ultrasonography (US)-guided fine needle aspiration biopsy, computed tomography (CT) scan, and magnetic resonance imaging (MRI), were analyzed to determine if there are any useful findings that might suggest the presence of a lesion. Treatment guidelines are suggested. The medical records of 15 patients who were diagnosed with an intraparotid FNS were retrospectively analyzed. US and CT scans provide clinicians with only limited information; gadolinium-enhanced T1-weighted images from MRI provide more specific findings. Tumors could be removed successfully with surgical exploration, preserving facial nerve function at the same time. Gadolinium-enhanced T1-weighted MRI showed more characteristic findings for the diagnosis of intraparotid FNS. Intraparotid FNS without facial palsy can be diagnosed with MRI preoperatively, and surgical exploration is a suitable treatment modality which can remove the tumor and preserve facial nerve function. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  2. Biometrics: A Look at Facial Recognition

    DTIC Science & Technology

    a facial recognition system in the city’s Oceanfront tourist area. The system has been tested and has recently been fully implemented. Senator...Kenneth W. Stolle, the Chairman of the Virginia State Crime Commission, established a Facial Recognition Technology Sub-Committee to examine the issue of... facial recognition technology. This briefing begins by defining biometrics and discussing examples of the technology. It then explains how biometrics

  3. Facial trauma: general principles of management.

    PubMed

    Hollier, Larry H; Sharabi, Safa E; Koshy, John C; Stal, Samuel

    2010-07-01

    Facial fractures are common problems encountered by the plastic surgeon. Although ubiquitous in nature, their optimal treatment requires precise knowledge of the most recent evidence-based and technologically advanced recommendations. This article discusses a variety of contemporary issues regarding facial fractures, including physical and radiologic diagnosis, treatment pearls and caveats, and the role of various synthetic materials and plating technologies for optimal facial fracture fixation.

  4. Agency and facial emotion judgment in context.

    PubMed

    Ito, Kenichi; Masuda, Takahiko; Li, Liman Man Wai

    2013-06-01

    Past research showed that East Asians' belief in holism was expressed as their tendencies to include background facial emotions into the evaluation of target faces more than North Americans. However, this pattern can be interpreted as North Americans' tendency to downplay background facial emotions due to their conceptualization of facial emotion as volitional expression of internal states. Examining this alternative explanation, we investigated whether different types of contextual information produce varying degrees of effect on one's face evaluation across cultures. In three studies, European Canadians and East Asians rated the intensity of target facial emotions surrounded with either affectively salient landscape sceneries or background facial emotions. The results showed that, although affectively salient landscapes influenced the judgment of both cultural groups, only European Canadians downplayed the background facial emotions. The role of agency as differently conceptualized across cultures and multilayered systems of cultural meanings are discussed.

  5. Thermographic imaging of facial and ventilatory activity during vocalization, speech and expiration (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Izdebski, Krzysztof; Jarosz, Paweł; Usydus, Ireneusz

    2017-02-01

    Ventilation, speech and singing must use facial musculature to complete these motor tasks, and these tasks are fueled by the air we inhale. This motor process requires an increase in blood flow as the muscles contract and relax; therefore, skin surface temperature changes are expected. Hence, we used thermography to image these effects. The system used was the thermography camera model FLIR X6580sc with a chilled detector (FLIR Systems Advanced Thermal Solutions, 27700 SW Parkway Ave Wilsonville, OR 97070, USA). To assure improved imaging, the room temperature was air-conditioned to +18 °C. All images were recorded at a speed of 30 f/s. Acquired data were analyzed with FLIR Research IR Max Version 4 software and software filters. In this preliminary study a male subject was imaged from frontal and lateral views simultaneously while he performed normal resting ventilation, speech and song. The lateral image was captured in a stainless steel mirror. Results showed different levels of heat flow in the facial musculature as a function of these three tasks. We were also able to capture the directionality of the exhaled air jet. The breathing jet was discharged in a horizontal direction, the speaking-voice jet was discharged downwards, while the singing jet went upward. We interpreted these jet directions as representing different gas content of the air expired during these different tasks, with speech having less oxygen than singing. Further studies examining gas exchange during various forms of speech and song and emotional states are warranted.

  6. Motion data classification on the basis of dynamic time warping with a cloud point distance measure

    NASA Astrophysics Data System (ADS)

    Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of classification of model-free motion data. A nearest-neighbor classifier based on comparisons performed by a Dynamic Time Warping (DTW) transform with a cloud point distance measure is proposed. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is taken into consideration. The motion capture database containing data from 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. What is more, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which makes the validation reliable.
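
    A rough sketch of such a classifier, assuming each frame is a 3D point cloud: the symmetric mean nearest-neighbour (Chamfer-style) measure used here is one plausible reading of a "cloud point distance", not the paper's published definition.

```python
import math

def cloud_dist(a, b):
    """Symmetric mean nearest-neighbour distance between two point clouds
    (lists of 3-tuples); an assumed stand-in for the paper's measure."""
    d_ab = sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
    d_ba = sum(min(math.dist(p, q) for q in a) for p in b) / len(b)
    return 0.5 * (d_ab + d_ba)

def dtw(seq_a, seq_b, dist=cloud_dist):
    """Dynamic Time Warping cost between two sequences of point clouds."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(seq_a[i - 1], seq_b[j - 1])
            # classic DP recurrence: insertion, deletion, or match step
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify_1nn(query, labelled):
    """Nearest-neighbour classification; labelled is [(sequence, label), ...]."""
    return min(labelled, key=lambda sl: dtw(query, sl[0]))[1]
```

    A query gait sequence is simply assigned the label of the template with the smallest DTW cost.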

  7. Aberrant patterns of visual facial information usage in schizophrenia.

    PubMed

    Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M

    2013-05-01

    Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association

  8. Signs of essential blepharospasm: a motion-picture analysis.

    PubMed

    Coles, W H

    1977-06-01

    Motion pictures of 15 patients with essential blepharospasm were studied. Previously unrecognized signs indicated multiple cranial nerve involvement. These signs include impersistence of gaze, lid retraction, tongue thrust, head tilts, head jerks, vertical gaze spasms, and asymmetry. The signs were also observed in a patient with bilateral blepharospasm who had a history of Bell's palsy, suggesting facial nerve injury as a possible factor in blepharospasm. The presence of these signs can be explained by known neural pathways, but the site, or sites, of the lesion remains obscure. These signs may be important in assessing severity and in treatment evaluation.

  9. Amblyopia Associated with Congenital Facial Nerve Paralysis.

    PubMed

    Iwamura, Hitoshi; Kondo, Kenji; Sawamura, Hiromasa; Baba, Shintaro; Yasuhara, Kazuo; Yamasoba, Tatsuya

    2016-01-01

    The association between congenital facial paralysis and visual development has not been thoroughly studied. Of 27 pediatric cases of congenital facial paralysis, we identified 3 patients who developed amblyopia, a visual acuity decrease caused by abnormal visual development, as comorbidity. These 3 patients had facial paralysis in the periocular region and developed amblyopia on the paralyzed side. They started treatment by wearing an eye patch immediately after diagnosis and before the critical visual developmental period; all patients responded to the treatment. Our findings suggest that the incidence of amblyopia in the cases of congenital facial paralysis, particularly the paralysis in the periocular region, is higher than that in the general pediatric population. Interestingly, 2 of the 3 patients developed anisometropic amblyopia due to the hyperopia of the affected eye, implying that the periocular facial paralysis may have affected the refraction of the eye through yet unspecified mechanisms. Therefore, the physicians who manage facial paralysis should keep this pathology in mind, and when they see pediatric patients with congenital facial paralysis involving the periocular region, they should consult an ophthalmologist as soon as possible. © 2016 S. Karger AG, Basel.

  10. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of the facial color diagnosis of Traditional Chinese Medicine, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was built from a group of 116 patients affected by 2 kinds of liver diseases and 29 healthy volunteers. Quantitative color features are extracted from facial images using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with accuracy higher than 73%.
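
    The classification step can be sketched as a plain k-nearest-neighbour vote. The two-dimensional colour features and labels below are invented for illustration; the paper's actual colour feature extraction is not reproduced here.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbour majority vote.

    train: list of (feature_vector, label) pairs, where the feature vector
    stands in for extracted facial colour features (hypothetical here).
    """
    # k closest training samples by Euclidean distance in feature space
    neighbours = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

    With well-separated colour clusters, a query near the "hepatitis" cluster is voted into that class by all k neighbours.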

  11. Measuring facial expression of emotion.

    PubMed

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were compromised by technical problems with visible video analysis and electromyography in experimental settings. These have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expression in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  12. Surgical Approaches to Facial Nerve Deficits

    PubMed Central

    Birgfeld, Craig; Neligan, Peter

    2011-01-01

    The facial nerve is one of the most commonly injured cranial nerves. Once injured, the effects on form, function, and psyche are profound. We review the anatomy of the facial nerve from the brain stem to its terminal branches. We also discuss the physical exam findings of facial nerve injury at various levels. Finally, we describe various reconstructive options for reanimating the face and restoring both form and function. PMID:22451822

  13. Subjective and objective evaluation of frontal smile esthetics in patients with facial asymmetry-a comparative cross-sectional study.

    PubMed

    Singh, H; Maurya, R K; Kapoor, P; Sharma, P; Srivastava, D

    2017-02-01

    To analyze the relationship between subjective and objective evaluations of pre-treatment posed smiles in patients with facial asymmetry and to assess the influence of dentofacial structures involved in asymmetry on the perception of smile attractiveness. Thirty-five patients (25 males and 10 females) between 18 and 25 years of age with facial asymmetry were selected. Pre-treatment clinical photographs of posed smiles were subjectively evaluated by a panel of 20 orthodontists, 20 oral surgeons, and 20 laypersons. A customized Smile Mesh program was used for objective evaluation of the same smiles. Direct comparison among three smile groups (unattractive, slightly attractive, and attractive) for different Smile Mesh measurements was carried out using a two-way ANOVA. Additionally, linear regression was performed to evaluate whether these measurements could predict the attractiveness of captured smiles. Patients with 'slightly attractive' smiles had a significantly greater distance between the incisal margin of the maxillary central incisor and the lower lip during smiling. The Smile Index was significantly greater in attractive smiles. However, based on the coefficients of linear regression, no objectively gathered measurement could predict smile attractiveness. Attractiveness or unattractiveness of smiles in patients with facial asymmetry could not be predicted by any measurement of Smile Mesh. The presence of facial asymmetry did not significantly influence the perception of smile esthetics. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: a training phase and a recognition phase. In the training stage, the system classifies all training expressions into 6 classes, one for each of the 6 emotions considered. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the resulting features to the trained neural architecture.
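
    A minimal sketch of the Gabor feature-extraction idea: build a complex 2-D Gabor kernel and take the response magnitude at a fiducial point. The parameter values are assumptions, not the paper's settings, and the FAP and neural-network stages are omitted.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex 2-D Gabor kernel as nested lists of (real, imag) pairs.
    theta rotates the sinusoidal carrier; sigma sets the Gaussian envelope."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotated coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            phase = 2 * math.pi * xr / wavelength
            row.append((g * math.cos(phase), g * math.sin(phase)))
        kernel.append(row)
    return kernel

def gabor_magnitude(image, cx, cy, kernel):
    """Magnitude of the Gabor response at fiducial point (cx, cy);
    image is a 2-D list of grayscale values."""
    half = len(kernel) // 2
    re = im = 0.0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            pix = image[cy + dy][cx + dx]
            kr, ki = kernel[dy + half][dx + half]
            re += pix * kr
            im += pix * ki
    return math.hypot(re, im)
```

    In a full pipeline, a bank of such kernels over several orientations and wavelengths would be evaluated at every fiducial point, and the magnitudes concatenated with the FAP values to form the feature vector.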

  15. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings like happiness, anger and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems remain to be fixed. In this paper, a few technologies for handling facial expression recognition under varying pose are summarized and analyzed, including a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning the input domain for classification, and robust-statistics face frontalization.

  16. Facial Feedback Mechanisms in Autistic Spectrum Disorders

    PubMed Central

    van den Heuvel, Claudia; Smeets, Raymond C.

    2008-01-01

    Facial feedback mechanisms of adolescents with Autistic Spectrum Disorders (ASD) were investigated utilizing three studies. Facial expressions, which became activated via automatic (Studies 1 and 2) or intentional (Study 2) mimicry, or via holding a pen between the teeth (Study 3), influenced corresponding emotions for controls, while individuals with ASD remained emotionally unaffected. Thus, individuals with ASD do not experience feedback from activated facial expressions as controls do. This facial feedback-impairment enhances our understanding of the social and emotional lives of individuals with ASD. PMID:18293075

  17. Facial Morphogenesis of the Earliest Europeans

    PubMed Central

    Lacruz, Rodrigo S.; de Castro, José María Bermúdez; Martinón-Torres, María; O’Higgins, Paul; Paine, Michael L.; Carbonell, Eudald; Arsuaga, Juan Luis; Bromage, Timothy G.

    2013-01-01

    The modern human face differs from that of our early ancestors in that the facial profile is relatively retracted (orthognathic). This change in facial profile is associated with a characteristic spatial distribution of bone deposition and resorption: growth remodeling. For humans, surface resorption commonly dominates on anteriorly-facing areas of the subnasal region of the maxilla and mandible during development. We mapped the distribution of facial growth remodeling activities on the 900–800 ky maxilla ATD6-69 assigned to H. antecessor, and on the 1.5 My cranium KNM-WT 15000, part of an associated skeleton assigned to African H. erectus. We show that, as in H. sapiens, H. antecessor shows bone resorption over most of the subnasal region. This pattern contrasts with that seen in KNM-WT 15000 where evidence of bone deposition, not resorption, was identified. KNM-WT 15000 is similar to Australopithecus and the extant African apes in this localized area of bone deposition. These new data point to diversity of patterns of facial growth in fossil Homo. The similarities in facial growth in H. antecessor and H. sapiens suggest that one key developmental change responsible for the characteristic facial morphology of modern humans can be traced back at least to H. antecessor. PMID:23762314

  18. Finding Makhubu: A morphological forensic facial comparison.

    PubMed

    Houlton, T M R; Steyn, M

    2018-04-01

    June 16, 1976, marks the Soweto Youth Student Uprising in South Africa. A harrowing image capturing police brutality from that day shows 18-year-old Mbuyisa Makhubu carrying a dying 12-year-old Hector Peterson. It circulated in the international press and contributed to world pressure against the apartheid government. It also elevated Makhubu's profile with the national security police and forced him to flee to Botswana, then Nigeria, before he disappeared in 1978. In 1988, Victor Vinnetou illegally entered Canada; he was later arrested on immigration charges in 2004. As he was evasive about his true identity, the Canadian Border Services Agency and Makhubu's family believe Vinnetou is Makhubu, linking them by a characteristic moon-shaped birthmark on his left chest. A DNA test, however, was inconclusive. With the mystery continuing after 40 years, Eye Witness News requested further investigation in 2016. Using a limited series of portrait images, a forensic facial comparison (FFC) was conducted utilising South African Police Service (SAPS) protocols and Facial Identification Scientific Working Group (FISWG) guidelines. The images provided presented a substantial time-lapse and generally low resolution, and were taken from irregular angles and distances, with different subject poses, orientations and environments. This enforced the use of a morphological analysis, a primary method of FFC that develops conclusions based on subjective observations. The results were fundamentally inconclusive, but multiple similarities and valid explanations for visible differences were identified. To enhance the investigation, visual evidence of the moon-shaped birthmark and further DNA analysis are required. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Peripheral facial weakness (Bell's palsy).

    PubMed

    Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida

    2013-06-01

    Peripheral facial weakness is a facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest of them are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy that is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis with complete recovery in about 80% of patients, 15% experience some mode of permanent nerve damage and severe consequences remain in 5% of patients.

  20. Comparison of Direct Side-to-End and End-to-End Hypoglossal-Facial Anastomosis for Facial Nerve Repair.

    PubMed

    Samii, Madjid; Alimohamadi, Maysam; Khouzani, Reza Karimi; Rashid, Masoud Rafizadeh; Gerganov, Venelin

    2015-08-01

    The hypoglossal-facial anastomosis (HFA) is the gold standard for facial reanimation in patients with severe facial nerve palsy. The major drawbacks of the classic HFA technique are lingual morbidities due to hypoglossal nerve transection. The side-to-end HFA is a modification of the classic technique with fewer tongue-related morbidities. In this study we compared the outcomes of the classic end-to-end and the direct side-to-end HFA surgeries performed at our center with regard to facial reanimation success rate and tongue-related morbidities. Twenty-six consecutive cases of HFA were enrolled. In 9 of them end-to-end anastomoses were performed, and 17 had direct side-to-end anastomoses. The House-Brackmann (HB) and Pitty and Tator (PT) scales were used to document surgical outcome. Hemiglossal atrophy, swallowing, and hypoglossal nerve function were assessed at follow-up. The original pathology was vestibular schwannoma in 15, meningioma in 4, brain stem glioma in 4, and other pathologies in 3. The mean interval between facial palsy and HFA was 18 months (range: 0-60). The median follow-up period was 20 months. The PT grade at follow-up was worse in patients with a longer interval between facial palsy and HFA (P value: 0.041). The lesion type was the only other factor that affected PT grade (the best results in vestibular schwannoma and the worst in the other pathologies group, P value: 0.038). The recovery period for facial tonicity was longer in patients with radiation therapy before HFA (13.5 vs. 8.5 months) and in those with a longer than 2-year interval from facial palsy to HFA (13.5 vs. 8.5 months). Although no significant difference between the side-to-end and the end-to-end groups was seen in terms of facial nerve functional recovery, patients in the side-to-end group had a significantly lower rate of lingual morbidities (tongue hemiatrophy: 100% vs. 5.8%, swallowing difficulty: 55% vs. 11.7%, speech disorder: 33% vs. 0%). With the side-to-end HFA

  1. Effect of neuromuscular electrical stimulation on facial muscle strength and oral function in stroke patients with facial palsy

    PubMed Central

    Choi, Jong-Bae

    2016-01-01

    [Purpose] The aim of this study was to investigate the effect of neuromuscular electrical stimulation on facial muscle strength and oral function in stroke patients with facial palsy. [Subjects and Methods] Nine subjects received the electrical stimulation and traditional dysphagia therapy. Electrical stimulation was applied to stimulate each subject’s facial muscles 30 minutes a day, 5 days a week, for 4 weeks. [Results] Subjects showed significant improvement in cheek and lip strength and oral function after the intervention. [Conclusion] This study demonstrates that electrical stimulation improves facial muscle strength and oral function in stroke patients with dysphagia. PMID:27799689

  2. Global velocity constrained cloud motion prediction for short-term solar forecasting

    NASA Astrophysics Data System (ADS)

    Chen, Yanjun; Li, Wei; Zhang, Chongyang; Hu, Chuanping

    2016-09-01

    Cloud motion is the primary cause of short-term solar power output fluctuation. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared to the widely used Particle Image Velocimetry (PIV) algorithm, which assumes the homogeneity of motion vectors, the proposed method can capture an accurate motion vector for each cloud block, including both the motional tendency and morphological changes. Specifically, the global velocity derived from PIV is first calculated, and then fine-grained cloud motion estimation is achieved by global-velocity-based cloud block searching and multi-scale cloud block matching. Experimental results show that the proposed global velocity constrained cloud motion prediction achieves comparable performance to the existing PIV and filtered PIV algorithms, especially over a short prediction horizon.
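
    The per-block search seeded by a coarse global velocity can be sketched as follows. This is a minimal illustration of the idea under our own assumptions (function names, block size, search radius, and the mean-absolute-difference cost are ours), not the authors' implementation.

```python
import numpy as np

def estimate_block_motion(prev, curr, block=16, search=4, global_v=(0, 0)):
    """Per-block cloud motion estimation seeded by a global velocity.

    `prev`/`curr` are 2D grayscale sky images; `global_v` is the coarse
    (dy, dx) motion (e.g. from PIV). Each block is matched inside a small
    window centred on its globally displaced position, so the search can
    pick up local deviations (morphological change) cheaply.
    """
    H, W = prev.shape
    gy, gx = int(round(global_v[0])), int(round(global_v[1]))
    vectors = {}
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            ref = prev[y:y + block, x:x + block]
            best, best_v = np.inf, (gy, gx)
            for dy in range(gy - search, gy + search + 1):
                for dx in range(gx - search, gx + search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        cand = curr[yy:yy + block, xx:xx + block]
                        cost = np.abs(ref - cand).mean()  # mean absolute difference
                        if cost < best:
                            best, best_v = cost, (dy, dx)
            vectors[(y, x)] = best_v  # best (dy, dx) for this block
    return vectors
```

    In this sketch a uniform shift of the whole cloud field is recovered exactly by the block whose search window stays inside the image; a real system would also handle window clipping at the borders and non-integer velocities.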

  3. Associations of physical strength with facial shape in an African pastoralist society, the Maasai of Northern Tanzania.

    PubMed

    Butovskaya, Marina L; Windhager, Sonja; Karelin, Dimitri; Mezentseva, Anna; Schaefer, Katrin; Fink, Bernhard

    2018-01-01

    Previous research has documented associations of physical strength and facial morphology predominantly in men of Western societies. Faces of strong men tend to be more robust, are rounder and have a prominent jawline compared with faces of weak men. Here, we investigate whether the morphometric patterns of strength-face relationships reported for members of industrialized societies can also be found in members of an African pastoralist society, the Maasai of Northern Tanzania. Handgrip strength (HGS) measures and facial photographs were collected from a sample of 185 men and 120 women of the Maasai in the Ngorongoro Conservation Area. In young adults (20-29 years; n = 95) and mid-adults (30-50 years; n = 114), we digitized 71 somatometric landmarks and semilandmarks to capture variation in facial morphology and performed shape regressions of landmark coordinates upon HGS. Results were visualized in the form of thin-plate spline deformation grids and geometric morphometric morphs. Individuals with higher HGS tended to have wider faces with a lower and broader forehead, a wider distance between the medial canthi of the eyes, a wider nose, fuller lips, and a larger, squarer lower facial outline compared with weaker individuals of the same age-sex group. In mid-adult men, these associations were weaker than in the other age-sex groups. We conclude that the patterns of HGS relationships with face shape in the Maasai are similar to those reported from related investigations in samples of industrialized societies. We discuss differences between the present and related studies with regard to knowledge about the causes for age- and sex-related facial shape variation and physical strength associations.
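
    The core of a shape regression of landmark coordinates on HGS is an ordinary least-squares fit of each coordinate on strength. The sketch below is a hedged simplification under our own naming (it omits the Procrustes superimposition and thin-plate spline visualization of the actual geometric morphometric pipeline).

```python
import numpy as np

def shape_regression(landmarks, hgs):
    """Regress landmark coordinates on handgrip strength (HGS).

    `landmarks`: (n_subjects, n_coords) flattened x/y coordinates,
    `hgs`: (n_subjects,) strength values. Returns the per-coordinate
    slopes, i.e. the direction of face-shape change per unit of HGS.
    """
    X = np.column_stack([np.ones_like(hgs), hgs])       # intercept + HGS
    beta, *_ = np.linalg.lstsq(X, landmarks, rcond=None)
    return beta[1]                                      # slope row only

def predict_shape(mean_shape, slopes, hgs_value, hgs_mean):
    """Shape predicted at a given HGS, as used for deformation-grid morphs."""
    return mean_shape + slopes * (hgs_value - hgs_mean)
```

    Regressing all coordinates jointly on one predictor keeps the fitted shape change a single consistent deformation, which is what the deformation grids visualize.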

  4. Cerebellum and processing of negative facial emotions: cerebellar transcranial DC stimulation specifically enhances the emotional recognition of facial anger and sadness.

    PubMed

    Ferrucci, Roberta; Giannicola, Gaia; Rosa, Manuela; Fumagalli, Manuela; Boggio, Paulo Sergio; Hallett, Mark; Zago, Stefano; Priori, Alberto

    2012-01-01

    Some evidence suggests that the cerebellum participates in the complex network processing emotional facial expression. To evaluate the role of the cerebellum in recognising facial expressions we delivered transcranial direct current stimulation (tDCS) over the cerebellum and prefrontal cortex. A facial emotion recognition task was administered to 21 healthy subjects before and after cerebellar tDCS; we also tested subjects with a visual attention task and a visual analogue scale (VAS) for mood. Anodal and cathodal cerebellar tDCS both significantly enhanced sensory processing in response to negative facial expressions (anodal tDCS, p=.0021; cathodal tDCS, p=.018), but left positive emotion and neutral facial expressions unchanged (p>.05). tDCS over the right prefrontal cortex left facial expressions of both negative and positive emotion unchanged. These findings suggest that the cerebellum is specifically involved in processing facial expressions of negative emotion.

  5. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy*

    PubMed Central

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-01-01

    BACKGROUND: Evaluating the function of facial nerve is essential in order to determine the influences of various treatment methods. The aim of this study was to evaluate and assess the agreement of Photoshop scaling system versus the facial grading system (FGS). METHODS: In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after the treatment was performed by FGS and Photoshop measurements. RESULTS: The mean values of FGS before and after the treatment were 35 ± 25 and 67 ± 24, respectively (p < 0.001). In Photoshop assessment, mean changes of face expressions in the impaired side relative to the normal side in rest position and three main movements of the face were 3.4 ± 0.55 and 4.04 ± 0.49 millimeter before and after the treatment, respectively (p < 0.001). Spearman's correlation coefficient between different values in the two methods was 0.66 (p < 0.001). CONCLUSIONS: Evaluating the facial nerve palsy using Photoshop was more objective than using FGS. Therefore, it may be recommended to use this method instead. PMID:22973325
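
    The agreement statistic reported above, Spearman's correlation between the FGS scores and the Photoshop measurements, is simply the Pearson correlation of the ranks. A minimal sketch (our own helper, assuming no tied scores):

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Suitable for checking agreement between two grading methods,
    e.g. FGS scores vs. Photoshop millimetre measurements.
    """
    def rank(x):
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(1, len(x) + 1)
        return r  # note: no tie correction in this sketch

    ra, rb = rank(np.asarray(a, float)), rank(np.asarray(b, float))
    return np.corrcoef(ra, rb)[0, 1]
```

    In practice `scipy.stats.spearmanr` handles ties and also returns the p-value reported in the abstract.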

  6. Strong ground motion prediction using virtual earthquakes.

    PubMed

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion.

  7. Tongue Motion Averaging from Contour Sequences

    ERIC Educational Resources Information Center

    Li, Min; Kambhamettu, Chandra; Stone, Maureen

    2005-01-01

    In this paper, a method to get the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech captured at different times by sequence of ultrasound images and is composed of a set of 2D spatio-temporal contours. These 2D contours in different repetitions are time aligned…

  8. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
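
    The histogram-stretching idea can be sketched with a percentile-based contrast stretch. This is a deliberately simplified, isotropic stand-in for the paper's anisotropic algorithm; the function name and percentile choices are our own assumptions.

```python
import numpy as np

def stretch_histogram(texture, low_pct=1, high_pct=99):
    """Percentile-based histogram stretch of a facial-texture image.

    Expands the texture's dynamic range to [0, 1] while clipping the
    extreme tails, which mostly encode residual illumination rather
    than discriminative skin texture.
    """
    lo, hi = np.percentile(texture, [low_pct, high_pct])
    out = (texture - lo) / max(hi - lo, 1e-8)  # guard against flat input
    return np.clip(out, 0.0, 1.0)
```

    Clipping at percentiles rather than the raw min/max keeps a few saturated or shadowed pixels from compressing the range available to the rest of the texture.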

  9. Facial morphology and children's categorization of facial expressions of emotions: a comparison between Asian and Caucasian faces.

    PubMed

    Gosselin, P; Larocque, C

    2000-09-01

    The effects of Asian and Caucasian facial morphology were examined by having Canadian children categorize pictures of facial expressions of basic emotions. The pictures were selected from the Japanese and Caucasian Facial Expressions of Emotion set developed by D. Matsumoto and P. Ekman (1989). Sixty children between the ages of 5 and 10 years were presented with short stories and an array of facial expressions, and were asked to point to the expression that best depicted the specific emotion experienced by the characters. The results indicated that expressions of fear and surprise were better categorized from Asian faces, whereas expressions of disgust were better categorized from Caucasian faces. These differences originated in some specific confusions between expressions.

  10. Power estimation of martial arts movement with different physical, mood, and behavior using motion capture camera

    NASA Astrophysics Data System (ADS)

    Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir; Azraai, Nur Zaidi

    2017-07-01

    In the Malay world, spirit rituals are traditionally used in healing practices and in everyday life. Malay martial arts (silat) are no exception: some branches of silat maintain spirit rituals that practitioners believe help them in combat. In this paper we used no ritual; instead, we applied a topical medicine and changed the environment while the subjects performed. Two performers (fighters) were selected, one with martial arts training experience and one without. A motion capture (MOCAP) camera system was used to observe and analyze the movements: 8 cameras were placed in the MOCAP room, 2 on each wall, facing the center of the room so that every angle was covered. This helps prevent loss of detection of the markers stamped on the performers' limbs. Passive markers were used, reflecting infrared light back to the camera sensors; the infrared is generated by a source around each camera lens. A 60 kg punching bag hung from an iron bar served as the target for the performers' punches. Markers were also stamped on the punching bag so that its swing when struck could be measured. Each performer performed 2 moves per condition with the same position and posture. After every 2 moves, the environment was changed without the performer's knowledge: the first 2 punches were thrown in a normal environment; in the second part, positive music was played to change the performer's mood; and in the third part, a medicinal cream/oil that makes the skin feel slightly hot was applied. The process was repeated with the inexperienced performer. The marker positions were analyzed with the Cortex Motion Analysis software, from which the performer's kinetics and kinematics were estimated. The results show an increase in kinetics for every part due to the environmental changes, and different results for the 2
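
    Estimating kinematics and kinetics from marker trajectories reduces to finite differences on the recorded positions. The sketch below uses our own illustrative values (fps, mass, function name); it is not taken from the study or from the Cortex software.

```python
import numpy as np

def marker_kinematics(positions, fps=120.0, mass=70.0):
    """Finite-difference kinematics from a motion-capture marker track.

    `positions`: (n_frames, 3) marker trajectory in metres. Returns the
    per-interval speed and kinetic energy (0.5 * m * v^2), the kind of
    quantities compared across the performers' punches.
    """
    dt = 1.0 / fps
    vel = np.diff(positions, axis=0) / dt   # (n_frames - 1, 3) velocity
    speed = np.linalg.norm(vel, axis=1)     # scalar speed per interval
    kinetic = 0.5 * mass * speed ** 2       # kinetic energy per interval
    return speed, kinetic
```

    Real mocap data would first be low-pass filtered, since differentiation amplifies marker jitter.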

  11. Hereditary family signature of facial expression

    PubMed Central

    Peleg, Gili; Katzir, Gadi; Peleg, Ofer; Kamara, Michal; Brodsky, Leonid; Hel-Or, Hagit; Keren, Daniel; Nevo, Eviatar

    2006-01-01

    Although facial expressions of emotion are universal, individual differences create a facial expression “signature” for each person; but is there a unique family facial expression signature? Only a few family studies on the heredity of facial expressions have been performed, none of which compared the gestalt of movements in various emotional states; they compared only a few movements in one or two emotional states. No studies, to our knowledge, have compared the movements of congenitally blind subjects with those of their relatives. Using two types of analyses, we show a correlation between the movements of congenitally blind subjects and those of their relatives in think-concentrate, sadness, anger, disgust, joy, and surprise, and provide evidence for a unique family facial expression signature. In the “in-out family test” analysis, a particular movement was compared each time across subjects. Results show that the frequency of occurrence of a movement of a congenitally blind subject within his family is significantly higher than outside of his family in think-concentrate, sadness, and anger. In the “classification test” analysis, in which congenitally blind subjects were classified to their families according to the gestalt of movements, results show 80% correct classification over the entire interview and 75% in anger. Analysis of the movement frequencies in anger revealed a correlation between the movement frequencies of congenitally blind individuals and those of their relatives. This study anticipates discovering genes that influence facial expressions, understanding their evolutionary significance, and elucidating repair mechanisms for syndromes lacking facial expression, such as autism. PMID:17043232

  12. Facial nerve mapping and monitoring in lymphatic malformation surgery.

    PubMed

    Chiara, Joseph; Kinney, Greg; Slimp, Jefferson; Lee, Gi Soo; Oliaei, Sepehr; Perkins, Jonathan A

    2009-10-01

    To establish the efficacy of preoperative facial nerve mapping and continuous intraoperative EMG monitoring in protecting the facial nerve during resection of cervicofacial lymphatic malformations. Retrospective study in which patients were clinically followed for at least 6 months postoperatively, and long-term outcome was evaluated. Patient demographics and lesion characteristics (i.e., size, stage, location) were recorded. Operative notes revealed surgical techniques, findings, and complications. Preoperative and short-/long-term postoperative facial nerve function was standardized using the House-Brackmann Classification. Mapping was done prior to incision by percutaneously stimulating the facial nerve and its branches and recording the motor responses. Intraoperative monitoring and mapping were accomplished using a four-channel, free-running EMG. Neurophysiologists continuously monitored EMG responses and blindly analyzed intraoperative findings and final EMG interpretations for abnormalities. Seven patients collectively underwent 8 lymphatic malformation surgeries. Median age was 30 months (2-105 months). Lymphatic malformation diagnosis was recorded in 6/8 surgeries. Facial nerve function was House-Brackmann grade I in 8/8 cases preoperatively. The facial nerve was abnormally elongated in 1/8 cases. EMG monitoring recorded abnormal activity in 4/8 cases: two suggesting facial nerve irritation, and two suggesting possible facial nerve damage. Transient or long-term facial nerve paresis occurred in 1/8 cases (House-Brackmann grade II). Preoperative facial nerve mapping combined with continuous intraoperative EMG and mapping is a successful method of identifying the facial nerve course and protecting it from injury during resection of cervicofacial lymphatic malformations involving the facial nerve.

  13. Influence of facial convexity on facial attractiveness in Japanese.

    PubMed

    Ioi, H; Nakata, S; Nakasima, A; Counts, Al

    2007-11-01

    The purpose of this study was to assess and determine the range of the top three most-favored facial profiles for each sex from a series of varying facial convexity, and to evaluate the clinically acceptable facial profiles for Japanese adults. Questionnaire-based study. Silhouettes of average male and female profiles were constructed from the profiles of 30 Japanese males and females with normal occlusions. Chin positions were protruded or retruded by 2, 4, 6, 8 and 10 degrees, respectively, from the average profile. Forty-one orthodontists and 50 dental students were asked to select the three most-favored profiles for each sex, and they were also asked to indicate whether they would seek surgical orthodontic treatment if that image represented their own profile. For males, both the orthodontists and dental students chose the average profile as the most-favored profile. For females, both the orthodontists and dental students chose a slightly more retruded chin position as the most-favored profile. Japanese raters tended to judge class II profiles as more acceptable than class III profiles for both males and females. These findings suggest that Japanese patients with class III profiles tend to seek surgical orthodontic treatment more often.

  14. Facial paralysis due to an occult parotid abscess.

    PubMed

    Orhan, Kadir Serkan; Demirel, Tayfun; Kocasoy-Orhan, Elif; Yenigül, Kubilay

    2008-01-01

    Facial paralysis associated with benign diseases of the parotid gland is very rare. It has been reported in approximately 16 cases of acute suppurative parotitis or parotid abscess. We present a 45-year-old woman who developed facial paralysis secondary to an occult parotid abscess. Initially there was no facial paralysis, and the signs and symptoms were suggestive of acute parotitis, for which medical treatment was initiated. Three days later, left-sided facial palsy of House-Brackmann (HB) grade 5 developed. Ultrasonography revealed a pretragal, hypoechoic mass, 10x8 mm in size, causing inflammation in the surrounding tissue. Fine needle aspiration biopsy obtained from the mass revealed polymorphonuclear leukocytes and lymphocytes. No malignant cells were observed. The lesion was diagnosed as an occult parotid abscess. After a week, the mass disappeared and the facial paralysis improved to HB grade 4. At the end of the first month, the facial paralysis improved to HB grade 1. At three months, facial nerve function was nearly normal.

  15. Electrical and transcranial magnetic stimulation of the facial nerve: diagnostic relevance in acute isolated facial nerve palsy.

    PubMed

    Happe, Svenja; Bunten, Sabine

    2012-01-01

    Unilateral facial weakness is common. Transcranial magnetic stimulation (TMS) allows identification of a conduction failure at the level of the canalicular portion of the facial nerve and may help to confirm the diagnosis. We retrospectively analyzed 216 patients with the diagnosis of peripheral facial palsy. The electrophysiological investigations included the blink reflex, preauricular electrical stimulation and the response to TMS at the labyrinthine part of the canalicular portion of the facial nerve within 3 days after symptom onset. A similar reduction or loss of the TMS amplitude (p < 0.005) on the affected side was seen in each patient group. Of the 216 patients (107 female, mean age 49.7 ± 18.0 years), 193 were diagnosed with Bell's palsy. Test results of the remaining patients led to the diagnosis of infectious [including herpes simplex, varicella zoster infection and borreliosis (n = 13)] and noninfectious [including diabetes and neoplasm (n = 10)] etiology. A conduction block in TMS supports the diagnosis of peripheral facial palsy without being specific for Bell's palsy. These data shed light on the TMS-based diagnosis of peripheral facial palsy and its ability to localize the site of the lesion within the fallopian canal regardless of the underlying pathology. Copyright © 2012 S. Karger AG, Basel.

  16. The neurosurgical treatment of neuropathic facial pain.

    PubMed

    Brown, Jeffrey A

    2014-04-01

    This article reviews the definition, etiology and evaluation, and medical and neurosurgical treatment of neuropathic facial pain. A neuropathic origin for facial pain should be considered when evaluating a patient for rhinologic surgery because of complaints of facial pain. Neuropathic facial pain is caused by vascular compression of the trigeminal nerve in the prepontine cistern and is characterized by an intermittent prickling or stabbing component or a constant burning, searing pain. Medical treatment consists of anticonvulsant medication. Neurosurgical treatment may require microvascular decompression of the trigeminal nerve. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Small vestibular schwannomas presenting with facial nerve palsy.

    PubMed

    Espahbodi, Mana; Carlson, Matthew L; Fang, Te-Yung; Thompson, Reid C; Haynes, David S

    2014-06-01

    To describe the surgical management and convalescence of two patients presenting with severe facial nerve weakness associated with small intracanalicular vestibular schwannomas (VS). Retrospective review. Two adult female patients presenting with audiovestibular symptoms and subacute facial nerve paralysis (House-Brackmann Grade IV and V). In both cases, post-contrast T1-weighted magnetic resonance imaging revealed an enhancing lesion within the internal auditory canal without lateral extension beyond the fundus. Translabyrinthine exploration demonstrated vestibular nerve origin of the tumor, extrinsic to the facial nerve, and frozen section pathology confirmed schwannoma. Gross total tumor resection with VIIth cranial nerve preservation and decompression of the labyrinthine segment of the facial nerve was performed. Both patients recovered full motor function between 6 and 8 months after surgery. Although rare, small VS may cause severe facial neuropathy, mimicking the presentation of facial nerve schwannomas and other less common pathologies. In the absence of labyrinthine extension on MRI, surgical exploration is the only reliable means of establishing a diagnosis. In the case of confirmed VS, early gross total resection with facial nerve preservation and labyrinthine segment decompression may afford full motor recovery, an outcome that cannot be achieved with facial nerve grafting.

  18. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    PubMed Central

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others’ facial expressions, mirroring or matching those of the interaction partner. Recent evidence suggests that mimicry may not be only an automatic reaction but could depend on many factors, including social context, the type of task in which the participant is engaged, or stimulus properties (dynamic vs. static presentation). In the present study, we investigated the impact of dynamic facial expressions and sex differences on facial mimicry and the judgment of emotional intensity. Electromyographic activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. Ratings of the emotional intensity of the facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than to static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing. PMID:27390867

  19. Facial recognition in children after perinatal stroke.

    PubMed

    Ballantyne, A O; Trauner, D A

    1999-04-01

    To examine the effects of prenatal or perinatal stroke on the facial recognition skills of children and young adults. It was hypothesized that the nature and extent of facial recognition deficits seen in patients with early-onset lesions would be different from that seen in adults with later-onset neurologic impairment. Numerous studies with normal and neurologically impaired adults have found a right-hemisphere superiority for facial recognition. In contrast, little is known about facial recognition in children after early focal brain damage. Forty subjects had single, unilateral brain lesions from pre- or perinatal strokes (20 had left-hemisphere damage, and 20 had right-hemisphere damage), and 40 subjects were controls who were individually matched to the lesion subjects on the basis of age, sex, and socioeconomic status. Each subject was given the Short-Form of Benton's Test of Facial Recognition. Data were analyzed using the Wilcoxon matched-pairs signed-rank test and multiple regression. The lesion subjects performed significantly more poorly than did matched controls. There was no clear-cut lateralization effect, with the left-hemisphere group performing significantly more poorly than matched controls and the right-hemisphere group showing a trend toward poorer performance. Parietal lobe involvement, regardless of lesion side, adversely affected facial recognition performance in the lesion group. Results could not be accounted for by IQ differences between lesion and control groups, nor was lesion severity systematically related to facial recognition performance. Pre- or perinatal unilateral brain damage results in a subtle disturbance in facial recognition ability, independent of the side of the lesion. Parietal lobe involvement, in particular, has an adverse effect on facial recognition skills. These findings suggest that the parietal lobes may be involved in the acquisition of facial recognition ability from a very early point in brain development, but

  20. Facial palsy after dental procedures - Is viral reactivation responsible?

    PubMed

    Gaudin, Robert A; Remenschneider, Aaron K; Phillips, Katie; Knipfer, Christian; Smeets, Ralf; Heiland, Max; Hadlock, Tessa A

    2017-01-01

    Herpes labialis viral reactivation has been reported following dental procedures, but the incidence, characteristics and outcomes of delayed peripheral facial nerve palsy following dental work are poorly understood. Herein we describe the unique features of delayed facial paresis following dental procedures. An institutional retrospective review was performed to identify patients diagnosed with delayed facial nerve palsy within 30 days of dental manipulation. Demographics, prodromal signs and symptoms, initial medical treatment and outcomes were assessed. Of 2471 patients with facial palsy, 16 (0.7%) had delayed facial paresis following ipsilateral dental procedures. Average age at presentation was 44 years, and 56% (9/16) were female. Clinical evaluation was consistent with Bell's palsy in 14 (88%) and Ramsay-Hunt syndrome in 2 patients (12%). Patients developed facial paresis an average of 3.9 days after the dental procedure, with all individuals developing a flaccid paralysis (House-Brackmann (HB) grade VI) during the acute stage. 50% of patients developed persistent facial palsy in the form of non-flaccid facial paralysis (HB III-IV). Facial palsy, like herpes labialis, can occur in the days following dental procedures and may also be related to viral reactivation. In this small cohort, long-term facial outcomes appear worse than for spontaneous Bell's palsy. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.