ERIC Educational Resources Information Center
Al-Balushi, Sulaiman M.; Coll, Richard Kevin
2013-01-01
The current study compared different learners' static and dynamic mental images of unseen scientific species and processes in relation to their spatial ability. Learners were classified as verbal, visual, or schematic. Dynamic images were classified into appearing/disappearing, linear movement, and rotation. Two types of scientific entities and…
Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune
2015-01-01
When observers perceive several objects in a space at the same time, they must also effectively perceive their own position as a viewpoint. However, little is known about observers' percepts of their own spatial location based on the visual scene information viewed from that position. Previous studies indicate that two distinct visual spatial processes exist in locomotion situations: egocentric position perception and egocentric direction perception. Those studies examined these perceptions in information-rich visual environments in which much dynamic and static visual information was available. This study examined the two perceptions in information-impoverished environments containing only static lane-edge information (i.e., limited information). We investigated the visual factors associated with static lane-edge information that may affect these perceptions, focusing on two factors. One is the "uprightness factor": "far" visual information appears at a higher location in the visual field than "near" visual information. The other is the "central vision factor": observers usually look at "far" visual information with central (i.e., foveal) vision and at "near" visual information with peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images in which the upper half of the normal image was presented below the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is disrupted by image inversion and image transposition, whereas egocentric position perception is robust to these image transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception.
Therefore, the two visual spatial perceptions about observers’ own viewpoints are fundamentally dissociable. PMID:26648895
Sciammarella, Maria; Shrestha, Uttam M; Seo, Youngho; Gullberg, Grant T; Botvinick, Elias H
2017-08-03
SPECT myocardial perfusion imaging (MPI) is a clinical mainstay that is typically performed with static imaging protocols and visually or semi-quantitatively assessed for perfusion defects based upon the relative intensity of myocardial regions. Dynamic cardiac SPECT is a newer imaging technique based on time-varying information about radiotracer distribution, which permits the evaluation of regional myocardial blood flow (MBF) and coronary flow reserve (CFR). In this work, a preliminary feasibility study was conducted in a small patient sample to implement a combined static-dynamic, single-dose, one-day-visit imaging protocol and to compare quantitative dynamic SPECT with conventional static SPECT for improving the diagnosis of coronary artery disease (CAD). Fifteen patients (11 men, 4 women; mean age 71 ± 9 years) were enrolled in a combined dynamic and static SPECT (Infinia Hawkeye 4, GE Healthcare) imaging protocol with a single dose of 99mTc-tetrofosmin administered at rest and a single dose administered at stress during a one-day visit. Of the 15 patients, 11 had selective coronary angiography (SCA), 8 within 6 months and the rest within 24 months of SPECT imaging, without intervening symptoms or interventions. The extent and severity of perfusion defects in each myocardial region were graded visually. Dynamically acquired data were also used to estimate MBF and CFR. Both the visually graded images and the estimated CFR were tested against SCA as a reference to evaluate the validity of the methods. Overall, conventional static SPECT was normal in 10 patients and abnormal in 5, dynamic SPECT was normal in 12 and abnormal in 3, and CFR from dynamic SPECT was normal in 9 and abnormal in 6. Among the 11 patients with SCA, conventional SPECT was normal in 5, of whom 3 had documented CAD on SCA, yielding an overall accuracy of 64%, sensitivity of 40%, and specificity of 83%.
Dynamic SPECT image analysis produced a similar accuracy, sensitivity, and specificity. CFR was normal in 6 patients; abnormal CFR corresponded to CAD on SCA, with an overall accuracy of 91%, sensitivity of 80%, and specificity of 100%. The mean CFR was significantly lower in patients with abnormal SCA than in those with normal SCA (1.94 ± 0.67 vs 3.86 ± 1.06, P < 0.001). The visually assessed image findings in static and dynamic SPECT are subjective and may not reflect direct physiologic measures of coronary lesions as assessed by SCA. The CFR measured with dynamic SPECT is fully objective, offers better sensitivity and specificity, and is available only from the data generated by the dynamic SPECT method.
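For readers unfamiliar with the reported metrics, accuracy, sensitivity, and specificity follow directly from a 2 × 2 confusion table against the SCA reference. The sketch below uses hypothetical counts chosen only to be consistent with the reported CFR performance (accuracy 91%, sensitivity 80%, specificity 100% among 11 patients); they are not the study's tabulated data.

```python
# Hypothetical confusion-table counts, illustrative only.

def diagnostic_metrics(tp, fp, tn, fn):
    """Return (accuracy, sensitivity, specificity) as fractions."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate vs the reference
    specificity = tn / (tn + fp)   # true-negative rate vs the reference
    return accuracy, sensitivity, specificity

acc, sens, spec = diagnostic_metrics(tp=4, fp=0, tn=6, fn=1)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
# accuracy=0.91 sensitivity=0.80 specificity=1.00
```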
Coventry, Kenny R; Christophel, Thomas B; Fehr, Thorsten; Valdés-Conroy, Berenice; Herrmann, Manfred
2013-08-01
When looking at static visual images, people often exhibit mental animation, anticipating visual events that have not yet happened. But what determines when mental animation occurs? Measuring mental animation using localized brain function (visual motion processing in the middle temporal and middle superior temporal areas, MT+), we demonstrated that animating static pictures of objects is dependent both on the functionally relevant spatial arrangement that objects have with one another (e.g., a bottle above a glass vs. a glass above a bottle) and on the linguistic judgment to be made about those objects (e.g., "Is the bottle above the glass?" vs. "Is the bottle bigger than the glass?"). Furthermore, we showed that mental animation is driven by functional relations and language separately in the right hemisphere of the brain but conjointly in the left hemisphere. Mental animation is not a unitary construct; the predictions humans make about the visual world are driven flexibly, with hemispheric asymmetry in the routes to MT+ activation.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan
2017-05-01
In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large images and videos. The system employs the Rapid Serial Visual Presentation (RSVP)-based EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. It works by first computing a motion surprise map over image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label each chip as either "static" or "moving". This label tells the system whether to use a static or video RSVP presentation and decoding algorithm, optimizing EEG-based detection of IOI in each chip. Using this method, we demonstrate classification of a series of image regions from video with an Az value (area under the ROC curve) of 1, indicating perfect classification, over a range of display frequencies and video speeds.
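The chip-labeling step can be sketched with a simple stand-in for the motion surprise map: mean absolute frame difference per chip, thresholded into "static"/"moving". The chip size and threshold below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def label_chips(prev_frame, frame, chip=8, thresh=10.0):
    """Label each chip 'moving' or 'static' from the mean absolute
    frame difference (a stand-in for a motion surprise map)."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    labels = {}
    for y in range(0, h, chip):
        for x in range(0, w, chip):
            score = diff[y:y + chip, x:x + chip].mean()
            labels[(y, x)] = "moving" if score > thresh else "static"
    return labels

prev = np.zeros((16, 16), dtype=np.uint8)
cur = prev.copy()
cur[0:8, 0:8] = 200             # strong change confined to the top-left chip
print(label_chips(prev, cur))   # top-left chip labeled "moving", rest "static"
```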
Visual pattern image sequence coding
NASA Technical Reports Server (NTRS)
Silsbee, Peter; Bovik, Alan C.; Chen, Dapang
1990-01-01
The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed those of all other technologies. These capabilities are associated with unprecedented coding efficiencies: coding and decoding operations are entirely linear with respect to image size and are 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along the additional, temporal dimension, to achieve unprecedented image sequence coding performance.
ERIC Educational Resources Information Center
Price, Norman T.
2013-01-01
The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active…
NASA Technical Reports Server (NTRS)
Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.
1992-01-01
This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
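The pyramid step described above can be illustrated as a single level of a 2D Haar decomposition, splitting each of the three opponent images into an approximation band and three direction-specific detail bands. This is a minimal sketch, not the paper's Mathematica implementation; the averaging normalization is an assumption.

```python
import numpy as np

def haar_level(img):
    """One level of a 2D Haar decomposition: a low-pass approximation
    plus horizontal, vertical, and diagonal detail bands."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    low = (a + b + c + d) / 4.0   # approximation
    dh = (a + b - c - d) / 4.0    # horizontal detail
    dv = (a - b + c - d) / 4.0    # vertical detail
    dd = (a - b - c + d) / 4.0    # diagonal detail
    return low, dh, dv, dd

img = np.ones((4, 4))
low, dh, dv, dd = haar_level(img)
# a constant image has zero energy in every detail band
```

In the full model, each detail band would then be weighted by the contrast sensitivity of the matching chromatic or luminance mechanism before computing detectability.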
Visual processing of moving and static self body-parts.
Frassinetti, Francesca; Pavani, Francesco; Zamagni, Elisa; Fusaroli, Giulia; Vescovi, Massimo; Benassi, Mariagrazia; Avanzi, Stefano; Farnè, Alessandro
2009-07-01
Humans' ability to recognize static images of self body-parts can be lost following a lesion of the right hemisphere [Frassinetti, F., Maini, M., Romualdi, S., Galante, E., & Avanzi, S. (2008). Is it mine? Hemispheric asymmetries in corporeal self-recognition. Journal of Cognitive Neuroscience, 20, 1507-1516]. Here we investigated whether the visual information provided by the movement of self body-parts may be separately processed by right brain-damaged (RBD) patients and constitute a valuable cue to reduce their deficit in self body-parts processing. To pursue these aims, neurologically healthy subjects and RBD patients performed a matching task on pairs of successively presented visual stimuli, in two conditions. In the dynamic condition, participants were shown movies of moving body-parts (hand, foot, arm and leg); in the static condition, participants were shown still images of the same body-parts. In each condition, on half of the trials at least one stimulus in the pair was from the participant's own body ('Self' condition), whereas on the remaining half of the trials both stimuli were from another person ('Other' condition). Results showed that in healthy participants the self-advantage was present when processing both static and dynamic body-parts, but was larger in the latter condition. In RBD patients, however, the self-advantage was absent in the static condition but present in the dynamic body-parts condition. These findings suggest that visual information from self body-parts in motion may be processed independently in patients with impaired static self-processing, thus pointing to a modular organization of the mechanisms responsible for the self/other distinction.
Visualizing Clonal Evolution in Cancer.
Krzywinski, Martin
2016-06-02
Rapid and inexpensive single-cell sequencing is driving new visualizations of cancer instability and evolution. Krzywinski discusses how to present clone evolution plots in order to visualize temporal, phylogenetic, and spatial aspects of a tumor in a single static image. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.
2014-11-01
The objective of this multiple-subject functional magnetic resonance imaging (fMRI) study was to identify the common brain areas activated when viewing black-and-white checkerboard stimuli of various shapes, patterns and sizes, and to investigate the specific brain areas involved in processing static and moving visual stimuli. Sixteen participants viewed moving (expanding ring, rotating wedge, flipping hourglass/bowtie, and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to search separately for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas, mostly in the left hemisphere. Activation in the right middle temporal gyrus (MTG) was significantly higher for moving visual stimuli than for the static stimulus. In contrast, activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus than for moving stimuli. Visual stimulation of various shapes, patterns and sizes used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas involved in processing static visual stimuli.
Ergonomics for Online Searching.
ERIC Educational Resources Information Center
Wright, Carol; Friend, Linda
1992-01-01
Describes factors to be considered in the design of ergonomically correct workstations for online searchers. Topics discussed include visual factors, including lighting; acoustical factors; radiation and visual display terminals (VDTs); screen image characteristics; static electricity; hardware and equipment; workstation configuration; chairs;…
Jenkin, Michael R; Dyde, Richard T; Jenkin, Heather L; Zacher, James E; Harris, Laurence R
2011-01-01
The perceived direction of up depends on both gravity and visual cues to orientation. Static visual cues to orientation have been shown to be less effective in influencing the perception of upright (PU) under microgravity conditions than they are on earth (Dyde et al., 2009). Here we introduce dynamic orientation cues into the visual background to ascertain whether they might increase the effectiveness of visual cues in defining the PU under different gravity conditions. Brief periods of microgravity and hypergravity were created using parabolic flight. Observers viewed a polarized, natural scene presented at various orientations on a laptop viewed through a hood which occluded all other visual cues. The visual background was either an animated video clip in which actors moved along the visual ground plane or an individual static frame taken from the same clip. We measured the perceptual upright using the oriented character recognition test (OCHART). Dynamic visual cues significantly enhance the effectiveness of vision in determining the perceptual upright under normal gravity conditions. Strong trends were found for dynamic visual cues to produce an increase in the visual effect under both microgravity and hypergravity conditions.
Determination of Visual Figure and Ground in Dynamically Deforming Shapes
ERIC Educational Resources Information Center
Barenholtz, Elan; Feldman, Jacob
2006-01-01
Figure/ground assignment--determining which part of the visual image is foreground and which background--is a critical step in early visual analysis, upon which much later processing depends. Previous research on the assignment of figure and ground to opposing sides of a contour has almost exclusively involved static geometric factors--such as…
Inferring the direction of implied motion depends on visual awareness
Faivre, Nathan; Koch, Christof
2014-01-01
Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951
NASA Astrophysics Data System (ADS)
Garcia-Belmonte, Germà
2017-06-01
Spatial visualization is a well-established topic of education research that has allowed improving science and engineering students' skills on spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations, as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments and the image must be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking, and scientific-instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific, culturally favored view of time. It is suggested that difficulties in interpreting time-varying signals are linked to the existing dual perception of conflicting time metaphors. The activation of a specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I identify two ways of understanding time that students encounter along different educational trajectories. Interestingly, specific displaying instruments belonging to different cultural traditions incorporate contrasting time views.
One of them sees time in terms of a dynamic metaphor: a static observer looking at passing events. This is a general and widespread practice in contemporary mass culture, which lies behind the process of making sense of moving images, usually visualized by means of movie shots. In contrast, scientific culture has favored another conceptualization of time (the static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, represented on the Cartesian plane, into displaying instruments. Both types of culture, scientific and mass, are highly technological in the sense that complex instruments, apparatus, or machines participate in their visual practices.
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging (fMRI) study demonstrating activation of the human extrastriate motion-sensitive cortex by static images that imply motion through instability. We used static line-drawing cartoons of humans by Katsushika Hokusai (the 'Hokusai Manga'), an outstanding Japanese cartoonist and famous Ukiyo-e artist. We found that 'Hokusai Manga' implying motion, by depicting human bodies engaged in challenging tonic postures, significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, whereas illustrations that do not imply motion, of either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion from instability.
Visual saliency in MPEG-4 AVC video stream
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.
2015-03-01
Visual saliency maps have already proved their efficiency in a large variety of image/video communication applications, from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (such as color, intensity, orientation, and motion) computed from the pixel representation of the visual content. This paper summarizes and extends our previous work devoted to the definition of a saliency map extracted solely from the MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of static and dynamic maps; the static saliency map is in turn a combination of intensity, color, and orientation feature maps. Beyond the particular way in which these elementary maps are computed, the fusion techniques that combine them play a critical role in the final result and are the object of the present study. A total of 48 fusion formulas (6 for combining static features and, for each of them, 8 for combining static with dynamic features) are investigated. The performance of the resulting maps is evaluated on a public database organized at IRCCyN by computing two objective metrics: the Kullback-Leibler divergence and the area under the curve (AUC).
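As an illustration of the evaluation metric and of one possible fusion rule, the sketch below computes the Kullback-Leibler divergence between two saliency maps normalized to probability distributions. The weighted-mean fusion is only a placeholder: the paper investigates 48 formulas, and this particular one is an assumption for illustration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two saliency maps,
    each normalized to sum to 1; eps guards against log(0)."""
    p = p.ravel() / p.sum()
    q = q.ravel() / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def fuse(static_map, dynamic_map, alpha=0.5):
    """One illustrative fusion rule: a weighted mean of the maps."""
    return alpha * static_map + (1 - alpha) * dynamic_map

rng = np.random.default_rng(0)
s = rng.random((4, 4))          # stand-in static saliency map
d = rng.random((4, 4))          # stand-in dynamic saliency map
f = fuse(s, d)
print(kl_divergence(f, f))      # identical maps -> divergence 0.0
```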
Scene and human face recognition in the central vision of patients with glaucoma
Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole
2018-01-01
Primary open-angle glaucoma (POAG) primarily affects peripheral vision. Current behavioral studies support the idea that the visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess the visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants performed two visual tasks in which low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities, while a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity assessed by perimetry. However, the deficit was greater for categorization than for detection. Patients without a central defect showed performance similar to the controls in the detection and categorization of faces. However, while the detection of scene images was well maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision can be detrimental to scene recognition, even when the information is displayed in central vision.
This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using Humphrey 24–2 SITA-Standard test. PMID:29481572
Reliability and validity of the Microsoft Kinect for evaluating static foot posture
2013-01-01
Background The evaluation of foot posture in a clinical setting is useful to screen for potential injury; however, disagreement remains as to which method has the greatest clinical utility. An inexpensive and widely available imaging system, the Microsoft Kinect™, may possess the characteristics to objectively evaluate static foot posture in a clinical setting with high accuracy. The aim of this study was to assess the intra-rater reliability and validity of this system for assessing static foot posture. Methods Three measures were used to assess static foot posture: traditional visual observation using the Foot Posture Index (FPI), a 3D motion analysis (3DMA) system, and software designed to collect and analyse image and depth data from the Kinect. Spearman’s rho was used to assess intra-rater reliability and concurrent validity of the Kinect to evaluate foot posture, and a linear regression was used to examine the ability of the Kinect to predict total visual FPI score. Results The Kinect demonstrated moderate to good intra-rater reliability for four FPI items of foot posture (ρ = 0.62 to 0.78) and moderate to good correlations with the 3DMA system for four items of foot posture (ρ = 0.51 to 0.85). In contrast, intra-rater reliability of visual FPI items was poor to moderate (ρ = 0.17 to 0.63), and correlations with the Kinect and 3DMA systems were poor (absolute ρ = 0.01 to 0.44). Kinect FPI items with moderate to good reliability predicted 61% of the variance in total visual FPI score. Conclusions The majority of the foot posture items derived using the Kinect were more reliable than the traditional visual assessment of FPI, and were valid when compared to a 3DMA system. Individual foot posture items recorded using the Kinect were also shown to predict a moderate degree of variance in the total visual FPI score. Combined, these results support the future potential of the Kinect to accurately evaluate static foot posture in a clinical setting. PMID:23566934
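Spearman's rho, the statistic used here for both reliability and validity, can be computed from rank differences. Below is a minimal sketch without tie handling, applied to hypothetical FPI item scores from two rating sessions (the scores are made up for illustration, not the study's data).

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for tie-free data:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical FPI item scores from two repeated rating sessions.
session1 = [3, 1, 4, 2, 5]
session2 = [2, 1, 5, 3, 4]
print(spearman_rho(session1, session2))  # 0.8 -> "good" intra-rater reliability
```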
Grasp-specific motor resonance is influenced by the visibility of the observed actor.
Bunday, Karen L; Lemon, Roger N; Kilner, James M; Davare, Marco; Orban, Guy A
2016-11-01
Motor resonance is the modulation of M1 corticospinal excitability induced by observation of others' actions. Recent brain imaging studies have revealed that viewing videos of grasping actions led to differential activation of the ventral premotor cortex depending on whether the entire person was viewed versus only a disembodied hand. Here we used transcranial magnetic stimulation (TMS) to examine motor evoked potentials (MEPs) in the first dorsal interosseous (FDI) and abductor digiti minimi (ADM) during observation of videos or static images in which a whole person or merely the hand was seen reaching and grasping a peanut (precision grip) or an apple (whole hand grasp). Participants were presented with six visual conditions in which visual stimuli (video vs static image), view (whole person vs hand) and grasp (precision grip vs whole hand grasp) were varied in a 2 × 2 × 2 factorial design. Observing videos, but not static images, of a hand grasping different objects resulted in a grasp-specific interaction, such that FDI and ADM MEPs were differentially modulated depending on the type of grasp being observed (precision grip vs whole hand grasp). This interaction was present when observing the hand acting, but not when observing the whole person acting. Additional experiments revealed that these results were unlikely to be due to the relative size of the hand being observed. Our results suggest that observation of videos rather than static images is critical for motor resonance. Importantly, observing the whole person performing the action abolished the grasp-specific effect, which could be due to a variety of PMv inputs converging on M1. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Wöllner, Clemens; Deconinck, Frederik J A
2013-05-01
Gender recognition in point-light displays was investigated with regard to body morphology cues and motion cues of human motion performed with different levels of technical skill. Gestures of male and female orchestral conductors were recorded with a motion capture system while they conducted excerpts from a Mendelssohn string symphony to musicians. Point-light displays of conductors were presented to observers under the following conditions: visual-only, auditory-only, audiovisual, and two non-conducting conditions (walking and static images). Observers distinguished between male and female conductors in gait and static images, but not in visual-only and auditory-only conducting conditions. Across all conductors, gender recognition for audiovisual stimuli was better than chance, yet significantly less reliable than for gait. Separate analyses for two groups of conductors indicated an expertise effect in that novice conductors' gender was perceived above chance level for visual-only and audiovisual conducting, while skilled conducting gestures of experts did not afford gender-specific cues. In these conditions, participants may have ignored the body morphology cues that led to correct judgments for static images. Results point to a response bias such that conductors were more often judged to be male. Thus judgment accuracy depended both on the conductors' level of expertise as well as on the observers' concepts, suggesting that perceivable differences between men and women may diminish for highly trained movements of experienced individuals. Copyright © 2013 Elsevier B.V. All rights reserved.
Dynamic Imaging of Mouse Embryos and Cardiodynamics in Static Culture.
Lopez, Andrew L; Larina, Irina V
2018-01-01
The heart is a dynamic organ that quickly undergoes morphological and mechanical changes through early embryonic development. Characterizing these early moments is important for our understanding of proper embryonic development and the treatment of heart disease. Traditionally, tomographic imaging modalities and fluorescence-based microscopy are excellent approaches to visualize structural features and gene expression patterns, respectively, and connect aberrant gene programs to pathological phenotypes. However, these approaches usually require static samples or fluorescent markers, which can limit how much information we can derive from the dynamic and mechanical changes that regulate heart development. Optical coherence tomography (OCT) is unique in this circumstance because it allows for the acquisition of three-dimensional structural and four-dimensional (3D + time) functional images of living mouse embryos without fixation or contrast reagents. In this chapter, we focus on how OCT can visualize heart morphology at different stages of development and provide cardiodynamic information to reveal mechanical properties of the developing heart.
NASA Astrophysics Data System (ADS)
Mešić, Vanes; Hajder, Erna; Neumann, Knut; Erceg, Nataša
2016-06-01
Research has shown that students have tremendous difficulties developing a qualitative understanding of wave optics, at all educational levels. In this study, we investigate how three different approaches to visualizing light waves affect students' understanding of wave optics. In the first, conventional, approach light waves are represented by sinusoidal curves. The second teaching approach represents light waves by a series of static images showing the oscillating electric field vectors at characteristic, subsequent instants of time. Within the third approach phasors are used for visualizing light waves. A total of N = 85 secondary school students were randomly assigned to one of the three teaching approaches, each lasting four class hours. Students who learned with phasors and students who learned from the series of static images outperformed the students learning according to the conventional approach, i.e., they showed a much better understanding of basic wave optics, as measured by a conceptual survey administered to the students one week after the treatment. Our results suggest that visualizing light waves with phasors or oscillating electric field vectors is a promising approach to developing a deeper understanding of wave optics for students enrolled in conceptual-level physics courses.
Distal airways in humans: dynamic hyperpolarized 3He MR imaging--feasibility
NASA Technical Reports Server (NTRS)
Tooker, Angela C.; Hong, Kwan Soo; McKinstry, Erin L.; Costello, Philip; Jolesz, Ferenc A.; Albert, Mitchell S.
2003-01-01
Dynamic hyperpolarized helium 3 (3He) magnetic resonance (MR) imaging of the human airways is achieved by using a fast gradient-echo pulse sequence during inhalation. The resulting dynamic images show differential contrast enhancement of both distal airways and the lung periphery, unlike static hyperpolarized 3He MR images on which only the lung periphery is seen. With this technique, up to seventh-generation airway branching can be visualized. Copyright RSNA, 2003.
Correlative visualization techniques for multidimensional data
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Goettsche, Craig
1989-01-01
Critical to the understanding of data is the ability to provide pictorial or visual representations of those data, particularly in support of correlative data analysis. Despite the advancement of visualization techniques for scientific data over the last several years, there are still significant problems in bringing today's hardware and software technology into the hands of the typical scientist. For example, computer science domains beyond computer graphics, such as data management, are required to make visualization effective. Well-defined, flexible mechanisms for data access and management must be combined with rendering algorithms, data transformation, etc. to form a generic visualization pipeline. A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences. Different classes of data representation techniques must be used within such a framework, ranging from simple, static two- and three-dimensional line plots to animation, surface rendering, and volumetric imaging. Static examples of actual data analyses illustrate the importance of an effective pipeline in a data visualization system.
A cascade model of information processing and encoding for retinal prosthesis.
Pei, Zhi-Jun; Gao, Guan-Xin; Hao, Bo; Qiao, Qing-Li; Ai, Hui-Jian
2016-04-01
Retinal prosthesis offers a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the biological retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Several retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. However, most of these focus on stimulus image compression, edge detection and reconstruction, and do not generate spike trains corresponding to the visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems and Poisson coding by neurons, a cascade model of the retina, comprising the outer plexiform layer for information processing and the inner plexiform layer for information encoding, is put forward; it integrates both the anatomic connections and the functional computations of the retina. Using MATLAB software, spike trains corresponding to a stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling, and Poisson spike generation. The simulation results suggest that such a cascade model can recreate the visual information processing and encoding functions of the retina, which is helpful in developing an artificial retina for the retinally blind.
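The four-step pipeline described in this abstract (linear spatiotemporal filtering, static nonlinear rectification, radial sampling, Poisson spike generation) can be sketched in a few lines of Python. This is an illustrative toy, not the authors' MATLAB implementation: the difference-of-Gaussians kernel, the gain, and the time-bin width are assumed values, the temporal part of the linear filter and the radial-sampling step are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def dog_kernel(size=9, sigma_c=1.0, sigma_s=2.0):
    # Center-surround difference-of-Gaussians kernel: the spatial part of
    # the linear filtering stage (temporal filtering is omitted here).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    gauss = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return gauss(sigma_c) - gauss(sigma_s)

def convolve2d(img, k):
    # Minimal same-size convolution with zero padding.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k[::-1, ::-1])
    return out

def rectify(u, gain=50.0):
    # Static nonlinear rectification: firing rates cannot go negative.
    return np.maximum(gain * u, 0.0)

def poisson_spikes(rate, dt=0.001, steps=100):
    # Poisson encoding: each time bin spikes with probability rate * dt.
    return rng.random((steps,) + rate.shape) < rate * dt

stimulus = np.zeros((32, 32))
stimulus[12:20, 12:20] = 1.0                      # bright square on dark field
rates = rectify(convolve2d(stimulus, dog_kernel()))
spikes = poisson_spikes(rates)
```

The center-surround filter responds mainly at the edges of the square, so the resulting spike raster concentrates where the stimulus changes, which is the qualitative behavior the cascade is meant to capture.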
Malware analysis using visualized image matrices.
Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. In particular, the proposed methods can handle packed malware samples by being applied to execution traces extracted through dynamic analysis. When the images are generated, overheads can be reduced by extracting the opcode sequences only from the blocks that include instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons required to classify unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
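The opcode-sequence-to-image idea can be illustrated with a minimal sketch. Two choices here are loudly hypothetical: pixels are derived by hashing opcode 3-grams (the abstract does not give the paper's exact pixel-generation rule), and similarity is a simple Jaccard overlap of pixel colors rather than the authors' similarity calculation.

```python
import hashlib

import numpy as np

def opcodes_to_image(opcodes, width=16):
    # Map each opcode 3-gram to one RGB pixel via a hash digest
    # (a hypothetical scheme standing in for the paper's rule).
    pixels = [list(hashlib.md5(" ".join(opcodes[i:i + 3]).encode()).digest()[:3])
              for i in range(len(opcodes) - 2)]
    rows = -(-len(pixels) // width)              # ceiling division
    img = np.zeros((rows * width, 3), dtype=np.uint8)
    img[:len(pixels)] = pixels
    return img.reshape(rows, width, 3)

def similarity(a, b):
    # Jaccard overlap of the sets of pixel colors in two image matrices.
    sa = {tuple(p) for p in a.reshape(-1, 3)}
    sb = {tuple(p) for p in b.reshape(-1, 3)}
    return len(sa & sb) / len(sa | sb)
```

Samples from the same family share many opcode n-grams and hence many pixel colors, so their similarity score is high; a family's representative image would play the role of `a` when screening an unknown sample `b`.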
Multispectral image analysis for object recognition and classification
NASA Astrophysics Data System (ADS)
Viau, C. R.; Payeur, P.; Cretu, A.-M.
2016-05-01
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
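The fusion step described above can be illustrated by concatenating per-band feature vectors. Everything in this sketch is a simplified stand-in: histogram features (the abstract does not specify the descriptors used) and a nearest-centroid classifier in place of the SVMs actually trained, to keep the example dependency-free.

```python
import numpy as np

def band_features(img, bins=16):
    # Per-band intensity histogram, normalized to a probability vector
    # (hypothetical feature standing in for the paper's descriptors).
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def fuse(visual, thermal):
    # Multispectral fusion by concatenating per-band feature vectors.
    return np.concatenate([band_features(visual), band_features(thermal)])

def train_centroids(X, y):
    # Nearest-centroid classifier, a stand-in for the paper's SVM.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    # Assign the class whose centroid is closest in feature space.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

The design point the abstract makes survives the simplification: an object ambiguous in the visual band (e.g. similar brightness to the background) can still separate cleanly once the thermal half of the fused vector is taken into account.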
Speckle dynamics under ergodicity breaking
NASA Astrophysics Data System (ADS)
Sdobnov, Anton; Bykov, Alexander; Molodij, Guillaume; Kalchenko, Vyacheslav; Jarvinen, Topias; Popov, Alexey; Kordas, Krisztian; Meglinski, Igor
2018-04-01
Laser speckle contrast imaging (LSCI) is a well-known and versatile approach for the non-invasive visualization of flows and microcirculation localized in turbid scattering media, including biological tissues. In most conventional implementations of LSCI the ergodic regime is typically assumed valid. However, most composite turbid scattering media, especially biological tissues, are non-ergodic, containing a mixture of dynamic and static centers of light scattering. In the current study, we examined the speckle contrast in different dynamic conditions with the aim of assessing limitations in the quantitative interpretation of speckle contrast images. Based on a simple phenomenological approach, we introduced a coefficient of speckle dynamics to quantitatively assess the ratio of the dynamic part of a scattering medium to the static one. The introduced coefficient allows one to distinguish real changes in motion from the mere appearance of static components in the field of view. As examples of systems with static/dynamic transitions, thawing and heating of Intralipid samples were studied by the LSCI approach.
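The quantity underlying LSCI is the local speckle contrast, K = σ/⟨I⟩, computed over a small sliding window: a static, fully developed speckle field gives K near 1, while fast dynamics blur the speckle and drive K toward 0. A minimal sketch follows; the window size is an assumption, and the paper's coefficient of speckle dynamics is a further ratio built on such contrast maps that is not reproduced here.

```python
import numpy as np

def speckle_contrast(I, w=7):
    # Local speckle contrast K = std / mean over each w x w window.
    H, W = I.shape
    K = np.zeros((H - w + 1, W - w + 1))
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            win = I[i:i + w, j:j + w]
            K[i, j] = win.std() / (win.mean() + 1e-12)
    return K
```

Comparing K maps of the same sample over time is exactly where the ergodicity problem arises: a drop in K may mean faster flow, or merely fewer static scatterers in the window, which is the ambiguity the proposed coefficient is designed to resolve.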
Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.
2014-01-01
Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.
Anesthesiology training using 3D imaging and virtual reality
NASA Astrophysics Data System (ADS)
Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.
1996-04-01
Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.
Estimation of the Horizon in Photographed Outdoor Scenes by Human and Machine
Herdtweck, Christian; Wallraven, Christian
2013-01-01
We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes. Stimuli are presented only briefly and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3 we investigate several strategies to estimate the horizon in the computer and compare human with machine “behavior” for different image manipulations and image scene types. PMID:24349073
Learning invariance from natural images inspired by observations in the primary visual cortex.
Teichmann, Michael; Wiltschut, Jan; Hamker, Fred
2012-05-01
The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates the signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
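As a hedged illustration of Hebbian learning combined with homeostatic regulation (the paper's actual rules involve calcium dynamics, which are not reproduced here), one can pair a correlation-driven weight update with a normalization that keeps each neuron's total input weight at a fixed target:

```python
import numpy as np

rng = np.random.default_rng(1)

def hebbian_step(w, pre, post, eta=0.01, target=1.0):
    # Hebbian term: weights grow with correlated pre/post activity.
    w = w + eta * np.outer(post, pre)
    # Homeostatic regulation (illustrative): rescale each neuron's total
    # input weight to a fixed target, keeping activity bounded.
    w *= target / (w.sum(axis=1, keepdims=True) + 1e-12)
    return w

w = rng.random((4, 8)) * 0.1          # 4 model cells, 8 input channels
for _ in range(100):
    pre = rng.random(8)               # input rates from an image sequence
    post = w @ pre                    # linear responses of the model cells
    w = hebbian_step(w, pre, post)
```

Without the homeostatic term the pure Hebbian update diverges; the normalization turns the rule into a competition among inputs for a fixed weight budget, which is the mechanism that lets repeated exposure to shifted static images produce position-tolerant responses.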
[Three-dimensional reconstruction of functional brain images].
Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I
1999-08-01
We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods to identify activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain-surface coloring, make it difficult to understand and interpret the positional relationships among various brain areas. Therefore, we developed three-dimensionally reconstructed versions of these functional brain images to improve interpretation. The subjects were 12 normal volunteers. After PET images acquired during everyday-dialog listening were analyzed by SPM, the following three types of images were constructed: 1) routine images by SPM, 2) three-dimensional static images, and 3) three-dimensional dynamic images. The creation of both the three-dimensional static and dynamic images employed the volume rendering method of VTK (The Visualization Toolkit). Since the functional brain images did not include the original brain anatomy, we synthesized SPM and MRI brain images with self-made C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Both the three-dimensional static and dynamic images were processed on a personal computer system. Our newly created images showed the positional relationships among activated brain areas more clearly than the conventional method. To date, functional brain images have been employed mainly in fields such as neurology or neurosurgery; however, these images may be useful even in the field of otorhinolaryngology, to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science.
Currently, the surface model is the most common method of three-dimensional display. However, the volume rendering method may be more effective for imaging regions such as the brain.
Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.
2016-01-01
We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation and phase variance for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise ratio (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of static tissue noise suppression in OCTA images, we propose calculating the CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of the retinal and choroidal vascular layers known from anatomic studies of retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize the retinal and choroidal vasculature. User-guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in the form of a skeletonized 3D model. PMID:27231598
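A common definition of the contrast-to-noise ratio between two image regions is the absolute mean difference divided by the pooled standard deviation; the paper's exact formulation may differ, so this is only a generic sketch:

```python
import numpy as np

def cnr(region_a, region_b):
    # Contrast-to-noise ratio: |mean difference| over pooled noise.
    return abs(region_a.mean() - region_b.mean()) / np.sqrt(
        region_a.var() + region_b.var())
```

For the OCTA evaluation described above, `region_a` would hold angiography pixel values from the photoreceptor/RPE complex (which should be static and dark in a well-suppressed image) and `region_b` values from the choriocapillaris layer, so a higher CNR indicates better suppression of static-tissue noise relative to flow signal.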
NASA Astrophysics Data System (ADS)
Radhakrishnan, Harsha; Srinivasan, Vivek J.
2013-08-01
The hemodynamic response to neuronal activation is a well-studied phenomenon in the brain, due to the prevalence of functional magnetic resonance imaging. The retina represents an optically accessible platform for studying lamina-specific neurovascular coupling in the central nervous system; however, due to methodological limitations, this has been challenging to date. We demonstrate techniques for the imaging of visual stimulus-evoked hyperemia in the rat inner retina using Doppler optical coherence tomography (OCT) and OCT angiography. Volumetric imaging with three-dimensional motion correction, en face flow calculation, and normalization of dynamic signal to static signal are techniques that reduce spurious changes caused by motion. We anticipate that OCT imaging of retinal functional hyperemia may yield viable biomarkers in diseases, such as diabetic retinopathy, where the neurovascular unit may be impaired.
NASA Astrophysics Data System (ADS)
Wright, R.; Pilger, E.; Flynn, L. P.; Harris, A. J.
2006-12-01
Volcanic eruptions and wildfires are natural hazards that are truly global in their geographic scope, as well as being temporally very dynamic. As such, satellite remote sensing lends itself to their effective detection and monitoring. The results of such mapping can be communicated in the form of traditional static maps. However, most hazards have strong time-dependent forcing mechanisms (in the case of biomass burning, climate) and the dynamism of these geophysical phenomena requires a suitable method for their presentation. Here, we present visualizations of the amount of thermal energy radiated by all of Earth's sub-aerially erupting volcanoes, wildfires and industrial heat sources over a seven year period. These visualizations condense the results obtained from the near-real-time analysis of over 1.2 million MODIS (Moderate Resolution Imaging Spectro-radiometer) images, acquired from NASA's Terra and Aqua platforms. In the accompanying poster we will describe a) the raw data, b) how these data can be used to derive higher-order geophysical parameters, and c) how the visualization of these derived products adds scientific value to the raw data. The visualizations reveal spatio-temporal trends in fire radiated energy (and by proxy, biomass combustion rates and carbon emissions into the atmosphere), which are indiscernible in the static data set. Most notable are differences in biomass combustion between the North American and Eurasian Boreal forests. We also give examples relating to the development of lava flow-fields at Mount Etna (Italy) and Kilauea (USA), as well as variations in heat output from Iraqi oil fields, that span the onset of the 2003 Persian Gulf War. The raw data used to generate these visualizations are routinely made available via the Internet, as portable ASCII files. They can therefore be easily integrated with image datasets, by other researchers, to create their own visualizations.
NASA Astrophysics Data System (ADS)
Price, Norman T.
The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active thinking. This mixed-methods study analyzes teacher behavior in lessons using visual media about the particulate model of matter that were taught by three experienced middle school teachers. Each teacher taught one half of their students with lessons using static overheads and the other half with lessons using a projected dynamic simulation. The quantitative analysis of pre-post data found significant gain differences between the two image-mode conditions, suggesting that the students assigned to the simulation condition learned more than students assigned to the overhead condition. Open coding was used to identify a set of eight image-based teaching strategies that teachers were using with visual displays. Fixed codes for this set of image-based discussion strategies were then developed and used to analyze video and transcripts of whole-class discussions from 12 lessons. The image-based discussion strategies were refined over time in a set of three in-depth 2x2 comparative case studies of two teachers teaching one lesson topic with two image display modes. The comparative case study data suggest that the simulation mode may have offered greater affordances than the overhead mode for planning and enacting discussions. The 12 discussions were also coded for overall teacher-student interaction patterns, such as presentation, IRE, and IRF. When teachers moved during a lesson from using no image to using either image mode, some teachers were observed asking more questions when the image was displayed while others asked many fewer questions.
These changes in teacher-student interaction patterns suggest that teachers vary on whether they consider the displayed image a "tool-for-telling" or a "tool-for-asking." The study attempts to provide new descriptions of strategies teachers use to orchestrate image-based discussions designed to promote student engagement and reasoning in lessons with conceptual goals.
Advanced Prosthetic Gait Training Tool
2015-12-01
motion capture sequences was provided by MPL to CCAD and OGAL. CCAD’s work focused on imposing these sequences on the SantosTM digital human avatar ...manipulating the avatar image. These manipulations are accomplished in the context of reinforcing what is the more ideal position and relating...focus on the visual environment by asking users to manipulate a static image of the Santos avatar to represent their perception of what they observe
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter
2012-01-01
Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…
ERIC Educational Resources Information Center
Katsioloudis, Petros; Dickerson, Daniel; Jovanovic, Vukica; Jones, Mildred
2015-01-01
The benefit of using static versus dynamic visualizations is a controversial one. Few studies have explored the effectiveness of static visualizations to those of dynamic visualizations, and the current state of the literature remains somewhat unclear. During the last decade there has been a lengthy debate about the opportunities for using…
NASA Astrophysics Data System (ADS)
de Oliveira, José Martins, Jr.; Mangini, F. Salvador; Carvalho Vila, Marta Maria Duarte; Chaud, Marco Vinícius
2013-05-01
This work presents an alternative, non-conventional technique for evaluating physico-chemical properties of pharmaceutical dosage forms: computed tomography (CT), used as a nondestructive technique to visualize the internal structures of pharmaceutical dosage forms and to conduct static and dynamic studies. The studies covered both static and dynamic situations through the use of tomographic images generated by the scanner at the University of Sorocaba (Uniso). We have shown that, through the use of tomographic images, it is possible to conduct studies of porosity and density, analyses of morphological parameters, and dissolution studies. Our results are in agreement with the literature, showing that CT is a powerful tool for use in the pharmaceutical sciences.
Recording Visual Evoked Potentials and Auditory Evoked P300 at 9.4T Static Magnetic Field
Arrubla, Jorge; Neuner, Irene; Hahn, David; Boers, Frank; Shah, N. Jon
2013-01-01
Simultaneous recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has shown a number of advantages that make this multimodal technique superior to fMRI alone. The feasibility of recording EEG at ultra-high static magnetic fields up to 9.4T was recently demonstrated and promises to be implemented soon in fMRI studies at ultra-high magnetic fields. Recording visual evoked potentials is expected to be amongst the simplest tasks for simultaneous EEG/fMRI at ultra-high magnetic field due to the easy assessment of the visual cortex. Auditory evoked P300 measurements are of interest since it is believed that they represent the earliest stage of cognitive processing. In this study, we investigate the feasibility of recording visual evoked potentials and auditory evoked P300 in a 9.4T static magnetic field. For this purpose, EEG data were recorded from 26 healthy volunteers inside a 9.4T MR scanner using a 32-channel MR-compatible EEG system. Visual stimulation and an auditory oddball paradigm were presented in order to elicit event-related potentials (ERP). Recordings made outside the scanner were performed using the same stimuli and EEG system for comparison purposes. We were able to retrieve visual P100 and auditory P300 evoked potentials at 9.4T static magnetic field after correction of the ballistocardiogram artefact using independent component analysis. The latencies of the ERPs recorded at 9.4T were not different from those recorded at 0T. The amplitudes of the ERPs were higher at 9.4T than at 0T. Nevertheless, it seems that the increased amplitudes of the ERPs are due to the effect of the ultra-high field on the EEG recording system rather than an alteration in the intrinsic processes that generate the electrophysiological responses. PMID:23650538
Trying to Move Your Unseen Static Arm Modulates Visually-Evoked Kinesthetic Illusion
Metral, Morgane; Blettery, Baptiste; Bresciani, Jean-Pierre; Luyat, Marion; Guerraz, Michel
2013-01-01
Although kinesthesia is known to depend largely on afferent inflow, recent data suggest that central signals originating from volitional control (efferent outflow) may also be involved and interact with the former to build up a coherent percept. Evidence derives from both clinical and experimental observations in which vision, which is of primary importance in kinesthesia, was systematically precluded. The purpose of the present experiment was to assess the role of volitional effort in kinesthesia when visual information is available. Participants (n=20) produced isometric contractions (10-20% of maximal voluntary force) of their right arm while their left arm, the image of which was reflected in a mirror, either was passively moved into flexion/extension by a motorized manipulandum or remained static. The contraction of the right arm was either congruent with or opposite to the passive displacements of the left arm. Results revealed that in most trials kinesthetic illusions were visually driven, and their occurrence and intensity were modulated by whether the volitional effort was congruent with the visual signals. These results confirm the impact of volitional effort on kinesthesia and demonstrate for the first time that these signals interact with visual afferents to offer a coherent and unified percept. PMID:24348909
The role of visual representation in physics learning: dynamic versus static visualization
NASA Astrophysics Data System (ADS)
Suyatna, Agus; Anggraini, Dian; Agustina, Dina; Widyastuti, Dini
2017-11-01
This study aims to examine the role of visual representation in physics learning and to compare learning outcomes when using dynamic versus static visualization media. The study used a quasi-experimental Pretest-Posttest Control Group Design. The sample comprised students from six classes at a State Senior High School in Lampung Province. The experimental classes received instruction using dynamic visualization media and the control classes used static visualization media. Both groups were given pre-tests and post-tests with the same instruments. Data were analyzed with N-gain analysis, a normality test, a homogeneity test, and a test of mean differences. The results showed a significant increase in mean learning outcomes (N-gain; p < 0.05) in both the experimental and control classes. The average learning outcomes of students using dynamic visualization media were significantly higher than those of the classes taught with static visualization media. This reflects the characteristics of visual representation: each type of visualization supports students' understanding differently. Dynamic visual media are more suitable for explaining material related to movement or for describing a process, whereas static visual media are appropriate for non-moving physical phenomena and for material requiring long-term observation.
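The N-gain statistic reported above is commonly computed as Hake's normalized gain, the fraction of the possible improvement actually realized. A minimal sketch (the interpretation bands are the conventional ones from the literature, not taken from this paper):

```python
def n_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: (post - pre) / (max_score - pre).

    Undefined when the pre-test score is already at the maximum.
    """
    if pre >= max_score:
        raise ValueError("pre-test score already at maximum")
    return (post - pre) / (max_score - pre)

def mean_n_gain(pre_scores, post_scores, max_score=100.0):
    """Class-level N-gain computed from mean pre- and post-test scores."""
    pre = sum(pre_scores) / len(pre_scores)
    post = sum(post_scores) / len(post_scores)
    return n_gain(pre, post, max_score)

# Conventional bands: g >= 0.7 high, 0.3 <= g < 0.7 medium, g < 0.3 low.
```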
Emotional and movement-related body postures modulate visual processing
Borhani, Khatereh; Làdavas, Elisabetta; Maier, Martin E.; Avenanti, Alessio
2015-01-01
Human body postures convey useful information for understanding others' emotions and intentions. To investigate at which stage of visual processing the emotional and movement-related information conveyed by bodies is discriminated, we examined event-related potentials elicited by laterally presented images of bodies with static postures and implied-motion body images with neutral, fearful or happy expressions. At the early stage of visual structural encoding (N190), we found a difference in the sensitivity of the two hemispheres to observed body postures. Specifically, the right hemisphere showed an N190 modulation both for motion content (i.e., all observed postures implying body movement elicited greater N190 amplitudes than static postures) and for emotional content (i.e., fearful postures elicited the largest N190 amplitude), while the left hemisphere showed a modulation only for motion content. In contrast, at a later stage of perceptual representation, reflecting selective attention to salient stimuli, an increased early posterior negativity was observed for fearful stimuli in both hemispheres, suggesting enhanced processing of motivationally relevant stimuli. The observed modulations, both at the early stage of structural encoding and at the later processing stage, suggest the existence of a specialized perceptual mechanism tuned to emotion- and action-related information conveyed by human body postures. PMID:25556213
Neural mechanism for sensing fast motion in dim light.
Li, Ran; Wang, Yi
2013-11-07
Luminance is a fundamental property of visual scenes. A population of neurons in primary visual cortex (V1) is sensitive to uniform luminance. In natural vision, however, the retinal image often changes rapidly, so the luminance signals that visual cells receive are transiently varying. How V1 neurons respond to such luminance changes is unknown. By applying large static uniform stimuli or grating stimuli alternating at 25 Hz, which resemble the rapid luminance changes in the environment, we show that approximately 40% of V1 cells responded to rapid luminance changes of uniform stimuli. Most of them strongly preferred luminance decrements. Importantly, when tested with drifting gratings, the preferred speeds of these cells were significantly higher than those of cells responsive to static grating stimuli but not to uniform stimuli. This responsiveness can be accounted for by preferences for low spatial frequencies and high temporal frequencies. These luminance-sensitive cells subserve the detection of fast motion under conditions of dim illumination.
ERIC Educational Resources Information Center
Bonilha, Heather Shaw; Deliyski, Dimitar D.; Whiteside, Joanna Piasecki; Gerlach, Terri Treman
2012-01-01
Purpose: To examine differences in vocal fold vibratory phase asymmetry judged from stroboscopy, high-speed videoendoscopy (HSV), and the HSV-derived playbacks of mucosal wave kymography, digital kymography, and a static medial digital kymography image of persons with hypofunctional and hyperfunctional voice disorders. Differences between the…
The Role of Visualization in Computer Science Education
ERIC Educational Resources Information Center
Fouh, Eric; Akbar, Monika; Shaffer, Clifford A.
2012-01-01
Computer science core instruction attempts to provide a detailed understanding of dynamic processes such as the working of an algorithm or the flow of information between computing entities. Such dynamic processes are not well explained by static media such as text and images, and are difficult to convey in lecture. The authors survey the history…
Using Morphed Images to Study Visual Detection of Cutaneous Melanoma Symptom Evolution
ERIC Educational Resources Information Center
Dalianis, Elizabeth A.; Critchfield, Thomas S.; Howard, Niki L.; Jordan, J. Scott; Derenne, Adam
2011-01-01
Early detection attenuates otherwise high mortality from the skin cancer melanoma, and although major melanoma symptoms are well defined, little is known about how individuals detect them. Previous research has focused on identifying static stimuli as symptomatic vs. asymptomatic, whereas under natural conditions it is "changes" in skin lesions…
Silvoniemi, Antti; Din, Mueez U; Suilamo, Sami; Shepherd, Tony; Minn, Heikki
2016-11-01
Delineation of the gross tumour volume in 3D is a critical step in radiotherapy (RT) treatment planning for oropharyngeal cancer (OPC). Static [18F]-FDG PET/CT imaging has been suggested as a method to improve the reproducibility of tumour delineation, but it suffers from low specificity. We undertook this pilot study in which dynamic features in the time-activity curves (TACs) of [18F]-FDG PET/CT images were applied to help discriminate tumour from inflammation and adjacent normal tissue. Five patients with OPC underwent dynamic [18F]-FDG PET/CT imaging in the treatment position. A voxel-by-voxel analysis was performed to evaluate seven dynamic features developed from knowledge of the differences in glucose metabolism between tissue types and from visual inspection of the TACs. Gaussian mixture model and K-means algorithms were used to evaluate the performance of the dynamic features in discriminating tumour voxels compared with the performance of standardized uptake values obtained from static imaging. Some dynamic features showed a trend towards discrimination of different metabolic areas, but a lack of consistency means that clinical application is not recommended on the basis of these results alone. The impact of inflammatory tissue remains a problem for volume delineation in RT of OPC, but a simple dynamic imaging protocol proved practicable and enabled simple data analysis techniques that show promise for complementing the information in static uptake values.
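The clustering step can be sketched with plain K-means on a single scalar dynamic feature per voxel (the choice of feature here, a late-phase TAC slope, is a hypothetical stand-in, not one of the paper's seven features):

```python
def kmeans_1d(values, k=2, iters=50):
    """Plain k-means on scalar per-voxel features (e.g., a TAC late-phase slope).

    Returns (centroids, labels); centroids are initialized evenly over the
    observed range of the data.
    """
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels
```

In a full pipeline each voxel would carry a feature vector rather than a scalar, and a Gaussian mixture model would replace the hard assignments with posterior probabilities.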
Temporal and spatial localization of prediction-error signals in the visual brain.
Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W
2017-04-01
It has been suggested that the brain pre-empts changes in the environment by generating predictions, although real-time electrophysiological evidence of prediction violations in the domain of visual perception remains elusive. In a series of experiments we showed participants sequences of images that followed a predictable implied sequence or whose final image violated the implied sequence. Through careful design we were able to use the same final image transitions across predictable and unpredictable conditions, ensuring that any differences in neural responses were due only to the preceding context and not to the images themselves. EEG and MEG recordings showed that early (N170) and mid-latency (N300) visual evoked potentials were robustly modulated by images that violated the implied sequence across a range of types of image change (expression deformations, rigid rotations and visual field location). This modulation occurred irrespective of stimulus object category. Although the stimuli were static images, MEG source reconstruction of the early-latency signal (N/M170) localized expectancy violation signals to brain areas associated with motion perception. Our findings suggest that the N/M170 can index mismatches between predicted and actual visual inputs in a system that predicts trajectories based on ongoing context. More generally we suggest that the N/M170 may reflect a "family" of brain signals generated across widespread regions of the visual brain indexing the resolution of top-down influences and incoming sensory data. This has important implications for understanding the N/M170 and investigating how the brain represents context to generate perceptual predictions. Copyright © 2017 Elsevier B.V. All rights reserved.
Edge Detection Based On the Characteristic of Primary Visual Cortex Cells
NASA Astrophysics Data System (ADS)
Zhu, M. M.; Xu, Y. L.; Ma, H. Q.
2018-01-01
To address the difficulty of balancing edge-detection accuracy against noise robustness, and drawing on the dynamic and static perception of primary visual cortex (V1) cells, a V1 cell model is established to perform edge detection. A spatiotemporal filter is adopted to simulate the receptive field of V1 simple cells, and the model V1 cell is obtained by integrating the responses of the simple cells through half-wave rectification and normalization. Natural image edges are then detected using the static perception of the V1 cells. Simulation results show that the V1 model can fit the biological data and has biological generality. Moreover, compared with other edge-detection operators, the proposed model is more effective and more robust.
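The rectification and normalization stages can be sketched as follows; divisive normalization is shown in one common textbook form, and the pooling constant sigma is an illustrative assumption, not a value from this paper:

```python
def half_wave(responses):
    """Half-wave rectification: keep positive filter outputs, zero the rest."""
    return [r if r > 0 else 0.0 for r in responses]

def divisive_normalize(responses, sigma=0.1):
    """One common form of divisive normalization: each response is divided
    by the pooled (squared-summed) activity of the population plus a
    saturation constant."""
    pool = sigma + sum(r * r for r in responses)
    return [r / pool ** 0.5 for r in responses]

def model_v1_response(simple_cell_outputs, sigma=0.1):
    """Integrate simple-cell filter outputs into one model V1 cell response."""
    return sum(divisive_normalize(half_wave(simple_cell_outputs), sigma))
```

Normalization leaves response ratios intact while compressing overall gain, which is why it helps keep edge strength comparable across image regions of different contrast.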
Interactive displays in medical art
NASA Technical Reports Server (NTRS)
Mcconathy, Deirdre Alla; Doyle, Michael
1989-01-01
Medical illustration is a field of visual communication with a long history. Traditional medical illustrations are static, 2-D, printed images; highly realistic depictions of the gross morphology of anatomical structures. Today medicine requires the visualization of structures and processes that have never before been seen. Complex 3-D spatial relationships require interpretation from 2-D diagnostic imagery. Pictures that move in real time have become clinical and research tools for physicians. Medical illustrators are involved with the development of interactive visual displays for three different, but not discrete, functions: as educational materials, as clinical and research tools, and as data bases of standard imagery used to produce visuals. The production of interactive displays in the medical arts is examined.
Low-speed Aerodynamic Investigations of a Hybrid Wing Body Configuration
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.; Gatlin, Gregory M.; Jenkins, Luther N.; Murphy, Patrick C.; Carter, Melissa B.
2014-01-01
Two low-speed static wind tunnel tests and a water tunnel static and dynamic forced-motion test have been conducted on a hybrid wing-body (HWB) twinjet configuration. These tests, in addition to computational fluid dynamics (CFD) analysis, have provided a comprehensive dataset of the low-speed aerodynamic characteristics of this nonproprietary configuration. In addition to force and moment measurements, the tests included surface pressures, flow visualization, and off-body particle image velocimetry measurements. This paper will summarize the results of these tests and highlight the data that is available for code comparison or additional analysis.
Visualization of time-varying MRI data for MS lesion analysis
NASA Astrophysics Data System (ADS)
Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella
2001-05-01
Conventional methods to diagnose and follow treatment of multiple sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
Yang, Guo Liang; Aziz, Aamer; Narayanaswami, Banukumar; Anand, Ananthasubramaniam; Lim, C C Tchoyoson; Nowinski, Wieslaw Lucjan
2005-01-01
A new method has been developed for multimedia enhancement of electronic teaching files created by using the standard protocols and formats offered by the Medical Imaging Resource Center (MIRC) project of the Radiological Society of North America. The typical MIRC electronic teaching file consists of static pages only; with the new method, audio and visual content may be added to the MIRC electronic teaching file so that the entire image interpretation process can be recorded for teaching purposes. With an efficient system for encoding the audiovisual record of on-screen manipulation of radiologic images, the multimedia teaching files generated are small enough to be transmitted via the Internet with acceptable resolution. Students may respond with the addition of new audio and visual content and thereby participate in a discussion about a particular case. MIRC electronic teaching files with multimedia enhancement have the potential to augment the effectiveness of diagnostic radiology teaching. RSNA, 2005.
Real-time distortion correction for visual inspection systems based on FPGA
NASA Astrophysics Data System (ADS)
Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin
2008-03-01
Visual inspection is a new technology based on computer vision research that focuses on measuring an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the shortcomings of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing steps, including filtering, image compression, binarization, edge detection, and distortion correction, can be completed in a programmable device (FPGA). Because a wide-field-angle lens is adopted in the system, the output images have serious distortion. Limited by computer processing speed, software can correct the distortion of static images only, not that of dynamic images. To meet the real-time requirement, we designed a distortion-correction system based on an FPGA. In this hardware correction method, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which the data are read out to correct the gray levels. The major benefit of using an FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without modification.
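The look-up-table idea can be illustrated in software: precompute, once, the source pixel that each corrected pixel should sample, then correct every frame by pure table lookup with no per-frame arithmetic. The one-parameter radial model and its coefficient below are illustrative assumptions, not the paper's lens calibration:

```python
def build_correction_lut(width, height, k1=-0.2):
    """Precompute, for every corrected pixel, the distorted source pixel.

    Uses a simple one-parameter radial model (hypothetical coefficient k1);
    in the FPGA design this table would live in hardware memory.
    """
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    norm = max(cx, cy)
    lut = {}
    for y in range(height):
        for x in range(width):
            dx, dy = (x - cx) / norm, (y - cy) / norm
            r2 = dx * dx + dy * dy
            scale = 1.0 + k1 * r2          # radial distortion factor
            sx = int(round(cx + dx * scale * norm))
            sy = int(round(cy + dy * scale * norm))
            if 0 <= sx < width and 0 <= sy < height:
                lut[(x, y)] = (sx, sy)
    return lut

def correct_frame(frame, lut, width, height):
    """Correct one gray-level frame by table lookup only."""
    out = [[0] * width for _ in range(height)]
    for (x, y), (sx, sy) in lut.items():
        out[y][x] = frame[sy][sx]
    return out
```

Because the per-pixel mapping is fixed for a given lens, the lookup loop is exactly the kind of memory-bound, branch-free work that maps well onto FPGA pipelines.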
Static and dynamic views of visual cortical organization.
Casagrande, Vivien A; Xu, Xiangmin; Sáry, Gyula
2002-01-01
Without the aid of modern techniques Cajal speculated that cells in the visual cortex were connected in circuits. From Cajal's time until fairly recently, the flow of information within the cells and circuits of visual cortex has been described as progressing from input to output, from sensation to action. In this chapter we argue that a paradigm shift in our concept of the visual cortical neuron is under way. The most important change in our view concerns the neuron's functional role. Visual cortical neurons do not have static functional signatures but instead function dynamically depending on the ongoing activity of the networks to which they belong. These networks are not merely top-down or bottom-up unidirectional transmission lines, but rather represent machinery that uses recurrent information and is dynamic and highly adaptable. With the advancement of technology for analyzing the conversations of multiple neurons at many levels in the visual system and higher resolution imaging, we predict that the paradigm shift will progress to the point where neurons are no longer viewed as independent processing units but as members of subsets of networks where their role is mapped in space-time coordinates in relationship to the other neuronal members. This view moves us far from Cajal's original views of the neuron. Nevertheless, we believe that understanding the basic morphology and wiring of networks will continue to contribute to our overall understanding of the visual cortex.
Enhancing Learning from Dynamic and Static Visualizations by Means of Cueing
ERIC Educational Resources Information Center
Kuhl, Tim; Scheiter, Katharina; Gerjets, Peter
2012-01-01
The current study investigated whether learning from dynamic and two presentation formats for static visualizations can be enhanced by means of cueing. One hundred and fifty university students were randomly assigned to six conditions, resulting from a 2x3-design, with cueing (with/without) and type of visualization (dynamic, static-sequential,…
NASA Astrophysics Data System (ADS)
Yang, Quan
2001-10-01
This study, involving 154 undergraduate college students in China, was conducted to determine whether the surface structure of visual graphics affects content learning when the learner is a non-native English speaker and learning takes place in a non-English-speaking environment. Instruction with concrete animated graphics resulted in significantly higher achievement than instruction with concrete static, abstract static, or abstract animated graphics, or with text only and no graphical illustrations. It was also found, unexpectedly, that text-only instruction resulted in the second-best achievement, significantly higher than instruction with concrete static, abstract static, or abstract animated graphics. In addition, there was a significant interaction between treatment and test item, which indicated that treatment effects on graphic-specific items differed from those on definitional items. Additional findings indicated that whether the text students studied referred to the graphics directly or indirectly had little impact on their performance on the posttests. Further, 51% of the participants indicated that they relied on some graphical images to answer the test questions, and 19% relied heavily on graphics when completing the tests. In conclusion, concrete graphics combined with animation played a significant role in enhancing ESL student performance and enabled students to achieve the best learning outcomes compared with abstract animated, concrete static, and abstract static graphics. This result suggests a significant innovation in the design and development of ESL curricula in computer-based instruction, which would enable ESL students to perform better and achieve the expected outcomes in content-area learning.
NASA Astrophysics Data System (ADS)
Krupinski, Elizabeth A.; Berbaum, Kevin S.; Caldwell, Robert; Schartz, Kevin M.
2012-02-01
Radiologists are reading more cases with more images, especially in CT and MRI, and are thus working longer hours than ever before. Concerns have been raised regarding fatigue and whether it impacts diagnostic accuracy. This study measured the impact of reader visual fatigue by assessing symptoms, visual strain via dark focus of accommodation, and diagnostic accuracy. Twenty radiologists and 20 radiology residents were given two diagnostic performance tests searching CT chest sequences for a solitary pulmonary nodule before (rested) and after (tired) a day of clinical reading. 10 cases used free search and navigation, and the other 100 cases used a preset scrolling speed and duration. Subjects filled out the Swedish Occupational Fatigue Inventory (SOFI) and the oculomotor strain subscale of the Simulator Sickness Questionnaire (SSQ) before each session. Accuracy was measured using ROC techniques. Swensson's ROC technique yields an area = 0.86 rested vs. 0.83 tired, p (one-tailed) = 0.09; Swensson's LROC technique yields an area = 0.73 rested vs. 0.66 tired, p (one-tailed) = 0.09; and Swensson's location-accuracy technique yields an area = 0.77 rested vs. 0.72 tired, p (one-tailed) = 0.13. Subjective measures of fatigue increased significantly from early to late reading. To date, the results support our findings with static images and detection of bone fractures. Radiologists at the end of a long work day experience greater levels of measurable visual fatigue or strain, contributing to a decrease in diagnostic accuracy. The decrease in accuracy was not as great, however, as with static images.
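The ROC area statistic has a simple empirical form as the probability that a randomly chosen positive (nodule-present) case receives a higher confidence score than a randomly chosen negative one. This is the generic Mann-Whitney estimate, not Swensson's specific curve-fitting model:

```python
def roc_auc(scores_pos, scores_neg):
    """Empirical ROC area via the Mann-Whitney statistic.

    Counts, over all positive/negative case pairs, how often the positive
    case scores higher; ties count one half.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An area of 0.5 corresponds to chance performance and 1.0 to perfect discrimination, which is the scale on which the rested-vs-tired differences above are reported.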
Quantitation of Fine Displacement in Echography
NASA Astrophysics Data System (ADS)
Masuda, Kohji; Ishihara, Ken; Yoshii, Ken; Furukawa, Toshiyuki; Kumagai, Sadatoshi; Maeda, Hajime; Kodama, Shinzo
1993-05-01
A high-speed digital subtraction echography method was developed to visualize the fine displacement of human internal organs. The method indicates differences in position through time-series images of high-frame-rate echography, and fine displacements of less than an ultrasonic wavelength can be observed. The method, however, lacked the ability to measure displacement length quantitatively: the subtraction of two successive images was affected by displacement direction even when the displacement length was the same. To solve this problem, convolution of the echogram with a Gaussian distribution was used. To express displacement length quantitatively as brightness, normalization by the brightness gradient was applied. The quantitation algorithm was applied to successive B-mode images. Compared with simply subtracted images, the quantitated images express the motion of organs more precisely. Expansion of the carotid artery and fine motion of the ventricular walls can be visualized more easily, and displacement lengths much smaller than the wavelength can be quantitated.
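The quantitation idea, smooth with a Gaussian and then divide the frame difference by the local brightness gradient, can be sketched in one dimension; all parameter values below are illustrative assumptions:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma=2.0, radius=4):
    """Convolve a 1-D brightness profile with a Gaussian (edges clamped)."""
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def displacement_estimate(frame_a, frame_b, sigma=2.0):
    """Per-sample displacement between two smoothed brightness profiles.

    Dividing the frame difference by the local brightness gradient converts
    brightness change into displacement length; the sign is chosen so that
    a rightward shift of the profile yields a positive estimate.
    """
    a, b = smooth(frame_a, sigma), smooth(frame_b, sigma)
    est = []
    for i in range(1, len(a) - 1):
        grad = (a[i + 1] - a[i - 1]) / 2.0   # central-difference gradient
        diff = b[i] - a[i]
        est.append(-diff / grad if abs(grad) > 1e-6 else 0.0)
    return est
```

The division by the gradient is what removes the direction dependence of raw subtraction: a small shift changes brightness in proportion to the local gradient, so the ratio recovers the shift itself.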
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception have traditionally considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered dynamic: humans process the systematic changes in patterned light that occur when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies about what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
Vision and the representation of the surroundings in spatial memory
Tatler, Benjamin W.; Land, Michael F.
2011-01-01
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings where movement within extended environments is required to complete a task, the combination of visual input, egocentric and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience. PMID:21242146
The Influence of Visual and Spatial Reasoning in Interpreting Simulated 3D Worlds.
ERIC Educational Resources Information Center
Lowrie, Tom
2002-01-01
Explores ways in which 6-year-old children make sense of screen-based images on the computer. Uses both static and relatively dynamic software programs in the investigation. Suggests that young children should be exposed to activities that establish explicit links between 2D and 3D objects away from the computer before attempting difficult links…
Functional divisions for visual processing in the central brain of flying Drosophila
Weir, Peter T.; Dickinson, Michael H.
2015-01-01
Although anatomy is often the first step in assigning functions to neural structures, it is not always clear whether architecturally distinct regions of the brain correspond to operational units. Whereas neuroarchitecture remains relatively static, functional connectivity may change almost instantaneously according to behavioral context. We imaged panneuronal responses to visual stimuli in a highly conserved central brain region in the fruit fly, Drosophila, during flight. In one substructure, the fan-shaped body, automated analysis revealed three layers that were unresponsive in quiescent flies but became responsive to visual stimuli when the animal was flying. The responses of these regions to a broad suite of visual stimuli suggest that they are involved in the regulation of flight heading. To identify the cell types that underlie these responses, we imaged activity in sets of genetically defined neurons with arborizations in the targeted layers. The responses of this collection during flight also segregated into three sets, confirming the existence of three layers, and they collectively accounted for the panneuronal activity. Our results provide an atlas of flight-gated visual responses in a central brain circuit. PMID:26324910
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
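The shift measure used above, the inertial stimulus level at which subjects are equally likely to report motion in either direction, is a point of subjective equality (PSE). A minimal sketch of how such a PSE could be estimated from report proportions; the data and the linear-interpolation approach are illustrative assumptions, not necessarily the authors' exact analysis:

```python
import numpy as np

def pse_by_interpolation(velocities, p_rightward):
    """Estimate the point of subjective equality (PSE): the stimulus
    velocity at which 'rightward' and 'leftward' reports are equally
    likely (p = 0.5), by linear interpolation between the two points
    bracketing the 0.5 crossing. Assumes sorted, roughly monotone data."""
    v = np.asarray(velocities, dtype=float)
    p = np.asarray(p_rightward, dtype=float)
    i = int(np.argmax(p >= 0.5))     # first index reaching p >= 0.5
    if i == 0:
        return v[0]
    frac = (0.5 - p[i - 1]) / (p[i] - p[i - 1])
    return v[i - 1] + frac * (v[i] - v[i - 1])

# Hypothetical data: proportion of 'rightward' reports at each velocity (cm/s).
velocities = [-4, -2, 0, 2, 4]
baseline   = [0.02, 0.20, 0.50, 0.80, 0.98]   # no visual stimulus: PSE at 0
with_stars = [0.10, 0.35, 0.65, 0.90, 0.99]   # moving star field biases reports

pse_shift = pse_by_interpolation(velocities, baseline) - \
            pse_by_interpolation(velocities, with_stars)
print(round(pse_shift, 2))  # → 1.0 (cm/s shift in the star-field direction)
```

A psychometric-function fit (e.g., cumulative Gaussian) would serve the same role with better statistical properties; interpolation keeps the sketch dependency-light.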
Figure-Ground Organization in Visual Cortex for Natural Scenes
2016-01-01
Abstract Figure-ground organization and border-ownership assignment are essential for understanding natural scenes. It has been shown that many neurons in the macaque visual cortex signal border-ownership in displays of simple geometric shapes such as squares, but how well these neurons resolve border-ownership in natural scenes is not known. We studied area V2 neurons in behaving macaques with static images of complex natural scenes. We found that about half of the neurons were border-ownership selective for contours in natural scenes, and this selectivity originated from the image context. The border-ownership signals emerged within 70 ms after stimulus onset, only ∼30 ms after response onset. A substantial fraction of neurons were highly consistent across scenes. Thus, the cortical mechanisms of figure-ground organization are fast and efficient even in images of complex natural scenes. Understanding how the brain performs this task so fast remains a challenge. PMID:28058269
TH-CD-207B-03: How to Quantify Temporal Resolution in X-Ray MDCT Imaging?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budde, A; GE Healthcare Technologies, Madison, WI; Li, Y
Purpose: In modern CT scanners, a quantitative metric to assess temporal response, namely, to quantify the temporal resolution (TR), remains elusive. Rough surrogate metrics, such as half of the gantry rotation time for single source CT, a quarter of the gantry rotation time for dual source CT, or measurements of motion artifact’s size, shape, or intensity have previously been used. In this work, a rigorous framework which quantifies TR and a practical measurement method are developed. Methods: A motion phantom was simulated which consisted of a single rod that is in motion except during a static period at the temporal center of the scan, termed the TR window. If the image of the motion scan has negligible motion artifacts compared to an image from a totally static scan, then the system has a TR no worse than the TR window used. By repeating this comparison with varying TR windows, the TR of the system can be accurately determined. Motion artifacts were also visually assessed and the TR was measured across varying rod motion speeds, directions, and locations. Noiseless fan beam acquisitions were simulated and images were reconstructed with a short-scan image reconstruction algorithm. Results: The size, shape, and intensity of motion artifacts varied when the rod speed, direction, or location changed. TR measured using the proposed method, however, was consistent across rod speeds, directions, and locations. Conclusion: Since motion artifacts vary depending upon the motion speed, direction, and location, they are not suitable for measuring TR. In this work, a CT system with a specified TR is defined as having the ability to produce a static image with negligible motion artifacts, no matter what motion occurs outside of a static window of width TR. This framework allows for practical measurement of temporal resolution in clinical CT imaging systems. Funding support: GE Healthcare; Conflict of Interest: Employee, GE Healthcare.
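The measurement procedure described, shrinking the static window until the motion-scan image no longer matches a fully static scan, can be sketched as a simple search. Everything here (function names, the tolerance, the mock artifact metric, the 0.25 s "true" TR) is hypothetical and for illustration only:

```python
def measure_tr(windows, artifact_metric, tol=1e-3):
    """Return the smallest static-window width for which the motion-scan
    image matches the fully static image to within `tol` -- the system's
    temporal resolution under the paper's definition. `windows` must be
    ascending; `artifact_metric(w)` is assumed non-increasing in w
    (a longer static window can only reduce motion artifacts)."""
    for w in windows:
        if artifact_metric(w) <= tol:
            return w
    return None  # TR is worse than every window tested

# Hypothetical system with a true TR of 0.25 s: artifacts vanish once the
# static window is at least that long.
TRUE_TR = 0.25
mock_metric = lambda w: 0.0 if w >= TRUE_TR else 1.0

windows = [k / 20 for k in range(1, 11)]   # 0.05 s ... 0.50 s
print(measure_tr(windows, mock_metric))    # → 0.25
```

In a real evaluation `artifact_metric` would be an image-difference measure between the motion-scan and static-scan reconstructions; the monotonicity assumption is what makes a single ascending sweep sufficient.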
Fernandez, Nicolas F.; Gundersen, Gregory W.; Rahman, Adeeb; Grimes, Mark L.; Rikova, Klarisa; Hornbeck, Peter; Ma’ayan, Avi
2017-01-01
Most tools developed to visualize hierarchically clustered heatmaps generate static images. Clustergrammer is a web-based visualization tool with interactive features such as: zooming, panning, filtering, reordering, sharing, performing enrichment analysis, and providing dynamic gene annotations. Clustergrammer can be used to generate shareable interactive visualizations by uploading a data table to a web-site, or by embedding Clustergrammer in Jupyter Notebooks. The Clustergrammer core libraries can also be used as a toolkit by developers to generate visualizations within their own applications. Clustergrammer is demonstrated using gene expression data from the cancer cell line encyclopedia (CCLE), original post-translational modification data collected from lung cancer cells lines by a mass spectrometry approach, and original cytometry by time of flight (CyTOF) single-cell proteomics data from blood. Clustergrammer enables producing interactive web based visualizations for the analysis of diverse biological data. PMID:28994825
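For intuition about the row-ordering step behind clustered-heatmap tools such as Clustergrammer, here is a much-simplified sketch: instead of true hierarchical clustering, it greedily orders rows so similar profiles sit adjacent in the rendered heatmap. The function name and data are illustrative; Clustergrammer itself performs proper hierarchical clustering on the uploaded table:

```python
import numpy as np

def greedy_row_order(data):
    """Greedy similarity ordering of rows: a lightweight stand-in for the
    hierarchical-clustering leaf order a clustered-heatmap tool computes
    before drawing. Starts from row 0 and repeatedly appends the unvisited
    row closest (Euclidean distance) to the last row placed."""
    X = np.asarray(data, dtype=float)
    order, remaining = [0], set(range(1, len(X)))
    while remaining:
        last = X[order[-1]]
        nxt = min(remaining, key=lambda i: float(np.linalg.norm(X[i] - last)))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy expression matrix: rows 0 and 2 have similar profiles, row 1 differs.
mat = [[1.0, 1.1, 0.9],
       [9.0, 8.8, 9.2],
       [1.1, 1.0, 1.0]]
print(greedy_row_order(mat))  # → [0, 2, 1]: the similar rows end up adjacent
```

A production tool would use average- or complete-linkage clustering (e.g., SciPy's `scipy.cluster.hierarchy`) so the dendrogram can also drive interactive filtering and zooming.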
Photovoltaic restoration of sight with high visual acuity
Lorach, Henri; Goetz, Georges; Smith, Richard; Lei, Xin; Mandel, Yossi; Kamins, Theodore; Mathieson, Keith; Huie, Philip; Harris, James; Sher, Alexander; Palanker, Daniel
2015-01-01
Patients with retinal degeneration lose sight due to gradual demise of photoreceptors. Electrical stimulation of the surviving retinal neurons provides an alternative route for delivery of visual information. We demonstrate that subretinal arrays with 70 μm photovoltaic pixels provide highly localized stimulation, with electrical and visual receptive fields of comparable sizes in rat retinal ganglion cells. Similarly to normal vision, retinal response to prosthetic stimulation exhibits flicker fusion at high frequencies, adaptation to static images and non-linear spatial summation. In rats with retinal degeneration, these photovoltaic arrays provide spatial resolution of 64 ± 11 μm, corresponding to half of the normal visual acuity in pigmented rats. Ease of implantation of these wireless and modular arrays, combined with their high resolution opens the door to functional restoration of sight. PMID:25915832
High-Speed Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Ando, Toshio; Uchihashi, Takayuki; Kodera, Noriyuki
2012-08-01
The technology of high-speed atomic force microscopy (HS-AFM) has reached maturity. HS-AFM enables us to directly visualize the structure and dynamics of biological molecules in physiological solutions at subsecond to sub-100 ms temporal resolution. By this microscopy, dynamically acting molecules such as myosin V walking on an actin filament and bacteriorhodopsin in response to light are successfully visualized. High-resolution molecular movies reveal the dynamic behavior of molecules in action in great detail. Inferences no longer have to be made from static snapshots of molecular structures and from the dynamic behavior of optical markers attached to biomolecules. In this review, we first describe theoretical considerations for the highest possible imaging rate, then summarize techniques involved in HS-AFM and highlight recent imaging studies. Finally, we briefly discuss future challenges to explore.
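One simple kinematic relation consistent with the review's considerations of the highest possible imaging rate: tracing and retracing N scan lines across a width W at tip velocity Vs takes T = 2WN/Vs. A back-of-the-envelope sketch with illustrative numbers; the full analysis in the review also involves feedback bandwidth and resonance limits, which this omits:

```python
def frame_time(scan_width_nm, n_lines, tip_velocity_nm_per_s):
    """Minimum time to acquire one AFM frame when each of N lines is traced
    and retraced across scan width W at tip velocity Vs: T = 2*W*N/Vs.
    A kinematic lower bound only -- feedback bandwidth, scanner resonances,
    and cantilever response impose further limits in practice."""
    return 2.0 * scan_width_nm * n_lines / tip_velocity_nm_per_s

# Illustrative numbers: 240 nm scan width, 100 lines, 0.6 mm/s tip speed.
print(frame_time(240, 100, 0.6e6))  # → 0.08 seconds per frame (12.5 fps)
```

At sub-100 ms frame times like this, processes such as myosin V stepping become directly observable rather than inferred from static snapshots.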
Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters
Bajaj, Chandrajit
2009-01-01
Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools to study large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by using parallel isosurface extraction and rendering in conjunction with a new specialized piece of image compositing hardware called Metabuffer. In this paper, we focus on the back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data to parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations of loading large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission and storage of isosurfaces. PMID:19756231
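The interval-tree query at the heart of isosurface cell selection answers one question: which cells' scalar ranges contain the isovalue? A naive linear-scan version conveys the semantics (names and data are illustrative; the paper's contribution is answering this same stabbing query I/O-optimally on out-of-core data):

```python
def stabbing_cells(cell_ranges, isovalue):
    """Return indices of cells whose scalar range [lo, hi] contains the
    isovalue -- exactly the cells that can contribute isosurface triangles.
    A linear-scan sketch of the stabbing query that the paper's external
    interval trees answer with a minimal number of disk I/O operations."""
    return [i for i, (lo, hi) in enumerate(cell_ranges)
            if lo <= isovalue <= hi]

# Toy per-cell (min, max) scalar ranges from a volume dataset.
ranges = [(0.0, 0.4), (0.3, 0.9), (0.8, 1.2), (0.1, 0.2)]
print(stabbing_cells(ranges, 0.35))  # → [0, 1]
```

For in-memory data the scan can be replaced by an interval tree for O(log n + k) queries; the external variant additionally blocks tree nodes to disk pages so each query touches O(log_B n + k/B) pages.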
NASA Astrophysics Data System (ADS)
Li, Jianwei D.; Malone, Joseph D.; El-Haddad, Mohamed T.; Arquitola, Amber M.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.
2017-02-01
Surgical interventions for ocular diseases involve manipulations of semi-transparent structures in the eye, but limited visualization of these tissue layers remains a critical barrier to developing novel surgical techniques and improving clinical outcomes. We addressed limitations in image-guided ophthalmic microsurgery by using microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (iSS-SESLO-OCT). We previously demonstrated in vivo human ophthalmic imaging using SS-SESLO-OCT, which enabled simultaneous acquisition of en face SESLO images with every OCT cross-section. Here, we integrated our new 400 kHz iSS-SESLO-OCT, which used a buffered Axsun 1060 nm swept-source, with a surgical microscope and TrueVision stereoscopic viewing system to provide image-based feedback. In vivo human imaging performance was demonstrated on a healthy volunteer, and simulated surgical maneuvers were performed in ex vivo porcine eyes. Densely sampled static volumes and volumes subsampled at 10 volumes-per-second were used to visualize tissue deformations and surgical dynamics during corneal sweeps, compressions, and dissections, and retinal sweeps, compressions, and elevations. En face SESLO images enabled orientation and co-registration with the widefield surgical microscope view while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures-of-interest. TrueVision heads-up display allowed for side-by-side viewing of the surgical field with SESLO and OCT previews for real-time feedback, and we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. 
Integration of these complementary imaging modalities may benefit surgical outcomes by enabling real-time intraoperative visualization of surgical plans, instrument positions, tissue deformations, and image-based surrogate biomarkers correlated with completion of surgical goals.
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Kondo, Toshiyuki; Saeki, Midori; Hayashi, Yoshikatsu; Nakayashiki, Kosei; Takata, Yohei
2015-10-01
Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during the BCI neurofeedback training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCIs skills. Copyright © 2014 Elsevier B.V. All rights reserved.
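The ERD quantity being trained is conventionally computed as a percentage band-power change against a resting baseline (the classic Pfurtscheller band-power method). A minimal sketch under that standard definition; the power values and function name are illustrative, not the authors' pipeline:

```python
import numpy as np

def erd_percent(baseline_power, task_power):
    """Percentage change of EEG band power during a task (e.g., motor
    imagery) relative to a resting baseline. Negative values indicate
    event-related desynchronization (ERD); positive values indicate
    event-related synchronization (ERS)."""
    b = float(np.mean(baseline_power))
    t = float(np.mean(task_power))
    return (t - b) / b * 100.0

# Hypothetical mu-band power samples (arbitrary units).
baseline = [4.0, 4.2, 3.8]
imagery  = [2.0, 2.2, 1.8]
print(erd_percent(baseline, imagery))  # → -50.0, i.e., a 50% power drop
```

In an MI-BCI neurofeedback loop this value, computed over a sliding window of bandpass-filtered motor-cortex channels, is what gets fed back to the user and what the training sessions aim to make more reliably negative.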
The Research of Spectral Reconstruction for Large Aperture Static Imaging Spectrometer
NASA Astrophysics Data System (ADS)
Lv, H.; Lee, Y.; Liu, R.; Fan, C.; Huang, Y.
2018-04-01
An imaging spectrometer obtains, directly or indirectly, the spectral information of ground surface features while acquiring the target image. This gives imaging spectroscopy a prominent advantage in the fine characterization of terrain features and makes it of great significance for geoscience and related disciplines. Because the interference data produced by an interferometric imaging spectrometer are intermediate data, they must be reconstructed to yield high-quality spectral data that users can finally exploit; accurate spectral reconstruction is therefore the main difficulty restricting the application of interferometric imaging spectroscopy. Taking an original image acquired by the Large Aperture Static Imaging Spectrometer as input, this experiment selected a pixel identified as crop by visual recognition, then extracted and preprocessed its interferogram to recover the corresponding spectrum. The result shows that the reconstructed spectrum forms a small crest near the wavelength of 0.55 μm with obvious troughs on both sides. The relative reflection intensity of the reconstructed spectrum rises abruptly at wavelengths around 0.7 μm, forming a steep slope. These characteristics are similar to the spectral reflection curve of healthy green plants. It can be concluded that the experimental result is consistent with the visual interpretation, validating the effectiveness of the scheme for interferometric imaging spectrum reconstruction proposed in this paper.
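The reconstruction step described above is, at its core, a Fourier transform of the preprocessed interferogram. A hedged sketch with synthetic data; real instrument processing additionally requires apodization, phase correction, and wavenumber calibration, all omitted here:

```python
import numpy as np

def spectrum_from_interferogram(interferogram):
    """Recover a (relative) spectrum from an interferogram by Fourier
    transform -- the core step in reconstructing spectra from an
    interferometric imaging spectrometer. The DC (mean) term is removed
    first; the one-sided magnitude spectrum is returned."""
    ig = np.asarray(interferogram, dtype=float)
    ig = ig - ig.mean()
    return np.abs(np.fft.rfft(ig))

# Synthetic interferogram: a single cosine fringe maps to one spectral line.
n = 256
x = np.arange(n)
ig = 1.0 + np.cos(2 * np.pi * 16 * x / n)   # fringe at frequency bin 16
spec = spectrum_from_interferogram(ig)
print(int(np.argmax(spec)))  # → 16: the fringe frequency becomes the line
```

In the instrument, each column of the pushbroom image contributes one optical-path-difference sample, so assembling the per-pixel interferogram across frames is the preprocessing step the abstract refers to.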
Toet, Alexander; Hogervorst, Maarten A.; Pinkus, Alan R.
2016-01-01
The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4–0.7μm), near-infrared (NIR, 0.7–1.0μm) and long-wave infrared (LWIR, 8–14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. 
The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance. PMID:28036328
Detection of Visual Field Loss in Pituitary Disease: Peripheral Kinetic Versus Central Static
Rowe, Fiona J.; Cheyne, Christopher P.; García-Fiñana, Marta; Noonan, Carmel P.; Howard, Claire; Smith, Jayne; Adeoye, Joanne
2015-01-01
Abstract Visual field assessment is an important clinical evaluation for eye disease and neurological injury. We evaluated Octopus semi-automated kinetic peripheral perimetry (SKP) and Humphrey static automated central perimetry for detection of neurological visual field loss in patients with pituitary disease. We carried out a prospective cross-sectional diagnostic accuracy study comparing the Humphrey central 30-2 SITA threshold programme with a screening protocol for SKP on Octopus perimetry. Humphrey 24-2 data were extracted from 30-2 results. Results were independently graded for presence/absence of field defect plus severity of defect. Fifty patients (100 eyes) were recruited (25 males and 25 females), with a mean age of 52.4 years (SD = 15.7). Order of perimeter assessment (Humphrey/Octopus first) and order of eye tested (right/left first) were randomised. The 30-2 programme detected visual field loss in 85%, the 24-2 programme in 80%, and the Octopus combined kinetic/static strategy in 100% of eyes. Peripheral visual field loss was missed by central threshold assessment. Qualitative comparison of type of visual field defect demonstrated a match between Humphrey and Octopus results in 58%, with a match for severity of defect in 50%. Test duration was 9.34 minutes (SD = 2.02) for Humphrey 30-2 versus 10.79 minutes (SD = 4.06) for Octopus perimetry. Octopus semi-automated kinetic perimetry was found to be superior to central static testing for detection of pituitary disease-related visual field loss. Where reliant on Humphrey central static perimetry, the 30-2 programme is recommended over the 24-2 programme. Where kinetic perimetry is available, this is preferable to central static programmes for increased detection of peripheral visual field loss. PMID:27928344
Goddard, Erin; Clifford, Colin W G
2013-04-22
Attending selectively to changes in our visual environment may help filter less important, unchanging information within a scene. Here, we demonstrate that color changes can go unnoticed even when they occur throughout an otherwise static image. The novelty of this demonstration is that it does not rely upon masking by a visual disruption or stimulus motion, nor does it require the change to be very gradual and restricted to a small section of the image. Using a two-interval, forced-choice change-detection task and an odd-one-out localization task, we showed that subjects were slowest to respond and least accurate (implying that change was hardest to detect) when the color changes were isoluminant, smoothly varying, and asynchronous with one another. This profound change blindness offers new constraints for theories of visual change detection, implying that, in the absence of transient signals, changes in color are typically monitored at a coarse spatial scale.
When Art Moves the Eyes: A Behavioral and Eye-Tracking Study
Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella
2012-01-01
The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007
Eibofner, Frank; Martirosian, Petros; Würslin, Christian; Graf, Hansjörg; Syha, Roland; Clasen, Stephan
2015-11-01
In interventional magnetic resonance imaging, instruments can be equipped with conducting wires for visualization by current application. The potential of sequence triggered application of transient direct currents in balanced steady-state free precession (bSSFP) imaging is demonstrated. A conductor and a modified catheter were examined in water phantoms and in an ex vivo porcine liver. The current was switched by a trigger pulse in the bSSFP sequence in an interval between radiofrequency pulse and signal acquisition. Magnitude and phase images were recorded. Regions with transient field alterations were evaluated by a postprocessing algorithm. A phase mask was computed and overlaid with the magnitude image. Transient field alterations caused continuous phase shifts, which were separated by the postprocessing algorithm from phase jumps due to persistent field alterations. The overlaid images revealed the position of the conductor. The modified catheter generated visible phase offset in all orientations toward the static magnetic field and could be unambiguously localized in the ex vivo porcine liver. The application of a sequence triggered, direct current in combination with phase imaging allows conspicuous localization of interventional devices with a bSSFP sequence.
Window of visibility - A psychophysical theory of fidelity in time-sampled visual motion displays
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.
1986-01-01
A film of an object in motion presents on the screen a sequence of static views, while the human observer sees the object moving smoothly across the screen. Questions related to the perceptual identity of continuous and stroboscopic displays are examined. Time-sampled moving images are considered along with the contrast distribution of continuous motion, the contrast distribution of stroboscopic motion, the frequency spectrum of continuous motion, the frequency spectrum of stroboscopic motion, the approximation of the limits of human visual sensitivity to spatial and temporal frequencies by a window of visibility, the critical sampling frequency, the contrast distribution of staircase motion and the frequency spectrum of this motion, and the spatial dependence of the critical sampling frequency. Attention is given to apparent motion, models of motion, image recording, and computer-generated imagery.
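The critical sampling frequency discussed above has a compact form in the window-of-visibility model: a time-sampled display is predicted to be indistinguishable from continuous motion when the sample rate exceeds the temporal sensitivity limit plus the image velocity times the spatial sensitivity limit. A small sketch; the numeric limits used are illustrative round numbers, not the paper's fitted values:

```python
def critical_sampling_rate(velocity, spatial_limit, temporal_limit):
    """Critical sample rate from the window-of-visibility model: sampling
    an image moving at `velocity` (deg/s) is predicted to be invisible when
    the sample rate (Hz) exceeds  f_c = w_t + r * w_s,  where w_s
    (cycles/deg) and w_t (Hz) bound human spatial and temporal sensitivity.
    Above f_c the spectral replicas introduced by sampling fall outside
    the window and cannot be seen."""
    return temporal_limit + velocity * spatial_limit

# Illustrative window bounds: ~30 Hz temporal, ~30 cycles/deg spatial.
print(critical_sampling_rate(velocity=10.0, spatial_limit=30.0,
                             temporal_limit=30.0))  # → 330.0 Hz
```

The formula makes the practical trade-off explicit: slow motion or spatially blurred imagery lowers f_c, which is why film at 24 frames/s can look smooth for slow pans yet judder for fast ones.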
Mihaylova, Milena; Manahilov, Velitchko
2010-11-24
Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies, using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.
Cognitive Strategies for Learning from Static and Dynamic Visuals.
ERIC Educational Resources Information Center
Lewalter, D.
2003-01-01
Studied the effects of including static or dynamic visuals in an expository text on a learning outcome and the use of learning strategies when working with these visuals. Results for 60 undergraduates for both types of illustration indicate different frequencies in the use of learning strategies relevant for the learning outcome. (SLD)
Zhao, Nan; Chen, Wenfeng; Xuan, Yuming; Mehler, Bruce; Reimer, Bryan; Fu, Xiaolan
2014-01-01
The 'looked-but-failed-to-see' phenomenon is crucial to driving safety. Previous research utilising change detection tasks related to driving has reported inconsistent effects of driver experience on the ability to detect changes in static driving scenes. Reviewing these conflicting results, we suggest that drivers' increased ability to detect changes will only appear when the task requires a pattern of visual attention distribution typical of actual driving. By adding a distant fixation point on the road image, we developed a modified change blindness paradigm and measured detection performance of drivers and non-drivers. Drivers performed better than non-drivers only in scenes with a fixation point. Furthermore, experience effect interacted with the location of the change and the relevance of the change to driving. These results suggest that learning associated with driving experience reflects increased skill in the efficient distribution of visual attention across both the central focus area and peripheral objects. This article provides an explanation for the previously conflicting reports of driving experience effects in change detection tasks. We observed a measurable benefit of experience in static driving scenes, using a modified change blindness paradigm. These results have translational opportunities for picture-based training and testing tools to improve driver skill.
An MR-based Model for Cardio-Respiratory Motion Compensation of Overlays in X-Ray Fluoroscopy
Fischer, Peter; Faranesh, Anthony; Pohl, Thomas; Maier, Andreas; Rogers, Toby; Ratnayaka, Kanishka; Lederman, Robert; Hornegger, Joachim
2017-01-01
In X-ray fluoroscopy, static overlays are used to visualize soft tissue. We propose a system for cardiac and respiratory motion compensation of these overlays. It consists of a 3-D motion model created from real-time MR imaging. Multiple sagittal slices are acquired and retrospectively stacked to consistent 3-D volumes. Slice stacking considers cardiac information derived from the ECG and respiratory information extracted from the images. Additionally, temporal smoothness of the stacking is enhanced. Motion is estimated from the MR volumes using deformable 3-D/3-D registration. The motion model itself is a linear direct correspondence model using the same surrogate signals as slice stacking. In X-ray fluoroscopy, only the surrogate signals need to be extracted to apply the motion model and animate the overlay in real time. For evaluation, points are manually annotated in oblique MR slices and in contrast-enhanced X-ray images. The 2-D Euclidean distance of these points is reduced from 3.85 mm to 2.75 mm in MR and from 3.0 mm to 1.8 mm in X-ray compared to the static baseline. Furthermore, the motion-compensated overlays are shown qualitatively as images and videos. PMID:28692969
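The "linear direct correspondence model" maps surrogate signals to motion parameters; a least-squares sketch of that idea follows. The surrogate encoding, training values, and 1-D displacement are invented for illustration (the paper estimates full 3-D deformations from MR, not scalars):

```python
import numpy as np

def fit_correspondence_model(surrogates, motion):
    """Fit a linear direct correspondence model: motion as a linear
    function of surrogate signals (here, cardiac and respiratory phase)
    plus an offset, estimated by least squares. At X-ray time only the
    surrogates are measured; the model then predicts the overlay motion."""
    S = np.column_stack([surrogates, np.ones(len(surrogates))])
    coeffs, *_ = np.linalg.lstsq(S, motion, rcond=None)
    return coeffs

def predict(coeffs, surrogate):
    """Apply the fitted model to a new surrogate sample."""
    return np.append(surrogate, 1.0) @ coeffs

# Hypothetical training data: displacement = 2*cardiac + 0.5*resp + 1 (mm).
S = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([1.0, 3.0, 1.5, 3.5])
coeffs = fit_correspondence_model(S, d)
print(round(float(predict(coeffs, np.array([0.5, 0.5]))), 3))  # → 2.25
```

The real-time property follows from this structure: once the coefficients are fitted offline from the MR-derived motion fields, animating the overlay costs only one small matrix-vector product per frame.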
Effects of contour enhancement on low-vision preference and visual search.
Satgunam, Premnandhini; Woods, Russell L; Luo, Gang; Bronstad, P Matthew; Reynolds, Zachary; Ramachandra, Chaithanya; Mel, Bartlett W; Peli, Eli
2012-09-01
To determine whether image enhancement improves visual search performance and whether enhanced images are also preferred by subjects with vision impairment. Subjects (n = 24) with vision impairment (vision: 20/52 to 20/240) completed visual search and preference tasks for 150 static images that were enhanced to increase the visual saliency of object contours. Subjects were divided into two groups and were shown three enhancement levels. Original and medium enhancements were shown to both groups. High enhancement was shown to group 1, and low enhancement was shown to group 2. For search, subjects pointed to an object that matched a search target displayed at the top left of the screen. An "integrated search performance" measure (area under the curve of cumulative correct response rate over search time) quantified performance. For preference, subjects indicated the preferred side when viewing the same image with different enhancement levels on side-by-side high-definition televisions. Contour enhancement did not improve performance in the visual search task. Group 1 subjects significantly (p < 0.001) rejected the high enhancement and showed no preference for medium enhancement over the original images. Group 2 subjects significantly preferred (p < 0.001) both the medium and low enhancement levels over the original. Contrast sensitivity was correlated with both preference and performance; subjects with worse contrast sensitivity performed worse in the search task (ρ = 0.77, p < 0.001) and preferred more enhancement (ρ = -0.47, p = 0.02). No correlation between visual search performance and enhancement preference was found. However, a small group of subjects (n = 6) in a narrow range of mid-contrast sensitivity performed better with the enhancement, and most (n = 5) also preferred the enhancement. Preferences for image enhancement can be dissociated from search performance in people with vision impairment.
Further investigations are needed to study the relationships between preference and performance for a narrow range of mid-contrast sensitivity where a beneficial effect of enhancement may exist.
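The "integrated search performance" measure above, the area under the cumulative correct-response-rate curve over search time, can be sketched as follows; the trial data, time window and bin count are hypothetical illustrations, not the study's actual parameters.

```python
import numpy as np

def integrated_search_performance(response_times, correct, t_max=20.0, n_bins=100):
    """Area under the cumulative correct-response-rate curve over search time.

    response_times : seconds at which each trial's response occurred
    correct        : boolean array, True where the response was correct
    Returns a value in [0, 1]; higher means faster and more accurate search.
    """
    response_times = np.asarray(response_times, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    t = np.linspace(0.0, t_max, n_bins + 1)
    # Cumulative proportion of all trials answered correctly by time t
    cum_rate = np.array([(correct & (response_times <= ti)).mean() for ti in t])
    # Normalize the area by t_max so an instant, always-correct responder scores 1.0
    return np.trapz(cum_rate, t) / t_max

# Hypothetical data: 4 trials, 3 correct, one response after the time window
auc = integrated_search_performance([1.0, 2.0, 5.0, 30.0],
                                    [True, True, True, False], t_max=10.0)
```

The measure rewards both accuracy (the height the curve reaches) and speed (how early it rises), which matches the paper's description of it as a combined speed-accuracy summary.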
NASA Astrophysics Data System (ADS)
Sowmiya, C.; Kothawala, Ali Arshad; Thittai, Arun K.
2016-04-01
During manual palpation of breast masses, the perception of stiffness and slipperiness are the two pieces of information most commonly used by the physician. In order to obtain this information reliably and quantitatively, several non-invasive elastography techniques have been developed that seek to provide an image of the underlying mechanical properties, mostly stiffness-related. Very few approaches have visualized the "slip" at the lesion-background boundary that occurs only for a loosely bonded benign lesion. It has been shown that the axial-shear strain distribution provides information about underlying slip. One such feature, referred to as "fill-in", was interpreted as a surrogate of the rotation undergone by an asymmetrically oriented, loosely bonded benign lesion under quasi-static compression. However, imaging and direct visualization of the rotation itself have not been addressed yet. To accomplish this, the quality of lateral displacement estimation needs to be improved. In this simulation study, we utilize a spatial compounding approach and assess the feasibility of obtaining a good-quality rotation elastogram. The angular axial and lateral displacement estimates were obtained at different insonification angles from a phantom containing an elliptical inclusion oriented at 45°, subjected to 1% compression from the top. A multilevel 2D block matching algorithm was used for displacement tracking, and 2D least-squares compounding of the angular axial and lateral displacement estimates was employed. By varying the maximum steering angle and the incremental angle, the improvement in lateral motion tracking accuracy and its effect on the quality of the rotation elastogram were evaluated. Results demonstrate a significantly improved rotation elastogram using this technique.
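The core of the multilevel 2D block matching used here for displacement tracking can be illustrated with a single-level sum-of-squared-differences search; the block size, search range and synthetic frames below are assumptions for illustration, not the study's settings.

```python
import numpy as np

def block_match(pre, post, block=8, search=4):
    """Estimate an integer (row, col) displacement per block by minimizing
    the sum of squared differences (SSD) within a small search window.

    pre, post : 2-D arrays (pre- and post-compression frames)
    Returns an array of shape (n_blocks_y, n_blocks_x, 2) of shifts.
    """
    H, W = pre.shape
    ny, nx = H // block, W // block
    disp = np.zeros((ny, nx, 2), dtype=int)
    for by in range(ny):
        for bx in range(nx):
            y0, x0 = by * block, bx * block
            ref = pre[y0:y0 + block, x0:x0 + block]
            best, best_ssd = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + block > H or xs + block > W:
                        continue  # candidate window falls outside the frame
                    cand = post[ys:ys + block, xs:xs + block]
                    ssd = np.sum((ref - cand) ** 2)
                    if ssd < best_ssd:
                        best_ssd, best = ssd, (dy, dx)
            disp[by, bx] = best
    return disp

# Synthetic check: shift a random image down by 2 rows and recover the motion
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, 2, axis=0)
d = block_match(img, shifted, block=8, search=4)
```

A multilevel implementation would repeat this at coarse-to-fine resolutions and refine to subsample precision; compounding then combines such estimates across insonification angles.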
Hatherly, Robert; Brolin, Fredrik; Oldner, Åsa; Sundin, Anders; Lundblad, Henrik; Maguire, Gerald Q; Jonsson, Cathrine; Jacobsson, Hans; Noz, Marilyn E
2014-03-01
Diagnosis of new bone growth in patients with compound tibia fractures or deformities treated using a Taylor spatial frame is difficult with conventional radiography because the frame obstructs the images and creates artifacts. The use of Na(18)F PET studies may help to eliminate this difficulty. Patients were positioned on the pallet of a clinical PET/CT scanner and made as comfortable as possible with their legs immobilized. One bed position covering the site of the fracture, including the Taylor spatial frame, was chosen for the study. A topogram was performed, as well as diagnostic and attenuation correction CT. The patients were given 2 MBq of Na(18)F per kilogram of body weight. A 45-min list-mode acquisition was performed starting at the time of injection, followed by a 5-min static acquisition 60 min after injection. The patients were examined 6 wk after the Taylor spatial frame had been applied and again at 3 mo to assess new bone growth. A list-mode reconstruction sequence of 1 × 1,800 and 1 × 2,700 s, as well as the 5-min static scan, allowed visualization of regional bone turnover. With Na(18)F PET/CT, it was possible to confirm regional bone turnover as a means of visualizing bone remodeling without the interference of artifacts from the Taylor spatial frame. Furthermore, dynamic list-mode acquisition allowed different sequences to be performed, enabling, for example, visualization of tracer transport from blood to the fracture site.
Efficacy Outcome Measures for Clinical Trials of USH2A caused by the Common c.2299delG Mutation.
Calzetti, Giacomo; Levy, Richard A; Cideciyan, Artur V; Garafalo, Alexandra V; Roman, Alejandro J; Sumaroka, Alexander; Charng, Jason; Heon, Elise; Jacobson, Samuel G
2018-06-25
To determine the change in vision and retinal structure in patients with the common c.2299delG mutation in the USH2A gene in anticipation of clinical trials of therapy. Retrospective observational case series. Eighteen patients, homozygotes or compound heterozygotes with the c.2299delG mutation in USH2A, were studied with visual acuity, kinetic perimetry, dark- and light-adapted two-color static perimetry, optical coherence tomography (OCT) and autofluorescence (AF) imaging. Serial data were available for at least half of the patients, depending on the parameter analyzed. The kinetics of disease progression in this specific molecular form of USH2A differed between the measured parameters. Visual acuity could remain normal for decades. Kinetic and light-adapted static perimetry across the entire visual field had similar rates of decline that were slower than those of rod-based perimetry. Horizontal OCT scans through the macula showed that the IS/OS line width had a similar rate of constriction as co-localized AF imaging and the cone-based light-adapted sensitivity extent. The rate of constriction of the rod-based sensitivity extent across this same region was twice as rapid as that of cones. In patients with the c.2299delG mutation in USH2A, rod photoreceptors are the cells that express disease earlier and more aggressively than cones. Rod-based vision measurements in central or extracentral-peripheral retinal regions warrant monitoring in order to complete a clinical trial in a timely manner. Copyright © 2018. Published by Elsevier Inc.
Static hand gesture recognition from a video
NASA Astrophysics Data System (ADS)
Rokade, Rajeshree S.; Doye, Dharmpal
2011-10-01
A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning, simultaneously combining hand shapes, orientation and movement of the hands. Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for the recognition of static hand gestures from a video, based on a Kohonen neural network. We propose an algorithm to separate out key frames, which contain correct gestures, from a video sequence. We segment hand images from complex and non-uniform backgrounds. Features are extracted by applying the Kohonen network to the key frames, and recognition is then performed.
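The Kohonen network at the heart of the proposed recognizer is a self-organizing map (SOM); a minimal training sketch follows, where the grid size, learning schedule and toy data are hypothetical, not the paper's configuration.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map (Kohonen network).

    data : (n_samples, n_features) array of feature vectors
    Returns the learned weight grid of shape (grid[0], grid[1], n_features).
    """
    rng = np.random.default_rng(seed)
    gy, gx = grid
    w = rng.random((gy, gx, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(gy), np.arange(gx), indexing="ij"))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
        for x in rng.permutation(data):
            # Best-matching unit: grid node whose weights are closest to x
            d2 = np.sum((w - x) ** 2, axis=2)
            bmu = np.unravel_index(np.argmin(d2), d2.shape)
            # Gaussian neighborhood pulls nearby nodes toward the sample
            g2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            h = np.exp(-g2 / (2 * sigma ** 2))[..., None]
            w += lr * h * (x - w)
    return w

# Toy usage: two well-separated clusters should claim distinct grid nodes
pts = np.vstack([np.zeros((20, 3)), np.ones((20, 3))])
weights = train_som(pts, grid=(4, 4), epochs=10)
```

In a gesture pipeline, each segmented hand image would be reduced to a feature vector, and the index of its best-matching unit on the trained map would serve as the extracted feature for classification.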
Magnetic resonance imaging differential diagnosis of brainstem lesions in children
Quattrocchi, Carlo Cosimo; Errante, Yuri; Rossi Espagnet, Maria Camilla; Galassi, Stefania; Della Sala, Sabino Walter; Bernardi, Bruno; Fariello, Giuseppe; Longo, Daniela
2016-01-01
Differential diagnosis of brainstem lesions, either isolated or in association with cerebellar and supra-tentorial lesions, can be challenging. Knowledge of the structural organization is crucial for the differential diagnosis and establishment of prognosis of pathologies with involvement of the brainstem. Familiarity with the location of lesions in the brainstem is essential, especially in the pediatric population. Magnetic resonance imaging (MRI) is the most sensitive and specific imaging technique for diagnosing disorders of the posterior fossa and, particularly, the brainstem. High static magnetic field MRI allows detailed visualization of the morphology, signal intensity and metabolic content of the brainstem nuclei, together with visualization of normal development and myelination. In this pictorial essay we review brainstem pathology in pediatric patients and consider the MR imaging patterns that may help the radiologist to differentiate among vascular, toxico-metabolic, infective-inflammatory, degenerative and neoplastic processes. Helpful MR tips can guide the differential diagnosis: these include the location and morphology of lesions, the brainstem vascularization territories, gray and white matter distribution and tissue-selective vulnerability. PMID:26834941
Exploring responses to art in adolescence: a behavioral and eye-tracking study.
Savazzi, Federica; Massaro, Davide; Di Dio, Cinzia; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella
2014-01-01
Adolescence is a peculiar age mainly characterized by physical and psychological changes that may affect the perception of one's own and others' body. This perceptual peculiarity may influence the way in which bottom-up and top-down processes interact and, consequently, the perception and evaluation of art. This study is aimed at investigating, by means of the eye-tracking technique, the visual explorative behavior of adolescents while looking at paintings. Sixteen color paintings, categorized as dynamic and static, were presented to twenty adolescents; half of the images represented natural environments and half human individuals; all stimuli were displayed under aesthetic and movement judgment tasks. Participants' ratings revealed that, generally, nature images are explicitly evaluated as more appealing than human images. Eye movement data, on the other hand, showed that the human body exerts a strong power in orienting and attracting visual attention and that, in adolescence, it plays a fundamental role during aesthetic experience. In particular, adolescents seem to approach human-content images by giving priority to elements calling forth movement and action, supporting the embodiment theory of aesthetic perception.
Clinical approach to optic neuropathies
Behbehani, Raed
2007-01-01
Optic neuropathy is a frequent cause of vision loss encountered by ophthalmologists. The diagnosis is made on clinical grounds. The history often points to the possible etiology of the optic neuropathy. A rapid onset is typical of demyelinating, inflammatory, ischemic and traumatic causes. A gradual course points to compressive, toxic/nutritional and hereditary causes. The classic clinical signs of optic neuropathy are a visual field defect, dyschromatopsia, and an abnormal pupillary response. There are ancillary investigations that can support the diagnosis of optic neuropathy. Visual field testing, by either manual kinetic or automated static perimetry, is critical in the diagnosis. Neuro-imaging of the brain and orbit is essential in many optic neuropathies, including demyelinating and compressive ones. Newer technologies in the evaluation of optic neuropathies include multifocal visual evoked potentials and optical coherence tomography. PMID:19668477
Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.
Holub, Joseph; Winer, Eliot
2017-12-01
Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Combined with the rapid adoption of mobile devices for everyday work, this creates a need to visualize fMRI data on tablets or smartphones. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
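The volume raycasting the prototype performs reduces, in its simplest orthographic form, to front-to-back alpha compositing of samples along each viewing ray. The CPU sketch below uses a hypothetical linear transfer function and does not reproduce the paper's GPU/iOS implementation.

```python
import numpy as np

def raycast(volume, step=1.0, opacity_scale=0.05):
    """Orthographic front-to-back raycasting along the z axis.

    volume : (Z, Y, X) array of scalar intensities in [0, 1]
    Returns a (Y, X) grayscale image and the accumulated alpha per pixel.
    """
    Z, Y, X = volume.shape
    color = np.zeros((Y, X))
    alpha = np.zeros((Y, X))
    for z in range(Z):                       # march front to back
        sample = volume[z]
        # Simple linear transfer function: intensity -> per-step opacity
        a = np.clip(sample * opacity_scale * step, 0.0, 1.0)
        c = sample                           # grayscale "emission"
        # Front-to-back compositing: nearer samples occlude later ones
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if np.all(alpha > 0.99):             # early ray termination
            break
    return color, alpha

vol = np.zeros((16, 8, 8))
vol[4:8, 2:6, 2:6] = 1.0                     # a bright cube inside the volume
img, acc = raycast(vol)
```

A 4D (fMRI) renderer repeats this per time step; the sampling interval (`step`) trades image quality against frame rate, matching the 20-40 fps dependence the abstract reports.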
NASA Technical Reports Server (NTRS)
Sitterley, T. E.
1974-01-01
The effectiveness of an improved static retraining method was evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Experienced pilots were trained and then tested after 4 months without flying to compare their performance using the improved method with three methods previously evaluated. Use of the improved static retraining method resulted in no practical or significant skill degradation and was found to be even more effective than methods using a dynamic presentation of visual cues. The results suggested that properly structured open-loop methods of flight control task retraining are feasible.
Comparison of User Performance with Interactive and Static 3D Visualization - Pilot Study
NASA Astrophysics Data System (ADS)
Herman, L.; Stachoň, Z.
2016-06-01
Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of the studies. The main objective of this paper is to identify potential differences in user performance with static perspective views and interactive visualizations. This research is an exploratory study. An experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used. The testing set consists of an initial questionnaire, a training task and four experimental tasks. Selection of the highest point and determination of visibility from the top of a mountain were used as the experimental tasks. The speed and accuracy of each participant's task performance were recorded. Movement and actions in the virtual environment were also recorded in the interactive variant. The results show that participants dealt with the tasks faster when using static visualization, although the average error rate was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.
Seeing Fluid Physics via Visual Expertise Training
NASA Astrophysics Data System (ADS)
Hertzberg, Jean; Goodman, Katherine; Curran, Tim
2016-11-01
In a course on Flow Visualization, students often expressed that their perception of fluid flows had increased, implying the acquisition of a type of visual expertise akin to that of radiologists or dog show judges. In the first steps towards measuring this expertise, we emulated an experimental design from psychology. The study had two groups of participants: "novices" with no formal fluids education, and "experts" who had passed at least one fluid mechanics course. All participants were trained to place static images of fluid flows into two categories (laminar and turbulent). Half the participants were trained on flow images with a specific format (von Kármán vortex streets), and the other half on a broader group. Novices' results were in line with past perceptual expertise studies, showing that it is easier to transfer learning from a broad category to a new specific format than vice versa. In contrast, experts did not show a significant difference between training conditions, suggesting the experts did not undergo the same learning process as the novices. We theorize that expert subjects were able to access their conceptual knowledge about fluids to perform this new, visual task. This finding supports new ways of understanding conceptual learning.
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while they study a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking at the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood that the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Both hand position and movement direction modulate visual attention
Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.
2013-01-01
The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward, the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288
Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision.
Wen, Haiguang; Shi, Junxing; Zhang, Yizhen; Lu, Kun-Han; Cao, Jiayue; Liu, Zhongming
2017-10-20
A convolutional neural network (CNN) driven by image recognition has been shown to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such a CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical locations; cortical activation was synthesized from natural images at high throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
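Encoding models of the kind described are commonly linear regressions from CNN-derived features to each voxel's response; the ridge-regression sketch below uses synthetic data and is an assumption-laden illustration, not the authors' exact pipeline.

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Ridge regression from stimulus features to voxel responses.

    features  : (n_timepoints, n_features) CNN-derived regressors
    responses : (n_timepoints, n_voxels) fMRI signals
    Returns weights of shape (n_features, n_voxels).
    """
    F, R = features, responses
    n = F.shape[1]
    # Closed-form ridge solution: (F'F + alpha*I)^-1 F'R
    return np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ R)

# Synthetic check: responses generated from known weights are recovered
rng = np.random.default_rng(1)
F = rng.standard_normal((200, 10))        # 200 time points, 10 CNN features
W_true = rng.standard_normal((10, 3))     # 3 synthetic "voxels"
R = F @ W_true + 0.01 * rng.standard_normal((200, 3))
W_hat = fit_encoding_model(F, R, alpha=0.1)
```

A decoding model inverts the direction, regressing feature values on voxel patterns; prediction accuracy on held-out data is then the usual evaluation criterion.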
Volumetric 3D Display System with Static Screen
NASA Technical Reports Server (NTRS)
Geng, Jason
2011-01-01
Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume, and physically places each 3D voxel of the displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to human visual systems to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° range of viewpoints without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen seen in previous approaches, so there is no image jitter, and it provides an inherent parallel mechanism for 3D voxel addressing. High spatial resolution is possible, and a full-color display is easy to implement. The system is low-cost and low-maintenance.
Khramtsova, Ekaterina A; Stranger, Barbara E
2017-02-01
Over the last decade, genome-wide association studies (GWAS) have generated vast amounts of analysis results, requiring the development of novel tools for data visualization. Quantile–quantile (QQ) plots and Manhattan plots are classical tools which have been utilized to visually summarize GWAS results and identify genetic variants significantly associated with traits of interest. However, static visualizations are limited in the information that can be shown. Here, we present Assocplots, a Python package for viewing and exploring GWAS results not only using classic static Manhattan and QQ plots, but also through a dynamic extension which allows users to interactively visualize the relationships between GWAS results from multiple cohorts or studies. The Assocplots package is open source and distributed under the MIT license via GitHub (https://github.com/khramts/assocplots) along with examples, documentation and installation instructions. ekhramts@medicine.bsd.uchicago.edu or bstranger@medicine.bsd.uchicago.edu
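A static Manhattan plot of the kind Assocplots produces places each variant at a cumulative genomic x-coordinate with -log10(p) on the y-axis; the sketch below shows only that coordinate computation, independently of the Assocplots API (which is documented at the repository above), with toy data.

```python
import numpy as np

def manhattan_coords(chrom, pos, pvals):
    """Compute x, y coordinates for a Manhattan plot.

    chrom : integer chromosome label per variant
    pos   : base-pair position within its chromosome
    pvals : association p-values
    Returns (x, y, offsets): x is the cumulative genomic position,
    y = -log10(p), and offsets maps chromosome -> x offset.
    """
    chrom = np.asarray(chrom)
    pos = np.asarray(pos, dtype=float)
    y = -np.log10(np.asarray(pvals, dtype=float))
    offsets, running = {}, 0.0
    for c in sorted(set(chrom.tolist())):
        offsets[c] = running
        running += pos[chrom == c].max()  # stack chromosomes end to end
    x = np.array([offsets[c] for c in chrom]) + pos
    return x, y, offsets

# Toy data: two chromosomes, one genome-wide significant hit (p = 5e-8)
chrom = [1, 1, 2, 2]
pos = [100, 500, 50, 300]
x, y, off = manhattan_coords(chrom, pos, [0.5, 5e-8, 0.01, 1e-3])
```

The (x, y) pairs can then be passed to any scatter-plot routine, with a horizontal line at -log10(5e-8) marking the conventional genome-wide significance threshold.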
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinert, Marian; Kratz, Marita; Jones, David B.
2014-10-15
In this paper, we present a system that allows imaging of cartilage tissue via optical coherence tomography (OCT) during controlled uniaxial unconfined compression of cylindrical osteochondral cores in vitro. We describe the system design and conduct a static and dynamic performance analysis. While reference measurements yield a full scale maximum deviation of 0.14% in displacement, force can be measured with a full scale standard deviation of 1.4%. The dynamic performance evaluation indicates a high accuracy in force controlled mode up to 25 Hz, but it also reveals a strong effect of variance of sample mechanical properties on the tracking performance under displacement control. In order to counterbalance these disturbances, an adaptive feed forward approach was applied which finally resulted in an improved displacement tracking accuracy up to 3 Hz. A built-in imaging probe allows on-line monitoring of the sample via OCT while being loaded in the cultivation chamber. We show that cartilage topology and defects in the tissue can be observed and demonstrate the visualization of the compression process during static mechanical loading.
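The adaptive feed-forward correction described can be sketched as iterative learning control over a repeating displacement cycle: the next cycle's feed-forward command is corrected by a fraction of the previous cycle's tracking error. The plant model and gains below are hypothetical, not the system's identified dynamics.

```python
import numpy as np

def ilc_track(reference, plant_gain=0.8, learn_gain=0.5, cycles=25):
    """Iterative learning control for a repeating displacement profile.

    Each cycle, the feed-forward command is corrected by a fraction of the
    previous cycle's tracking error, compensating an unknown plant gain.
    Returns the RMS tracking error per cycle.
    """
    ref = np.asarray(reference, dtype=float)
    u = ref.copy()                 # initial feed-forward: the reference itself
    errors = []
    for _ in range(cycles):
        y = plant_gain * u         # simplified static plant response
        e = ref - y                # tracking error over the cycle
        errors.append(np.sqrt(np.mean(e ** 2)))
        u = u + learn_gain * e     # feed-forward update for the next cycle
    return np.array(errors)

ref = np.sin(np.linspace(0, 2 * np.pi, 100))  # periodic displacement profile
err = ilc_track(ref)
```

Because the loading profile repeats cycle after cycle, the error contracts geometrically (here by the factor 1 - plant_gain * learn_gain per cycle), which is why such schemes can restore displacement tracking accuracy despite sample-to-sample variation in stiffness.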
ERIC Educational Resources Information Center
Smorenburg, Ana R. P.; Ledebt, Annick; Deconinck, Frederik J. A.; Savelsbergh, Geert J. P.
2011-01-01
This study examined the active joint-position sense in children with Spastic Hemiparetic Cerebral Palsy (SHCP) and the effect of static visual feedback and static mirror visual feedback, of the non-moving limb, on the joint-position sense. Participants were asked to match the position of one upper limb with that of the contralateral limb. The task…
Nowomiejska, Katarzyna; Oleszczuk, Agnieszka; Zubilewicz, Anna; Krukowski, Jacek; Mańkowska, Anna; Rejdak, Robert; Zagórski, Zbigniew
2007-01-01
To compare the visual field results obtained by static perimetry, microperimetry and rarebit perimetry in patients suffering from dry age-related macular degeneration (AMD). Fifteen eyes with dry AMD (hard or soft macular drusen and RPE disorders) were enrolled into the study. Static perimetry was performed using the M2 macula program included in the Octopus 101 instrument. Microperimetry was performed using the macula program (14-2 threshold, 10 dB) within 10 degrees of the central visual field. The fovea program within 4 degrees was used while performing rarebit perimetry. The mean sensitivity was significantly lower (p<0.001) during microperimetry (13.5 dB) compared to static perimetry (26.7 dB). The mean deviation was significantly higher (p<0.001) during microperimetry (-6.32 dB) compared to static perimetry (-3.11 dB). The fixation was unstable in 47% and eccentric in 40% while performing microperimetry. The median of the "mean hit rate" in rarebit perimetry was 90% (range 40-100%). The mean examination duration was 6.5 min in static perimetry, 10.6 min in microperimetry and 5.5 min in rarebit perimetry (p<0.001). Sensitivity was 30%, 53% and 93%, respectively. The visual field defects obtained by microperimetry were more pronounced than those obtained by static perimetry. Microperimetry was the most sensitive procedure, although the most time-consuming. Microperimetry enables control of the fixation position and stability, which is not possible using the remaining methods. Rarebit perimetry revealed a slight reduction of the integrity of the neural architecture of the retina. Microperimetry and rarebit perimetry provide more information regarding visual function than static perimetry and are thus valuable methods in the diagnosis of dry AMD.
Quality metrics for sensor images
NASA Technical Reports Server (NTRS)
Ahumada, AL
1993-01-01
Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. 
Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
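The core idea of a discrimination-based quality metric (pool the visible differences between two images, then map the pooled difference to a probability of detection) can be sketched as follows. This is a minimal illustration, not the NASA Ames model: the Minkowski pooling exponent and Weibull threshold are arbitrary placeholder values, and a real model would first pass both images through calibrated spatial or spatio-temporal filters.

```python
import numpy as np

def discrimination_probability(img_a, img_b, beta=3.5, threshold=0.05):
    """Toy quality metric: probability an observer reports A != B.

    Pools pixel differences with a Minkowski exponent 'beta', then maps
    the pooled difference through a Weibull psychometric function.
    'beta' and 'threshold' are illustrative, not calibrated values.
    """
    a = img_a.astype(float) / 255.0
    b = img_b.astype(float) / 255.0
    diff = np.abs(a - b).ravel()
    # Minkowski pooling of local differences into a single scalar
    pooled = np.mean(diff ** beta) ** (1.0 / beta)
    # Weibull psychometric function: p = 0.5 at 'threshold', saturating at 1
    return 1.0 - 0.5 ** ((pooled / threshold) ** 2)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(32, 32))
identical = discrimination_probability(a, a)        # identical images
different = discrimination_probability(a, 255 - a)  # inverted image
```

With identical inputs the pooled difference is zero and the predicted probability is 0; a grossly different image drives the prediction toward 1.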
Cardiac CT for myocardial ischaemia detection and characterization--comparative analysis.
Bucher, A M; De Cecco, C N; Schoepf, U J; Wang, R; Meinel, F G; Binukrishnan, S R; Spearman, J V; Vogl, T J; Ruzsics, B
2014-11-01
The assessment of patients presenting with symptoms of myocardial ischaemia remains one of the most common and challenging clinical scenarios faced by physicians. Current imaging modalities are capable of three-dimensional, functional and anatomical views of the heart and as such offer a unique contribution to understanding and managing the pathology involved. Evidence has accumulated that visual anatomical coronary evaluation does not adequately predict haemodynamic relevance and should be complemented by physiological evaluation, highlighting the importance of functional assessment. Technical advances in CT technology over the past decade have progressively moved cardiac CT imaging into the clinical workflow. In addition to anatomical evaluation, cardiac CT is capable of providing myocardial perfusion parameters. A variety of CT techniques can be used to assess myocardial perfusion. Single-energy and dual-energy first-pass CT allow static assessment of the myocardial blood pool. Dynamic cardiac CT imaging allows quantification of myocardial perfusion through time-resolved attenuation data. CT-based myocardial perfusion imaging (MPI) is showing promising diagnostic accuracy compared with the current reference modalities. The aim of this review is to present currently available myocardial perfusion techniques with a focus on CT imaging in light of recent clinical investigations. This article provides a comprehensive overview of currently available CT approaches to static and dynamic MPI and presents the results of corresponding clinical trials.
Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung
2016-09-01
Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound of static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. They were scheduled to undergo biopsy or surgery or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for any descriptor, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. After receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had higher areas under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08), although the difference was not statistically significant. Inter-observer variability and diagnostic performance of video images were similar to those of static images on breast ultrasonography according to the new edition of BI-RADS. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Visualizing risks in cancer communication: A systematic review of computer-supported visual aids.
Stellamanns, Jan; Ruetters, Dana; Dahal, Keshav; Schillmoeller, Zita; Huebner, Jutta
2017-08-01
Health websites are becoming important sources for cancer information. Lay users, patients and carers seek support for critical decisions, but they are prone to common biases when quantitative information is presented. Graphical representations of risk data can facilitate comprehension, and interactive visualizations are popular. This review summarizes the evidence on computer-supported graphs that present risk data and their effects on various measures. The systematic literature search was conducted in several databases, including MEDLINE, EMBASE and CINAHL. Only studies with a controlled design were included. Relevant publications were carefully selected and critically appraised by two reviewers. Thirteen studies were included. Ten studies evaluated static graphs and three evaluated dynamic formats. Most decision scenarios were hypothetical. Static graphs could improve accuracy, comprehension and behavioural intention, but the results were heterogeneous and inconsistent among the studies. Dynamic formats were not superior to static formats and in some cases even impaired performance. Static graphs show promising but inconsistent results, while research on dynamic visualizations is scarce and must be interpreted cautiously due to methodological limitations. Well-designed and context-specific static graphs can support web-based cancer risk communication in particular populations. The application of dynamic formats cannot be recommended and needs further research. Copyright © 2017 Elsevier B.V. All rights reserved.
Visual search for facial expressions of emotions: a comparison of dynamic and static faces.
Horstmann, Gernot; Ansorge, Ulrich
2009-02-01
A number of past studies have used the visual search paradigm to examine whether certain aspects of emotional faces are processed preattentively and can thus be used to guide attention. All these studies presented static depictions of facial prototypes. Emotional expressions conveyed by the movement patterns of the face have never been examined for their preattentive effect. The present study presented for the first time dynamic facial expressions in a visual search paradigm. Experiment 1 revealed efficient search for a dynamic angry face among dynamic friendly faces, but inefficient search in a control condition with static faces. Experiments 2 to 4 suggested that this pattern of results is due to a stronger movement signal in the angry than in the friendly face: No (strong) advantage of dynamic over static faces is revealed when the degree of movement is controlled. These results show that dynamic information can be efficiently utilized in visual search for facial expressions. However, these results do not generally support the hypothesis that emotion-specific movement patterns are always preattentively discriminated. (c) 2009 APA, all rights reserved
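The efficient-versus-inefficient distinction in visual search is conventionally quantified as the slope of reaction time against display set size: near-flat slopes indicate "pop-out" (preattentive) search, steep slopes indicate serial search. A hedged sketch with invented reaction times:

```python
import numpy as np

def search_slope(set_sizes, reaction_times):
    """Least-squares slope (ms/item) and intercept of RT vs. set size.

    By convention, slopes near zero (< ~10 ms/item) are read as
    'efficient' search; steep slopes indicate inefficient, serial search.
    """
    x = np.asarray(set_sizes, dtype=float)
    y = np.asarray(reaction_times, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Hypothetical data: a pop-out target vs. an inefficient search
efficient, _ = search_slope([4, 8, 12, 16], [520, 525, 528, 532])
inefficient, _ = search_slope([4, 8, 12, 16], [560, 700, 840, 980])
```

Here the first condition yields a slope of about 1 ms/item (efficient), the second exactly 35 ms/item (inefficient).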
Intercepting a sound without vision
Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica
2017-01-01
Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by comparing localization accuracy in a static head condition and a dynamic condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and only a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939
New three-dimensional visualization system based on angular image differentiation
NASA Astrophysics Data System (ADS)
Montes, Juan D.; Campoy, Pascual
1995-03-01
This paper presents a new auto-stereoscopic system capable of reproducing static or moving 3D images by projection, with horizontal parallax or with both horizontal and vertical parallax. The working principle is based on the angular differentiation of the images, which are projected onto the back side of the new patented screen. The most important features of this new system are: (1) Images can be seen by the naked eye, without glasses or any other aid. (2) The 3D view angle is not restricted by the angle of the optics making up the screen. (3) Fine tuning is not necessary, independently of the parallax and of the size of the 3D view angle. (4) Coherent light is necessary neither for capturing the image nor for its reproduction; standard cameras and projectors suffice. (5) Since the images are projected, the size and depth of the reproduced scene are unrestricted. (6) Manufacturing cost is not excessive, owing to the use of optics of large focal length, the lack of fine tuning, and the use of the same screen for several reproduction systems. (7) The technology can be used with any projection system: slides, movies, TV projectors, etc. A first prototype for static images has been developed and tested, with a 3D view angle of 90 degrees and photographic resolution over a planar screen of 900 mm diagonal length. Recent developments have achieved a dramatic reduction in the size and cost of the projection system. In parallel, work has been carried out on a prototype for moving 3D images.
Natural image sequences constrain dynamic receptive fields and imply a sparse code.
Häusler, Chris; Susemihl, Alex; Nawrot, Martin P
2013-11-06
In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multivariate benchmark dataset, this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
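The sparseness of a unit's activation, of the kind this work aims to reproduce, is often quantified with the Vinje-Gallant lifetime sparseness index. A small illustration with invented response vectors (this is a standard statistic, not code from the paper):

```python
import numpy as np

def lifetime_sparseness(responses):
    """Vinje-Gallant sparseness index in [0, 1] for a response vector.

    Returns 0 for a dense (uniform) response and approaches 1 when only
    a few stimuli drive the unit. S = (1 - (mean r)^2 / mean(r^2)) / (1 - 1/n).
    """
    r = np.asarray(responses, dtype=float)
    n = r.size
    return (1 - (r.mean() ** 2) / np.mean(r ** 2)) / (1 - 1.0 / n)

dense = lifetime_sparseness(np.ones(100))                # uniform firing
sparse = lifetime_sparseness(np.r_[10.0, np.zeros(99)])  # one-hot firing
```

The uniform vector scores 0, the one-hot vector scores 1, matching the intuition that a sparse code concentrates activity in few units or time bins.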
VFMA: Topographic Analysis of Sensitivity Data From Full-Field Static Perimetry
Weleber, Richard G.; Smith, Travis B.; Peters, Dawn; Chegarnov, Elvira N.; Gillespie, Scott P.; Francis, Peter J.; Gardiner, Stuart K.; Paetzold, Jens; Dietzsch, Janko; Schiefer, Ulrich; Johnson, Chris A.
2015-01-01
Purpose: To analyze static visual field sensitivity with topographic models of the hill of vision (HOV), and to characterize several visual function indices derived from the HOV volume. Methods: A software application, Visual Field Modeling and Analysis (VFMA), was developed for static perimetry data visualization and analysis. Three-dimensional HOV models were generated for 16 healthy subjects and 82 retinitis pigmentosa patients. Volumetric visual function indices, which are measures of quantity and comparable regardless of perimeter test pattern, were investigated. Cross-validation, reliability, and cross-sectional analyses were performed to assess this methodology and compare the volumetric indices to conventional mean sensitivity and mean deviation. Floor effects were evaluated by computer simulation. Results: Cross-validation yielded an overall R2 of 0.68 and index of agreement of 0.89, which were consistent among subject groups, indicating good accuracy. Volumetric and conventional indices were comparable in terms of test–retest variability and discriminability among subject groups. Simulated floor effects did not negatively impact the repeatability of any index, but large floor changes altered the discriminability for regional volumetric indices. Conclusions: VFMA is an effective tool for clinical and research analyses of static perimetry data. Topographic models of the HOV aid the visualization of field defects, and topographically derived indices quantify the magnitude and extent of visual field sensitivity. Translational Relevance: VFMA assists with the interpretation of visual field data from any perimetric device and any test location pattern. Topographic models and volumetric indices are suitable for diagnosis, monitoring of field loss, patient counseling, and endpoints in therapeutic trials. PMID:25938002
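The volumetric indices above are volumes under the modeled hill of vision. As a rough illustration of the idea (not VFMA's actual surface fitting, which interpolates a smooth topographic model), one can treat each static-perimetry test location as a flat cell and sum sensitivity times cell area; the grid spacing and sensitivity values below are hypothetical:

```python
import numpy as np

def hov_volume(sensitivities_db, spacing_deg=6.0):
    """Approximate volume under the hill of vision, in dB*deg^2.

    Treats each test location as a flat spacing_deg x spacing_deg cell
    and sums sensitivity x area -- a crude Riemann-sum stand-in for the
    3-D surface models fitted by tools like VFMA. Sensitivities below
    the 0 dB floor are clipped.
    """
    s = np.clip(np.asarray(sensitivities_db, dtype=float), 0, None)
    return float(s.sum() * spacing_deg ** 2)

# Hypothetical 76-location field at 30 dB vs. a field with half the
# locations at the 0 dB floor, mimicking severe peripheral loss
healthy = hov_volume(np.full(76, 30.0))
rp_like = hov_volume(np.r_[np.full(38, 30.0), np.zeros(38)])
```

Because the index is a quantity (a volume) rather than an average, it remains comparable across different perimeter test patterns, which is the property the paper emphasizes.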
Progress in high-level exploratory vision
NASA Astrophysics Data System (ADS)
Brand, Matthew
1993-08-01
We have been exploring the hypothesis that vision is an explanatory process, in which causal and functional reasoning about potential motion plays an intimate role in mediating the activity of low-level visual processes. In particular, we have explored two of the consequences of this view for the construction of purposeful vision systems: Causal and design knowledge can be used to (1) drive focus of attention, and (2) choose between ambiguous image interpretations. An important result of visual understanding is an explanation of the scene's causal structure: How action is originated, constrained, and prevented, and what will happen in the immediate future. In everyday visual experience, most action takes the form of motion, and most causal analysis takes the form of dynamical analysis. This is even true of static scenes, where much of a scene's interest lies in how possible motions are arrested. This paper describes our progress in developing domain theories and visual processes for the understanding of various kinds of structured scenes, including structures built out of children's constructive toys and simple mechanical devices.
Dynamic visual noise reduces confidence in short-term memory for visual information.
Kemps, Eva; Andrade, Jackie
2012-05-01
Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.
Ranging through Gabor logons-a consistent, hierarchical approach.
Chang, C; Chatterjee, S
1993-01-01
In this work, the correspondence problem in stereo vision is handled by matching two sets of dense feature vectors. Inspired by biological evidence, these feature vectors are generated by a correlation between a bank of Gabor sensors and the intensity image. The sensors consist of two-dimensional Gabor filters at various scales (spatial frequencies) and orientations, which bear close resemblance to the receptive field profiles of simple V1 cells in visual cortex. A hierarchical, stochastic relaxation method is then used to obtain the dense stereo disparities. Unlike traditional hierarchical methods for stereo, feature based hierarchical processing yields consistent disparities. To avoid false matchings due to static occlusion, a dual matching, based on the imaging geometry, is used.
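A minimal sketch of the kind of Gabor filter bank described, a Gaussian envelope times a sinusoidal carrier at several scales and orientations; the kernel size, wavelengths, and the naive correlation loop are illustrative choices, not the authors' implementation:

```python
import numpy as np

def correlate2d_same(image, kernel):
    """Naive same-size 2-D correlation with zero padding (sketch only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)))
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D even-phase (cosine) Gabor filter: Gaussian envelope times a
    cosine carrier, resembling simple-cell receptive field profiles."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate into filter frame
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(image, scales=(4.0, 8.0), orientations=4, size=15):
    """Dense per-pixel feature vectors: responses to a small Gabor bank
    at several scales (wavelengths) and orientations."""
    thetas = [k * np.pi / orientations for k in range(orientations)]
    responses = [correlate2d_same(image, gabor_kernel(size, wl, th, sigma=wl / 2))
                 for wl in scales for th in thetas]
    return np.stack(responses, axis=-1)
```

Matching would then compare these feature vectors between the left and right images along epipolar lines; an FFT-based correlation would replace the naive loop in practice.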
Matching voice and face identity from static images.
Mavica, Lauren W; Barenholtz, Elan
2013-04-01
Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two, same-gender models, while simultaneously listening to a voice recording of one of the models pictured in the photographs, and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli only contained cropped faces and the auditory stimuli were full sentences; (c) we used the same pictures as in Condition (a), but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition (a) of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models' faces and voices along multiple "physical" dimensions (e.g., weight) or "personality" dimensions (e.g., extroversion); the degree of agreement between the ratings for each model's face and voice was compared to performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.
Orientation selectivity sharpens motion detection in Drosophila
Fisher, Yvette E.; Silies, Marion; Clandinin, Thomas R.
2015-01-01
Detecting the orientation and movement of edges in a scene is critical to visually guided behaviors of many animals. What are the circuit algorithms that allow the brain to extract such behaviorally vital visual cues? Using in vivo two-photon calcium imaging in Drosophila, we describe direction selective signals in the dendrites of T4 and T5 neurons, detectors of local motion. We demonstrate that this circuit performs selective amplification of local light inputs, an observation that constrains motion detection models and confirms a core prediction of the Hassenstein-Reichardt Correlator (HRC). These neurons are also orientation selective, responding strongly to static features that are orthogonal to their preferred axis of motion, a tuning property not predicted by the HRC. This coincident extraction of orientation and direction sharpens directional tuning through surround inhibition and reveals a striking parallel between visual processing in flies and vertebrate cortex, suggesting a universal strategy for motion processing. PMID:26456048
Cytoscape tools for the web age: D3.js and Cytoscape.js exporters
Ono, Keiichiro; Demchak, Barry; Ideker, Trey
2014-01-01
In this paper we present new data export modules for Cytoscape 3 that can generate network files for Cytoscape.js and D3.js. Cytoscape.js exporter is implemented as a core feature of Cytoscape 3, and D3.js exporter is available as a Cytoscape 3 app. These modules enable users to seamlessly export network and table data sets generated in Cytoscape to popular JavaScript library readable formats. In addition, we implemented template web applications for browser-based interactive network visualization that can be used as basis for complex data visualization applications for bioinformatics research. Example web applications created with these tools demonstrate how Cytoscape works in modern data visualization workflows built with traditional desktop tools and emerging web-based technologies. This interactivity enables researchers more flexibility than with static images, thereby greatly improving the quality of insights researchers can gain from them. PMID:25520778
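For readers unfamiliar with the target format, a Cytoscape.js network is JSON whose elements carry per-node and per-edge `data` objects. A minimal sketch of such an export (the gene names are hypothetical, and this is an illustration of the format, not the exporter's actual code):

```python
import json

def to_cytoscape_js(nodes, edges):
    """Serialize a simple graph to Cytoscape.js elements JSON.

    Each node and edge is wrapped in a 'data' object; edges reference
    node ids via 'source' and 'target'. A sketch of the kind of output
    the Cytoscape 3 exporter produces, not its implementation.
    """
    elements = {
        "nodes": [{"data": {"id": n}} for n in nodes],
        "edges": [
            {"data": {"id": f"{s}-{t}", "source": s, "target": t}}
            for s, t in edges
        ],
    }
    return json.dumps({"elements": elements}, indent=2)

# Hypothetical two-gene interaction network
doc = to_cytoscape_js(["TP53", "MDM2"], [("TP53", "MDM2")])
```

A web page would then pass the parsed `elements` object to the Cytoscape.js constructor to render an interactive view of the network.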
How static media is understood and used by high school science teachers
NASA Astrophysics Data System (ADS)
Hirata, Miguel
The purpose of the present study is to explore the role of static media in textbooks, as defined by Mayer (2001) in the form of printed images and text, and how these media are viewed and used by high school science teachers. Textbooks appeared in the United States in the late 1800s, and since then pictorial aids have been used extensively in them to support the teacher's work in the classroom (Giordano, 2003). According to Woodward, Elliott, and Nagel (1988/2013), research on textbooks prior to the 1970s does not include relevant work on the curricular role or on the quality and instructional design of textbooks. Since then there has been abundant research, especially on the use of visual images in textbooks, which has been approached from: (a) the text/image ratio (Evans, Watson, & Willows, 1987; Levin & Mayer, 1993; Mayer, 1993; Woodward, 1993), and (b) the instructional effectiveness of images (Woodward, 1993). The theoretical framework for this study comes from multimedia learning (Mayer, 2001), information design (Pettersson, 2002), and visual literacy (Moore & Dwyer, 1994). Data were collected through in-depth interviews of three high school science teachers and graphic analyses of three textbooks used by the interviewed teachers. The interview data were compared through an analytic model developed from the literature, and the graphic analyses were performed using Mayer's multimedia learning principles (Mayer, 2001) and the Graphic Analysis Protocol (GAP) (Slough & McTigue, 2013).
The conclusions of this study are: (1) pictures are especially useful for teaching science because science is a difficult subject to teach; (2) given this difficulty, pictures are very important for making the class dynamic and avoiding student distraction; (3) static and dynamic media can be more effective when used together; (4) specific types of graphics were found in the science textbooks used by the participants, namely naturalistic drawings, stylized drawings, scale diagrams, cycle flow charts, sequence flow charts, and hybrids; no photographs were found; (5) graphics can be related not only to the general text but specifically to the captions; (6) the textbooks analyzed had a balanced proportion of text and graphics; and (7) to facilitate the text-graphics relationship, the spatial contiguity of both elements is key to their semantic integration.
Lensless microscopy technique for static and dynamic colloidal systems.
Alvarez-Palacio, D C; Garcia-Sucerquia, J
2010-09-15
We present the application of a lensless microscopy technique known as digital in-line holographic microscopy (DIHM) to image dynamic and static colloidal systems of microspheres. DIHM has been refined to the point that submicrometer lateral resolution, with several hundred micrometers depth of field, is achieved with visible light; it is shown that the lateral resolution of DIHM is sufficient to resolve self-assembled colloidal monolayers built up from polystyrene spheres with submicrometer diameters. The time resolution of DIHM is on the order of 4 frames/s at 2048 × 2048 pixels, a 16-fold improvement over the time resolution of confocal scanning microscopy. This feature is applied to the visualization of the migration of dewetting fronts in dynamic colloidal systems and the formation of front-like arrangements of particles. Copyright 2010 Elsevier Inc. All rights reserved.
Color visual simulation applications at the Defense Mapping Agency
NASA Astrophysics Data System (ADS)
Simley, J. D.
1984-09-01
The Defense Mapping Agency (DMA) produces the Digital Landmass System data base to provide culture and terrain data in support of numerous aircraft simulators. In order to conduct data base and simulation quality control and requirements analysis, DMA has developed the Sensor Image Simulator which can rapidly generate visual and radar static scene digital simulations. The use of color in visual simulation allows the clear portrayal of both landcover and terrain data, whereas the initial black and white capabilities were restricted in this role and thus found limited use. Color visual simulation has many uses in analysis to help determine the applicability of current and prototype data structures to better meet user requirements. Color visual simulation is also significant in quality control since anomalies can be more easily detected in natural appearing forms of the data. The realism and efficiency possible with advanced processing and display technology, along with accurate data, make color visual simulation a highly effective medium in the presentation of geographic information. As a result, digital visual simulation is finding increased potential as a special purpose cartographic product. These applications are discussed and related simulation examples are presented.
Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.
McGregor, T J; Spence, D J; Coutts, D W
2008-01-01
We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency-doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 × 10 × 4 mm³, in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles, with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.
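Recovering depth then amounts to inverting the color coding: convert each particle's imaged color to a hue and map hue to position along the 4 mm depth axis. A toy sketch; the hue endpoints and the linear hue-to-depth mapping are assumptions for illustration, not the instrument's calibration:

```python
import colorsys

def depth_from_color(r, g, b, depth_range_mm=4.0, hue_min=0.0, hue_max=0.75):
    """Infer a particle's depth (mm) from its imaged 8-bit RGB color.

    Assumes the overlapping light sheets vary monotonically in hue
    across the depth axis; hue_min/hue_max are hypothetical calibration
    endpoints, and the result is clipped to the measurement volume.
    """
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    frac = (h - hue_min) / (hue_max - hue_min)
    return max(0.0, min(1.0, frac)) * depth_range_mm

front = depth_from_color(255, 0, 0)   # pure red: hue 0, front of volume
back = depth_from_color(128, 0, 255)  # violet: hue near the far end
```

In practice the hue-to-depth map would be measured with a static calibration target, as the paper does with a static particle field.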
Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C
2018-05-09
Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most early visual cortex and complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveals that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn
2011-01-01
Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…
ERIC Educational Resources Information Center
Korat, Ofra; Levin, Iris; Atishkin, Shifra; Turgeman, Merav
2014-01-01
We investigated the effects of three facilitators: adults' support, dynamic visual vocabulary support and static visual vocabulary support on vocabulary acquisition in the context of e-book reading. Participants were 144 Israeli Hebrew-speaking preschoolers (aged 4-6) from middle SES neighborhoods. The entire sample read the e-book without a…
Microreact: visualizing and sharing data for genomic epidemiology and phylogeography
Argimón, Silvia; Abudahab, Khalil; Goater, Richard J. E.; Fedosejev, Artemij; Bhai, Jyothish; Glasner, Corinna; Feil, Edward J.; Holden, Matthew T. G.; Yeats, Corin A.; Grundmann, Hajo; Spratt, Brian G.
2016-01-01
Visualization is frequently used to aid our interpretation of complex datasets. Within microbial genomics, visualizing the relationships between multiple genomes as a tree provides a framework onto which associated data (geographical, temporal, phenotypic and epidemiological) are added to generate hypotheses and to explore the dynamics of the system under investigation. Selected static images are then used within publications to highlight the key findings to a wider audience. However, these images are a very inadequate way of exploring and interpreting the richness of the data. There is, therefore, a need for flexible, interactive software that presents the population genomic outputs and associated data in a user-friendly manner for a wide range of end users, from trained bioinformaticians to front-line epidemiologists and health workers. Here, we present Microreact, a web application for the easy visualization of datasets consisting of any combination of trees, geographical, temporal and associated metadata. Data files can be uploaded to Microreact directly via the web browser or by linking to their location (e.g. from Google Drive/Dropbox or via API), and an integrated visualization via trees, maps, timelines and tables provides interactive querying of the data. The visualization can be shared as a permanent web link among collaborators, or embedded within publications to enable readers to explore and download the data. Microreact can act as an end point for any tool or bioinformatic pipeline that ultimately generates a tree, and provides a simple, yet powerful, visualization method that will aid research and discovery and the open sharing of datasets. PMID:28348833
Prosperini, Luca; Fanelli, Fulvia; Petsas, Nikolaos; Sbardella, Emilia; Tona, Francesca; Raz, Eytan; Fortuna, Deborah; De Angelis, Floriana; Pozzilli, Carlo; Pantano, Patrizia
2014-11-01
To determine whether high-intensity, task-oriented, visual feedback training with a video game balance board (Nintendo Wii) induces significant changes in diffusion-tensor imaging (DTI) parameters of cerebellar connections and other supratentorial associative bundles, and whether these changes are related to clinical improvement in patients with multiple sclerosis. The protocol was approved by the local ethics committee; each participant provided written informed consent. In this 24-week, randomized, two-period crossover pilot study, 27 patients underwent static posturography and brain magnetic resonance (MR) imaging at study entry, after the first 12-week period, and at study termination. Thirteen patients started a 12-week training program followed by a 12-week period without any intervention, while 14 patients received the intervention in reverse order. Fifteen healthy subjects also underwent MR imaging once and underwent static posturography. Virtual dissection of white matter tracts was performed with streamline tractography; values of DTI parameters were then obtained for each dissected tract. Repeated-measures analyses of variance were performed to evaluate whether DTI parameters changed significantly after intervention, with false discovery rate correction for multiple hypothesis testing. There were relevant differences between patients and healthy control subjects in postural sway and DTI parameters (P < .05). Significant time-by-group interaction effects on fractional anisotropy and radial diffusivity of the left and right superior cerebellar peduncles were found (F2,23 range, 3.450-5.555; P = .036-.088 after false discovery rate correction). These changes correlated with objective measures of balance improvement detected at static posturography (r = -0.381 to 0.401, P < .05). 
However, neither the clinical nor the DTI changes persisted beyond 12 weeks after training. Despite the low statistical power (35%) due to the small sample size, the results showed that training with the balance board system modified the microstructure of the superior cerebellar peduncles. The clinical improvement observed after training might be mediated by enhanced myelination-related processes, suggesting that high-intensity, task-oriented exercises could induce favorable microstructural changes in the brains of patients with multiple sclerosis.
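The fractional anisotropy and radial diffusivity parameters analyzed in this study are standard scalar summaries of the diffusion tensor's eigenvalues. A minimal sketch of how they are computed (the eigenvalues below are illustrative, not study data):

```python
import math

def fa_and_rd(l1, l2, l3):
    """Fractional anisotropy (FA) and radial diffusivity (RD) from the three
    eigenvalues of the diffusion tensor (l1 >= l2 >= l3)."""
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den)                # 0 (isotropic) .. 1 (anisotropic)
    rd = (l2 + l3) / 2.0                           # diffusivity perpendicular to the fiber
    return fa, rd

# Illustrative eigenvalues (mm^2/s) for a strongly anisotropic white-matter voxel
fa, rd = fa_and_rd(1.7e-3, 0.3e-3, 0.3e-3)
```

FA rises toward 1 as diffusion becomes more directional; RD averages the two minor eigenvalues and tends to increase with demyelination, which is why both are tracked in studies like this one.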
Three-dimensional high-speed optical coherence tomography imaging of lamina cribrosa in glaucoma.
Inoue, Ryo; Hangai, Masanori; Kotera, Yuriko; Nakanishi, Hideo; Mori, Satoshi; Morishita, Shiho; Yoshimura, Nagahisa
2009-02-01
To evaluate the appearance of the optic nerve head and lamina cribrosa in patients with glaucoma using spectral/Fourier-domain optical coherence tomography (SD-OCT) and to test for a correlation between lamina cribrosa thickness measured on SD-OCT images and visual field loss. Observational case series. We evaluated 52 eyes of 30 patients with glaucoma or ocular hypertension. The high-speed SD-OCT equipment used was a prototype system developed for 3-dimensional (3D) imaging. It had a sensitivity of 98 decibels (dB), a tissue axial resolution of 4.3 μm, and an acquisition rate of approximately 18,700 axial scans per second. For 3D analyses, a raster scan protocol of 256 × 256 axial scans covering a 2.8 × 2.8 mm disc area was used. Lamina cribrosa thickness was measured on 3D images using 3D image processing software. The correlation between lamina cribrosa thickness and mean deviation (MD) values obtained using static automatic perimetry was tested for statistical significance. Outcome measures were clarity of lamina cribrosa features, lamina cribrosa thickness, and MD values on static automatic perimetry. On 3D images, the lamina cribrosa appeared clearly as a highly reflective plate that was bowed posteriorly and contained many circular areas of low reflectivity. The dots of low reflectivity visible just beneath the anterior surface of the lamina cribrosa in en face cross-sections corresponded with dots representing lamina pores in color fundus photographs. The mean (±1 standard deviation) thickness of the lamina cribrosa was 190.5 ± 52.7 μm (range, 80.5-329.0 μm). Spearman rank testing and linear regression analysis showed that lamina cribrosa thickness correlated significantly with MD (Spearman ρ = 0.744; P < 0.001; r² = 0.493; P < 0.001). Different observers performed measurements of the lamina cribrosa thickness in SD-OCT cross-sectional images with high reproducibility (intraclass correlation coefficient = 0.784). 
Three-dimensional SD-OCT imaging clearly demonstrated the 3D structure of the lamina cribrosa and allowed measurement of its thickness, which correlated significantly with visual field loss, in living patients with glaucoma. This noninvasive imaging technique should facilitate investigations of structural changes in the lamina cribrosa of the optic nerve head in eyes with optic nerve damage due to glaucoma. The authors have no proprietary or commercial interest in any materials discussed in this article.
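The Spearman rank test used above is the Pearson correlation applied to ranks. A minimal pure-Python sketch (no tie handling, assumes non-constant inputs; the thickness/MD pairs below are made up, not study data):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Minimal sketch; assumes no ties and non-constant inputs."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy pairs: lamina thickness (um) vs perimetric mean deviation (dB)
rho = spearman_rho([120, 180, 210, 300], [-12.0, -8.5, -5.0, -2.1])
```

Because it uses ranks rather than raw values, the statistic is robust to the skewed thickness distributions typical of clinical measurements.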
Robson, Anthony G; Kulikowski, Janus J
2012-11-01
The aim was to investigate the temporal response properties of magnocellular, parvocellular, and koniocellular visual pathways using increment/decrement changes in contrast to elicit visual evoked potentials (VEPs). Static achromatic and isoluminant chromatic gratings were generated on a monitor. Chromatic gratings were modulated along red/green (R/G) or subject-specific tritanopic confusion axes, established using a minimum distinct border criterion. Isoluminance was determined using minimum flicker photometry. Achromatic and chromatic VEPs were recorded to contrast increments and decrements of 0.1 or 0.2 superimposed on the static gratings (masking contrast 0-0.6). Achromatic increment/decrement changes in contrast evoked a percept of apparent motion when the spatial frequency was low; VEPs to such stimuli were positive in polarity and largely unaffected by high levels of static contrast, consistent with transient response mechanisms. VEPs to finer achromatic gratings showed marked attenuation as static contrast was increased. Chromatic VEPs to R/G or tritan chromatic contrast increments were of negative polarity and showed progressive attenuation as static contrast was increased, in keeping with increasing desensitization of the sustained responses of the color-opponent visual pathways. Chromatic contrast decrement VEPs were of positive polarity and less sensitive to pattern adaptation. The relative contribution of sustained/transient mechanisms to achromatic processing is spatial frequency dependent. Chromatic contrast increment VEPs reflect the sustained temporal response properties of parvocellular and koniocellular pathways. Cortical VEPs can provide an objective measure of pattern adaptation and can be used to probe the temporal response characteristics of different visual pathways.
Perceptual transparency from image deformation.
Kawabe, Takahiro; Maruya, Kazushi; Nishida, Shin'ya
2015-08-18
Human vision has a remarkable ability to perceive two layers at the same retinal locations, a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorptions and reflections by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue of visual transparency is image deformations of a background pattern caused by light refraction. Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction on a moving liquid's surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues as to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of "invisible" transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation.
ERIC Educational Resources Information Center
Lin, Huifen; Chen, Tsuiping; Dwyer, Francis M.
2006-01-01
The purpose of this experimental study was to compare the effects of using static visuals versus computer-generated animation to enhance learners' comprehension and retention of a content-based lesson in a computer-based learning environment for learning English as a foreign language (EFL). Fifty-eight students from two EFL reading sections were…
ERIC Educational Resources Information Center
Pfeiffer, Vanessa D. I.; Scheiter, Katharina; Kuhl, Tim; Gemballa, Sven
2011-01-01
This study investigated whether studying dynamic-static visualizations prepared first-year Biology students better for an out-of-classroom experience in an aquarium than learning how to identify species with more traditional instructional materials. During an initial classroom phase, learners either watched underwater videos of 15 freshwater fish…
ERIC Educational Resources Information Center
Yang, Eunice
2016-01-01
This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to provide students assistance with spatial visualization of forces and more practice in solving/visualizing statics problems compared to the traditional pencil-and-paper method. ForceEffect analyzes static rigid-body systems using free-body…
Cool but counterproductive: interactive, Web-based risk communications can backfire.
Zikmund-Fisher, Brian J; Dickson, Mark; Witteman, Holly O
2011-08-25
Paper-based patient decision aids generally present risk information using numbers and/or static images. However, limited psychological research has suggested that when people interactively graph risk information, they process the statistics more actively, making the information more available for decision making. Such interactive tools could potentially be incorporated in a new generation of Web-based decision aids. The objective of our study was to investigate whether interactive graphics detailing the risk of side effects of two treatments improve knowledge and decision making over standard risk graphics. A total of 3371 members of a demographically diverse Internet panel viewed a hypothetical scenario about two hypothetical treatments for thyroid cancer. Each treatment had a chance of causing 1 of 2 side effects, but we randomly varied whether one treatment was better on both dimensions (strong dominance condition), slightly better on only one dimension (mild dominance condition), or better on one dimension but worse on the other (trade-off condition) than the other treatment. We also varied whether respondents passively viewed the risk information in static pictograph (icon array) images or actively manipulated the information by using interactive Flash-based animations of "fill-in-the-blank" pictographs. Our primary hypothesis was that active manipulation would increase respondents' ability to recognize dominance (when available) and choose the better treatment. The interactive risk graphic conditions had significantly worse survey completion rates (1110/1695, 65.5% vs 1316/1659, 79.3%, P < .001) than the static image conditions. In addition, respondents using interactive graphs were less likely to recognize and select the dominant treatment option (234/380, 61.6% vs 343/465, 73.8%, P < .001 in the strong dominance condition). 
Interactivity, however visually appealing, can both add to respondent burden and distract people from understanding relevant statistical information. Decision-aid developers need to be aware that interactive risk presentations may create worse outcomes than presentations of static risk graphic formats.
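The completion-rate comparison above (1110/1695, 65.5% vs 1316/1659, 79.3%) is the kind of contrast a two-proportion z-test quantifies. A minimal sketch using the counts from the abstract (the choice of test is illustrative; the paper's exact analysis is not specified here):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for comparing two independent proportions,
    using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Completion counts: interactive (1110/1695) vs static (1316/1659) conditions
z = two_proportion_z(1110, 1695, 1316, 1659)
# |z| far beyond 3.29 is consistent with the reported P < .001
```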
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7-T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared with VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditorily induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.
Low-cost digital dynamic visualization system
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
1995-05-01
High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Currently, digital cameras are to a certain extent replacing conventional cameras for static experiments. Recently, there has been a lot of interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in time delay and integration (TDI) mode for digitally recording dynamic scenes. Applications to solid as well as fluid impact problems are presented.
Sato, Wataru; Toichi, Motomi; Uono, Shota; Kochiyama, Takanori
2012-08-13
Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD.We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex-MTG-IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Guang, E-mail: lig2@mskcc.org; Wei, Jie; Kadbi, Mo
Purpose: To develop and evaluate a super-resolution approach to reconstruct time-resolved 4-dimensional magnetic resonance imaging (TR-4DMRI) with high spatiotemporal resolution for multi-breathing-cycle motion assessment. Methods and Materials: A super-resolution approach was developed to combine fast 3-dimensional (3D) cine MRI with low resolution during free breathing (FB) and high-resolution 3D static MRI during breath hold (BH) using deformable image registration. A T1-weighted, turbo field echo sequence, coronal 3D cine acquisition, partial Fourier approximation, and SENSitivity Encoding parallel acceleration were used. The same MRI pulse sequence, field of view, and acceleration techniques were applied in both FB and BH acquisitions; the intensity-based Demons deformable image registration method was used. Under an institutional review board–approved protocol, 7 volunteers were studied with a 3D cine FB scan (voxel size: 5 × 5 × 5 mm³) at 2 Hz for 40 seconds and a 3D static BH scan (2 × 2 × 2 mm³). To examine the image fidelity of 3D cine and super-resolution TR-4DMRI, a mobile gel phantom with multiple internal targets was scanned at 3 speeds and compared with the 3D static image. Image similarity among 3D cine, 4DMRI, and 3D static images was evaluated visually using difference images and quantitatively using voxel intensity correlation and the Dice index (phantom only). Multi-breathing-cycle waveforms were extracted and compared in both phantom and volunteer images using the 3D cine as the reference. Results: Mild imaging artifacts were found in the 3D cine and TR-4DMRI of the mobile gel phantom, with a Dice index of >0.95. Among the 7 volunteers, the super-resolution TR-4DMRI yielded high voxel-intensity correlation (0.92 ± 0.05) and low voxel-intensity difference (<0.05). The detected motion differences between TR-4DMRI and 3D cine were -0.2 ± 0.5 mm (phantom) and -0.2 ± 1.9 mm (diaphragms). 
Conclusion: Super-resolution TR-4DMRI has been reconstructed with adequate temporal (2 Hz) and spatial (2 × 2 × 2 mm³) resolution. Further TR-4DMRI characterization and improvement are necessary before clinical application. Multi-breathing cycles can be examined, providing patient-specific breathing irregularities and motion statistics for future 4D radiation therapy.
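The Dice index used above to quantify overlap between the static reference and the reconstructed volumes can be sketched for binary masks as follows (the toy 1-D masks are illustrative, not study data):

```python
def dice_index(mask_a, mask_b):
    """Dice similarity coefficient between two same-length binary masks
    (flattened volumes); 1.0 means perfect overlap."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy flattened "volumes": a segmented target in the static scan vs a cine frame
static_mask = [1, 1, 1, 0, 0, 0]
cine_mask = [0, 1, 1, 1, 0, 0]
d = dice_index(static_mask, cine_mask)
```

A Dice index above 0.95, as reported for the phantom, means the segmented target in the reconstructed frames occupies almost exactly the same voxels as in the reference.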
High-speed multi-frame laser Schlieren for visualization of explosive events
NASA Astrophysics Data System (ADS)
Clarke, S. A.; Murphy, M. J.; Landon, C. D.; Mason, T. A.; Adrian, R. J.; Akinci, A. A.; Martinez, M. E.; Thomas, K. A.
2007-09-01
High-speed multi-frame laser schlieren is used for visualization of a range of explosive and non-explosive events. Schlieren is a well-known technique for visualizing shock phenomena in transparent media. Laser backlighting and a framing camera allow for schlieren images with very short (down to 5 ns) exposure times, band-pass filtering to block out explosive self-light, and 14 frames of a single explosive event. This diagnostic has been applied to several explosive initiation events, such as exploding bridgewires (EBW), exploding foil initiators (EFI, or slappers), direct optical initiation (DOI), and electrostatic discharge (ESD). Additionally, a series of tests has been performed on "cut-back" detonators with varying initial pressing (IP) heights. We have also used this diagnostic to visualize a range of EBW, EFI, and DOI full-up detonators. The setup has also been used to visualize a range of other explosive events, such as explosively driven metal shock experiments and explosively driven microjets. Future applications to other explosive events, such as boosters and IHE booster evaluation, will be discussed. Finite element codes (EPIC, CTH) have been used to analyze the schlieren images to determine likely boundary or initial conditions and the temporal-spatial pressure profile across the output face of the detonator. These experiments are part of a phased plan to understand the evolution of detonation in a detonator, from initiation shock through run-to-detonation, to full detonation, to transition to booster and booster detonation.
NASA Astrophysics Data System (ADS)
Moon, Hye Sun
Visuals are most extensively used as instructional tools in education to present spatially-based information. Recent computer technology allows the generation of 3D animated visuals to extend the presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cue. In this study, three questions are explored: (1) how 3D graphics affects student learning and attitude, in comparison with 2D graphics; (2) how animated graphics affects student learning and attitude, in comparison with static graphics; and (3) whether the use of 3D graphics, when they are supported by interactive animation, is the most effective visual cues to improve learning and to develop positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instructions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest score of 3D graphic condition and that of 2D graphic condition. However, students in the 3D graphic condition took less time for information retrieval on posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest score of animated graphic condition and that of static graphic condition. However, students in the animated graphic condition took less time for information retrieval on posttest than those in the static graphic condition. 
(3) Students in the 3D animated graphic condition exhibited more positive attitudes toward instruction than those in other treatment conditions (2D static, 2D animated, and 3D static conditions). No group differences were found in the posttest scores among four treatment conditions. However, students in the 3D animated condition took less time for information retrieval on posttest than those in other treatment conditions.
Local statistics of retinal optic flow for self-motion through natural sceneries.
Calow, Dirk; Lappe, Markus
2007-12-01
Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems.
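The mutual-information estimates between correlated data sets described above can be illustrated with a simple plug-in estimator over discrete samples; the toy variables below are made up, and real flow/depth measurements would first need to be binned:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of mutual information I(X;Y) in bits from
    paired discrete (binned) samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))      # joint counts
    px = Counter(xs)                # marginal counts of X
    py = Counter(ys)                # marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) written with counts to avoid extra divisions
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Perfectly dependent toy variables share exactly one bit of information
mi = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
```

In the study's setting, a high mutual information between retinal speed and scene depth (and a low one between speed and flow direction) is exactly what this kind of estimator would report.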
Lower pitch is larger, yet falling pitches shrink.
Eitan, Zohar; Schupak, Asi; Gotler, Alex; Marks, Lawrence E
2014-01-01
Experiments using diverse paradigms, including speeded discrimination, indicate that pitch and visually-perceived size interact perceptually, and that higher pitch is congruent with smaller size. While nearly all of these studies used static stimuli, here we examine the interaction of dynamic pitch and dynamic size, using Garner's speeded discrimination paradigm. Experiment 1 examined the interaction of continuous rise/fall in pitch and increase/decrease in object size. Experiment 2 examined the interaction of static pitch and size (steady high/low pitches and large/small visual objects), using an identical procedure. Results indicate that static and dynamic auditory and visual stimuli interact in opposite ways. While for static stimuli (Experiment 2), higher pitch is congruent with smaller size (as suggested by earlier work), for dynamic stimuli (Experiment 1), ascending pitch is congruent with growing size, and descending pitch with shrinking size. In addition, while static stimuli (Experiment 2) exhibit both congruence and Garner effects, dynamic stimuli (Experiment 1) present congruence effects without Garner interference, a pattern that is not consistent with prevalent interpretations of Garner's paradigm. Our interpretation of these results focuses on effects of within-trial changes on processing in dynamic tasks and on the association of changes in apparent size with implied changes in distance. Results suggest that static and dynamic stimuli can differ substantially in their cross-modal mappings, and may rely on different processing mechanisms.
A review of uncertainty visualization within the IPCC reports
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Reusser, Dominik; Wrobel, Markus
2015-04-01
Results derived from climate model simulations confront non-expert users with a variety of uncertainties. This gives rise to the challenge that the scientific information must be communicated such that it can be easily understood while the complexity of the science behind it is still conveyed. With respect to the assessment reports of the IPCC, the situation is even more complicated, because heterogeneous sources and multiple types of uncertainty need to be compiled together. Within this work, we systematically (1) analyzed the visual representation of uncertainties in the IPCC AR4 and AR5 reports, and (2) administered a questionnaire to evaluate how different user groups, such as decision-makers and teachers, understand these uncertainty visualizations. In the first step, we classified visual uncertainty metaphors for spatial, temporal, and abstract representations. As a result, we clearly identified a high complexity of the IPCC visualizations compared with standard presentation graphics, sometimes even integrating two or more uncertainty classes/measures together with the "certain" (mean) information. Further, we identified complex written uncertainty explanations within image captions, even within the summary reports for policy makers. In the second step, based on these observations, we designed a questionnaire to investigate how non-climate experts understand these visual representations of uncertainties, how visual uncertainty coding might hinder the perception of the "non-uncertain" data, and whether alternatives to certain IPCC visualizations exist. Within the talk/poster, we will present first results from this questionnaire. Summarizing, we identified a clear trend towards complex images within the latest IPCC reports, with a tendency to incorporate as much information as possible into the visual representations, resulting in proprietary, non-standard graphic representations that are not necessarily easy to comprehend at a glance. 
We conclude that further translation is required to (visually) present the IPCC results to non-experts, providing tailored static and interactive visualization solutions for different user groups.
NASA Astrophysics Data System (ADS)
Lopez, Andrew L.; Wang, Shang; Garcia, Monica; Valladolid, Christian; Larin, Kirill V.; Larina, Irina V.
2015-03-01
Understanding mouse embryonic development is an invaluable resource for our interpretation of normal human embryology and congenital defects. Our research focuses on developing methods for live imaging and dynamic characterization of early embryonic development in mouse models of human diseases. Using multidisciplinary methods, including optical coherence tomography (OCT), live mouse embryo manipulation and static embryo culture, molecular biology, advanced image processing, and computational modeling, we aim to understand developmental processes. We have developed an OCT-based approach to image live early mouse embryos (E8.5-E9.5) cultured on an imaging stage, to visualize developmental events with a spatial resolution of a few micrometers (less than the size of an individual cell) and a frame rate of up to hundreds of frames per second, and to reconstruct cardiodynamics in 4D (3D + time). We are now using these methods to study how specific embryonic-lethal mutations affect cardiac morphology and function during early development.
NASA Astrophysics Data System (ADS)
Takahashi, Kazuki; Taki, Hirofumi; Onishi, Eiko; Yamauchi, Masanori; Kanai, Hiroshi
2017-07-01
Epidural anesthesia is a common technique for perioperative analgesia and chronic pain treatment. Since ultrasonography is insufficient for depicting the human vertebral surface, most examiners perform epidural puncture without any imaging, including ultrasonography, relying instead on body-surface landmarks on the back such as the spinous processes and scapulae. The puncture route to the epidural space at the thoracic vertebrae is much narrower than that at the lumbar vertebrae; therefore, epidural anesthesia at the thoracic vertebrae is difficult, especially for a beginner. Herein, a novel imaging method is proposed based on a bi-static imaging technique that makes use of the transmit beam width and direction. In an in vivo experimental study on human thoracic vertebrae, the proposed method depicted the vertebral surface more clearly than conventional B-mode imaging and the conventional envelope method. This indicates the potential of the proposed method for visualizing the vertebral surface for the proper and safe execution of epidural anesthesia.
Interactive Volume Rendering of Diffusion Tensor Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hlawitschka, Mario; Weber, Gunther; Anwander, Alfred
As 3D volumetric images of the human body become an increasingly crucial source of information for the diagnosis and treatment of a broad variety of medical conditions, advanced techniques that allow clinicians to efficiently and clearly visualize volumetric images become increasingly important. Interaction has proven to be a key concept in the analysis of medical images because static images of 3D data are prone to artifacts and misinterpretation of depth. Furthermore, fading out clinically irrelevant aspects of the image while preserving contextual anatomical landmarks helps medical doctors to focus on important parts of the images without becoming disoriented. Our goal was to develop a tool that unifies interactive manipulation and context-preserving visualization of medical images, with a special focus on diffusion tensor imaging (DTI) data. At each image voxel, DTI provides a 3 x 3 tensor whose entries represent the local 3D statistical properties of water diffusion. Water motion that is preferential to specific spatial directions suggests structural organization of the underlying biological tissue; in particular, in the human brain, the naturally occurring diffusion of water in the axon portion of neurons is predominantly anisotropic along the longitudinal direction of the elongated, fiber-like axons [MMM+02]. This property has made DTI an emerging source of information about the structural integrity of axons and axonal connectivity between brain regions, both of which are thought to be disrupted in a broad range of medical disorders including multiple sclerosis, cerebrovascular disease, and autism [Mos02, FCI+01, JLH+99, BGKM+04, BJB+03].
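The directional preference of diffusion described by the 3 x 3 tensor is commonly summarized by a scalar such as fractional anisotropy (FA), which is 0 for isotropic diffusion and approaches 1 when diffusion is confined to one direction. A minimal sketch (the FA formula is the standard DTI definition; the example tensors are hypothetical illustrations):

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Fractional anisotropy (FA) of a symmetric 3x3 diffusion tensor.
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, computed from
    the tensor's eigenvalues; 0 = isotropic, ~1 = strongly directional."""
    evals = np.linalg.eigvalsh(tensor)   # real eigenvalues (symmetric tensor)
    mean_diffusivity = evals.mean()
    num = np.sqrt(np.sum((evals - mean_diffusivity) ** 2))
    den = np.sqrt(np.sum(evals ** 2))
    return float(np.sqrt(1.5) * num / den) if den > 0 else 0.0

# Hypothetical diagonal tensors (illustrative values only):
isotropic = np.diag([0.7, 0.7, 0.7])    # e.g., free water -> FA = 0
fiber_like = np.diag([1.7, 0.2, 0.2])   # axon-aligned diffusion -> FA ~ 0.87
```

In a real DTI volume this function would be applied voxel-wise; high-FA voxels are the ones whose elongated glyphs indicate fiber-like tissue organization.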
NASA Astrophysics Data System (ADS)
Malone, Joseph D.; El-Haddad, Mohamed T.; Leeburg, Kelsey C.; Terrones, Benjamin D.; Tao, Yuankai K.
2018-02-01
Limited visualization of semi-transparent structures in the eye remains a critical barrier to improving clinical outcomes and developing novel surgical techniques. While increases in imaging speed have enabled intraoperative optical coherence tomography (iOCT) imaging of surgical dynamics, several critical barriers to clinical adoption remain. Specifically, (1) static fields-of-view (FOVs) require manual instrument tracking; (2) high frame rates require sparse sampling, which limits the FOV; and (3) the small iOCT FOV also limits the ability to co-register data with surgical microscopy. We previously addressed these limitations in image-guided ophthalmic microsurgery by developing microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography. Complementary en face images enabled orientation and co-registration with the widefield surgical microscope view, while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures of interest. In addition, we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. Unfortunately, our previous system lacked the resolution and optical throughput for in vivo retinal imaging and necessitated removal of the cornea and lens. These limitations were predominantly a result of optical aberrations from imaging through a shared surgical microscope objective lens, which was modeled as a paraxial surface. Here, we present an optimized intraoperative spectrally encoded coherence tomography and reflectometry (iSECTR) system. We use a novel lens characterization method to develop an accurate model of surgical microscope objective performance and balance out inherent aberrations using the iSECTR relay optics. Using this system, we demonstrate in vivo multimodal ophthalmic imaging through a surgical microscope.
The Effects of Attention Cueing on Visualizers' Multimedia Learning
ERIC Educational Resources Information Center
Yang, Hui-Yu
2016-01-01
The present study examines how various types of attention cueing and cognitive preference affect learners' comprehension of a cardiovascular system and cognitive load. EFL learners were randomly assigned to one of four conditions: non-signal, static-blood-signal, static-blood-static-arrow-signal, and animation-signal. The results indicated that…
Chevallier, Coralie; Parish-Morris, Julia; McVey, Alana; Rump, Keiran M; Sasson, Noah J; Herrington, John D; Schultz, Robert T
2015-10-01
Autism Spectrum Disorder (ASD) is characterized by social impairments that have been related to deficits in social attention, including diminished gaze to faces. Eye-tracking studies are commonly used to examine social attention and social motivation in ASD, but they vary in sensitivity. In this study, we hypothesized that the ecological nature of the social stimuli would affect participants' social attention, with gaze behavior during more naturalistic scenes being most predictive of ASD vs. typical development. Eighty-one children with and without ASD participated in three eye-tracking tasks that differed in the ecological relevance of the social stimuli. In the "Static Visual Exploration" task, static images of objects and people were presented; in the "Dynamic Visual Exploration" task, video clips of individual faces and objects were presented side-by-side; in the "Interactive Visual Exploration" task, video clips of children playing with objects in a naturalistic context were presented. Our analyses uncovered a three-way interaction between Task, Social vs. Object Stimuli, and Diagnosis. This interaction was driven by group differences on only one task: the Interactive task. Bayesian analyses confirmed that the other two tasks were insensitive to group membership. In addition, receiver operating characteristic analyses demonstrated that, unlike the other two tasks, the Interactive task had significant classification power. The ecological relevance of social stimuli is an important factor to consider for eye-tracking studies aiming to measure social attention and motivation in ASD. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
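"Classification power" in a receiver operating characteristic analysis is summarized by the area under the curve (AUC), which equals the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch via the Mann-Whitney formulation (the scores below are synthetic illustrations, not study data):

```python
def roc_auc(case_scores, control_scores):
    """AUC via the Mann-Whitney U statistic: the fraction of case/control
    pairs ranked in the correct order, counting ties as one half."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Synthetic example: a measure that separates groups vs. one that does not.
print(roc_auc([0.9, 0.8, 0.7], [0.2, 0.3, 0.1]))  # 1.0 (perfect separation)
print(roc_auc([0.5, 0.4], [0.5, 0.4]))            # 0.5 (chance level)
```

An AUC near 0.5 corresponds to the insensitive tasks above, while a task with significant classification power yields an AUC reliably above chance.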
Laser Speckle Imaging of Blood Flow Beneath Static Scattering Media
NASA Astrophysics Data System (ADS)
Regan, Caitlin Anderson
Laser speckle imaging (LSI) is a wide-field optical imaging technique that provides information about the movement of scattering particles in biological samples. LSI is used to create maps of relative blood flow and perfusion in samples such as the skin, brain, teeth, gingiva, and other biological tissues. The presence of static, or non-moving, optical scatterers affects the ability of LSI to provide true quantitative and spatially resolved measurements of blood flow. With in vitro experiments using tissue-simulating phantoms, we determined that temporal analysis of raw speckle image sequences improved the quantitative accuracy of LSI to measure flow beneath a static scattering layer. We then applied the temporal algorithm to assess the potential of LSI to monitor oral health. We designed and tested two generations of miniature LSI devices to measure flow in the pulpal chamber of teeth and in the gingiva. Our preliminary clinical pilot data indicated that speckle contrast may correlate with gingival health. To improve visualization of subsurface blood vessels, we developed a technique called photothermal LSI. We applied a short pulse of laser energy to selectively perturb the motion of red blood cells, increasing the signal from vasculature relative to the surroundings. To study the spectral and depth dependence of laser speckle contrast, we developed a Monte Carlo model of light and momentum transport to simulate speckle contrast. With an increase in the thickness of the overlying static-scattering layer, we observed a quadratic decrease in the quantity of dynamically scattered light collected by the detector. We next applied the model to study multi-exposure speckle imaging (MESI), a method that purportedly improves quantitative accuracy of subsurface blood flow measurements. We unexpectedly determined that MESI faced similar depth limitations as conventional LSI, findings that were supported by in vitro experimental data. 
Finally, we used the model to study the effects of epidermal melanin absorption on LSI, and demonstrated that speckle contrast is less sensitive to varying melanin content than reflectance. We then proposed a two-wavelength measurement protocol that may enable melanin-independent LSI measurements of blood flow in patients with varying skin types. In conclusion, through in vitro and in silico experiments, we were able to further the understanding of the depth dependent origins of laser speckle contrast as well as the inherent limitations of this technology.
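The speckle contrast quantity underlying LSI is conventionally defined as the ratio of the standard deviation to the mean intensity within a local window; lower contrast indicates more motion-induced blurring of the speckle pattern. A minimal spatial-contrast sketch (the window size and demo arrays are illustrative, not taken from this work):

```python
import numpy as np

def local_speckle_contrast(img, win=7):
    """Spatial speckle contrast K = sigma/mean over a sliding window.
    K near 1 indicates a static, fully developed speckle pattern;
    K near 0 indicates motion-blurred speckle (e.g., flowing blood)."""
    img = img.astype(float)
    pad = win // 2
    out = np.zeros_like(img)        # borders left at 0 for simplicity
    for i in range(pad, img.shape[0] - pad):
        for j in range(pad, img.shape[1] - pad):
            patch = img[i - pad:i + pad + 1, j - pad:j + pad + 1]
            m = patch.mean()
            out[i, j] = patch.std() / m if m > 0 else 0.0
    return out
```

A uniform (fully blurred) image yields K = 0, while a raw static speckle pattern, whose intensity statistics are approximately exponential, yields K close to 1; production LSI pipelines compute the same statistic with vectorized filtering rather than explicit loops.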
Quick, Harald H; Zenge, Michael O; Kuehl, Hilmar; Kaiser, Gernot; Aker, Stephanie; Massing, Sandra; Bosk, Silke; Ladd, Mark E
2005-02-01
Active instrument visualization strategies for interventional MR angiography (MRA) require vascular instruments to be equipped with some type of radiofrequency (RF) coil or dipole RF antenna for MR signal detection. Such visualization strategies traditionally necessitate a connection to the scanner with either coaxial cable or laser fibers. In order to eliminate any wire connection, RF resonators that inductively couple their signal to MR surface coils were implemented into catheters to enable wireless active instrument visualization. The instrument-to-background contrast-to-noise ratio was systematically investigated as a function of the excitation flip angle. Signal coupling between the catheter RF coil and surface RF coils was evaluated qualitatively and quantitatively as a function of the catheter position and orientation with regard to the static magnetic field B0 and to the surface coils. In vivo evaluation of the instruments was performed in interventional MRA procedures on five pigs under MR guidance. Cartesian and projection reconstruction TrueFISP imaging enabled simultaneous visualization of the instruments and vascular morphology in real time. The implementation of RF resonators enabled robust visualization of the catheter curvature to the very tip. Additionally, the active visualization strategy does not require any wire connection to the scanner and thus does not hamper the interventionalist during the course of an intervention.
GenomeD3Plot: a library for rich, interactive visualizations of genomic data in web applications.
Laird, Matthew R; Langille, Morgan G I; Brinkman, Fiona S L
2015-10-15
A simple static image of genomes and associated metadata is very limiting, as researchers expect rich, interactive tools similar to the web applications found in the post-Web 2.0 world. GenomeD3Plot is a lightweight visualization library written in JavaScript using the D3 library. GenomeD3Plot provides a rich API to allow the rapid visualization of complex genomic data using a convenient, standards-based JSON configuration file. When integrated into existing web services, GenomeD3Plot allows researchers to interact with data, dynamically alter the view, or even resize or reposition the visualization in their browser window. In addition, GenomeD3Plot has built-in functionality to export any resulting genome visualization in PNG or SVG format for easy inclusion in manuscripts or presentations. GenomeD3Plot is being utilized in the recently released IslandViewer 3 (www.pathogenomics.sfu.ca/islandviewer/) to visualize predicted genomic islands together with other genome annotation data. However, its features make it more widely applicable for dynamic visualization of genomic data in general. GenomeD3Plot is licensed under the GNU-GPL v3 at https://github.com/brinkmanlab/GenomeD3Plot/. brinkman@sfu.ca. © The Author 2015. Published by Oxford University Press.
Cone Photoreceptor Abnormalities Correlate with Vision Loss in Patients with Stargardt Disease
Chen, Yingming; Ratnam, Kavitha; Sundquist, Sanna M.; Lujan, Brandon; Ayyagari, Radha; Gudiseva, V. Harini; Roorda, Austin
2011-01-01
Purpose. To study the relationship between macular cone structure, fundus autofluorescence (AF), and visual function in patients with Stargardt disease (STGD). Methods. High-resolution images of the macula were obtained with adaptive optics scanning laser ophthalmoscopy (AOSLO) and spectral domain optical coherence tomography in 12 patients with STGD and 27 age-matched healthy subjects. Measures of retinal structure and AF were correlated with visual function, including best-corrected visual acuity, color vision, kinetic and static perimetry, fundus-guided microperimetry, and full-field electroretinography. Mutation analysis of the ABCA4 gene was completed in all patients. Results. Patients were 15 to 55 years old, and visual acuity ranged from 20/25 to 20/320. Central scotomas were present in all patients, although the fovea was spared in three patients. The earliest cone spacing abnormalities were observed in regions of homogeneous AF, normal visual function, and normal outer retinal structure. Outer retinal structure and AF were most normal near the optic disc. Longitudinal studies showed progressive increases in AF, followed by reduced AF associated with losses of visual sensitivity, outer retinal layers, and cones. At least one disease-causing mutation in the ABCA4 gene was identified in 11 of the 12 patients studied; 1 of 12 patients showed no disease-causing ABCA4 mutations. Conclusions. AOSLO imaging demonstrated abnormal cone spacing in regions of abnormal fundus AF and reduced visual function. These findings support a model of disease progression in which lipofuscin accumulation results in homogeneously increased AF with cone spacing abnormalities, followed by heterogeneously increased AF with cone loss, then reduced AF with cone and RPE cell death. (ClinicalTrials.gov number, NCT00254605.) PMID:21296825
Learning Peri-saccadic Remapping of Receptive Field from Experience in Lateral Intraparietal Area
Wang, Xiao; Wu, Yan; Zhang, Mingsha; Wu, Si
2017-01-01
Our eyes move constantly, at a frequency of 3-5 times per second. These movements, called saccades, sweep visual images across the retina, yet we perceive the world as stable. It has been suggested that the brain achieves this visual stability via predictive remapping of neuronal receptive fields (RFs). A recent experimental study disclosed details of this remapping process in the lateral intraparietal area (LIP): around the time of a saccade, the neuronal RF expands temporarily along the saccade trajectory, covering the current RF (CRF), the future RF (FRF), and the region the eye will sweep through during the saccade. A cortical wave (CW) model was also proposed, which attributes the RF remapping to neural activity propagating in the cortex, triggered jointly by a visual stimulus and the corollary discharge (CD) signal responsible for the saccade. In this study, we investigate how this CW model can be learned naturally from visual experience during brain development. We build a two-layer network, with one layer consisting of LIP neurons and the other of superior colliculus (SC) neurons. Initially, neuronal connections are random and non-selective. A saccade causes a static visual image to sweep passively across the retina, creating the effect of a visual stimulus moving in the direction opposite to the saccade. According to the spike-timing-dependent plasticity (STDP) rule, the connection path between LIP neurons in the direction opposite to the saccade and the connection path from SC to LIP are strengthened. Over many such visual experiences, the CW model develops, generating the peri-saccadic RF remapping in LIP observed in the experiment. PMID:29249953
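The spike-timing-dependent plasticity rule invoked above can be sketched in its standard pair-based form, where the sign of the weight change depends on the pre/post spike order; repeated pre-before-post ordering along the sweep path is what strengthens the directional connections. A minimal sketch (the amplitudes and time constants are illustrative, not parameters from the study):

```python
import math

# Pair-based STDP: pre-before-post potentiates, post-before-pre depresses.
A_PLUS, A_MINUS = 0.01, 0.012      # hypothetical learning amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # hypothetical time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:     # pre leads post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:   # post leads pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(0.0, 10.0))   # positive (potentiation)
print(stdp_dw(10.0, 0.0))   # negative (depression)
```

Because a stimulus sweeping opposite to the saccade activates neurons along its path in sequence, this asymmetry selectively enhances connections oriented along that path over many trials.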
Gosch, D; Ratzmer, A; Berauer, P; Kahn, T
2007-09-01
The objective of this study was to examine the extent to which image quality on mobile C-arms can be improved by an innovative exposure rate control system (grid control). In addition, we determined the dose reduction achievable in pulsed fluoroscopy at 25 pulses/sec through automatic adjustment of the pulse rate based on motion detection. As opposed to conventional exposure rate control systems, which use a measuring circle in the center of the field of view, grid control is based on a fine mesh of square cells overlaid on the entire fluoroscopic image. The system uses for exposure control only those cells that are covered by the object to be visualized. This is intended to ensure optimally exposed images regardless of the size, shape and position of the object to be visualized. The system also automatically detects any motion of the object. If a pulse rate of 25 pulses/sec is selected and no changes in the image are observed, the pulse rate used for pulsed fluoroscopy is gradually reduced, which may decrease the radiation exposure. The influence of grid control on image quality was examined using an anthropomorphic phantom. The dose reduction achieved with the help of object detection was determined by evaluating the examination data of 146 patients from 5 different countries. The image of the static phantom made with grid control was always optimally exposed, regardless of the position of the object to be visualized. With 25 pulses/sec, the average dose reduction resulting from object detection and automatic down-pulsing was 21%, and the maximum dose reduction was 60%. Grid control facilitates C-arm operation, since optimum image exposure can be obtained independently of object positioning. Object detection may reduce radiation exposure for both the patient and the operating staff.
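The cell-based metering idea described above can be sketched as follows: overlay a grid on the frame, treat cells whose mean brightness falls below a threshold as covered by the (X-ray-attenuating) object, and meter exposure from those cells only, so that bright background does not bias the measurement. All names, thresholds, and the error rule below are hypothetical illustrations, not the vendor's algorithm:

```python
import numpy as np

def metering_cells(frame, grid=8, object_thresh=120.0):
    """Mean brightness per grid cell; cells darker than the threshold are
    assumed to be covered by the attenuating object of interest."""
    h, w = frame.shape
    ch, cw = h // grid, w // grid
    means = np.array([[frame[i*ch:(i+1)*ch, j*cw:(j+1)*cw].mean()
                       for j in range(grid)] for i in range(grid)])
    return means, means < object_thresh

def exposure_error(frame, target=100.0):
    """Deviation of the object-region brightness from the exposure target,
    ignoring bright background cells that would bias a global measurement."""
    means, covered = metering_cells(frame)
    if not covered.any():              # no object detected: fall back to full frame
        return float(frame.mean() - target)
    return float(means[covered].mean() - target)
```

For a frame with a dark object on a bright background, metering only the covered cells reports the object as underexposed, whereas a whole-frame average would wrongly report overexposure, which is the failure mode of a fixed central measuring circle that grid control is meant to avoid.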
Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI
Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.
2017-01-01
Purpose: The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods: Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results: Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions: This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
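The "freely selectable frame rate" property arises because successive golden-angle spokes are rotated by approximately 111.25 degrees, so any contiguous run of spokes samples k-space angles nearly uniformly and the same acquisition can be re-binned into frames of any length afterwards. A sketch of the angle schedule (function names and the binning scheme are illustrative):

```python
import numpy as np

# Golden-angle increment: 180 degrees divided by the golden ratio.
GOLDEN_ANGLE = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 degrees

def spoke_angles(num_spokes):
    """Acquisition angle (degrees, modulo 180) of each radial spoke."""
    return (np.arange(num_spokes) * GOLDEN_ANGLE) % 180.0

def bin_spokes(total_spokes, spokes_per_frame):
    """Retrospectively group consecutively acquired spokes into frames."""
    angles = spoke_angles(total_spokes)
    n_frames = total_spokes // spokes_per_frame
    return [angles[k * spokes_per_frame:(k + 1) * spokes_per_frame]
            for k in range(n_frames)]

# The same data set re-binned at two different temporal resolutions:
frames_fast = bin_spokes(120, 8)    # many short frames (high frame rate)
frames_slow = bin_spokes(120, 40)   # fewer, better-sampled frames
```

Because the golden-angle increment is irrational with respect to 180 degrees, each frame's spokes are spread out with no large angular gaps, regardless of where the frame boundaries are drawn.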
Slow changing postural cues cancel visual field dependence on self-tilt detection.
Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L
2015-01-01
Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow-changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection differed substantially between visual-field-dependent and visual-field-independent subjects when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, in which slow-changing vestibular/somatosensory inputs prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.
Sontag, Jennah M; Barnes, Spencer R
2017-09-26
Visual framing can improve health-message effectiveness. Narrative structure provides a template for determining how to frame visuals to maximise message effectiveness. Participants (N = 190) were assigned to a message condition determined by segment (establisher, initial, peak), graphic (static, animated) and cancer (lung, melanoma). ANOVAs revealed that melanoma was more believable than lung cancer with static graphics at the establisher and peak segments; narratives were more believable with animated graphics at the peak segment; melanoma elicited more positive attitudes; and graphics at the peak segment produced the strongest intentions. Animated graphics visually framed to emphasise information at the establisher and peak segments suggest maximum effectiveness.
Edge enhancement algorithm for low-dose X-ray fluoroscopic imaging.
Lee, Min Seok; Park, Chul Hee; Kang, Moon Gi
2017-12-01
Low-dose X-ray fluoroscopy has continually evolved to reduce the radiation risk to patients during clinical diagnosis and surgery. However, the reduction in dose exposure degrades the quality of the acquired images. In general, an X-ray device has a time-average pre-processor to remove the generated quantum noise. However, this pre-processor causes blurring and artifacts within moving edge regions, and noise remains in the image. During high-pass filtering (HPF) to enhance edge detail, this residual noise is amplified. In this study, a 2D edge enhancement algorithm comprising region-adaptive HPF with a transient improvement (TI) method, as well as artifact and noise reduction (ANR), was developed for degraded X-ray fluoroscopic images. The proposed method was applied to a static scene pre-processed by a low-dose X-ray fluoroscopy device. First, the sharpness of the X-ray image was improved using region-adaptive HPF with the TI method, which sharpens edge details without overshoot problems. Then, an ANR filter using an edge-directional kernel was developed to remove the artifacts and noise that can occur during sharpening, while preserving edge details. The developed method was applied to low-dose X-ray fluoroscopic images, and the final images were compared visually and numerically with images improved using conventional edge enhancement techniques. The results indicate that the proposed method outperforms existing edge enhancement methods in terms of both objective criteria and the subjective visual perception of the actual X-ray fluoroscopic image. The developed edge enhancement algorithm performed well when applied to actual low-dose X-ray fluoroscopic images, not only improving sharpness but also removing artifacts and noise, including overshoot. Copyright © 2017 Elsevier B.V. All rights reserved.
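The baseline that the region-adaptive method improves on, plain high-pass sharpening, can be sketched as unsharp masking: add back the difference between the image and a low-pass version of itself. Note that this naive form exhibits exactly the overshoot artifact around edges that the TI step is designed to suppress (function names and parameters below are illustrative, not the paper's implementation):

```python
import numpy as np

def box_blur(img, win=5):
    """Simple mean filter; edges are handled by replicating border pixels."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].mean()
    return out

def unsharp_mask(img, amount=1.0, win=5):
    """Sharpen by adding back the high-pass residual (img - lowpass).
    Naive: produces over/undershoot on either side of strong edges."""
    low = box_blur(img, win)
    return img + amount * (img - low)
```

Applied to a step edge, the output overshoots above the bright level and undershoots below the dark level near the transition, which is why the paper pairs HPF with a transient-improvement step and an artifact/noise-reduction filter.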
Perceptual response to visual noise and display media
NASA Technical Reports Server (NTRS)
Durgin, Frank H.; Proffitt, Dennis R.
1993-01-01
The present project was designed to follow up an earlier investigation in which we studied perceptual adaptation in response to the use of Night Vision Goggles, or image intensification (I²) systems, such as those employed in the military. Our chief concern in the earlier studies was with the dynamic visual noise that is a byproduct of I² technology: under low light conditions, there is a great deal of 'snow' or sporadic 'twinkling' of pixels in the I² display, which becomes more salient as ambient light levels drop. Because prolonged exposure to static visual noise produces strong adaptation responses, we reasoned that the dynamic visual noise of I² displays might have a similar effect, which could have implications for their long-term use. However, in the series of experiments reported last year, no evidence at all of such aftereffects following extended exposure to I² displays was found. This finding surprised us, and led us to propose the following studies: (1) an investigation of dynamic visual noise and its capacity to produce aftereffects; and (2) an investigation of the perceptual consequences of characteristics of the display media.
Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma.
Gangeddula, Viswa; Ranchet, Maud; Akinwuntan, Abiodun E; Bollinger, Kathryn; Devos, Hannes
2017-01-01
Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), a dynamic visual field condition (C2), and a dynamic visual field condition with active driving (C3), using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal-Wallis tests. General linear models were employed to compare cognitive workload, recorded in real time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times in both groups (p < 0.05). However, drivers with glaucoma performed worse than control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1-Q3) 3 (2-6.50) vs. 2 (0.50-2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2-6) vs. 1 (0.50-2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls (p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma.
Stapf, Daniel; Franke, Andreas; Schreckenberg, Marcus; Schummers, Georg; Mischke, Karl; Marx, Nikolaus; Schauerte, Patrick; Knackstedt, Christian
2013-04-01
Three-dimensional (3D) imaging provides important information on cardiac anatomy during electrophysiological procedures. Real-time updates from modalities with high soft-tissue contrast are particularly advantageous during cardiac procedures. Therefore, a beat-to-beat 3D visualization of cardiac anatomy by intracardiac echocardiography (ICE) was developed and tested in phantoms and animals. An electronic phased-array 5-10 MHz ICE catheter (Acuson, AcuNav/Siemens Medical Solutions USA/64 elements) providing a 90° sector image was used for ICE imaging. A custom-made mechanical prototype controlled by a servo motor allowed automatic rotation of the ICE catheter around its longitudinal axis. During a single heartbeat, the ICE catheter was rotated and 2D images were acquired. Reconstruction into a 3D volume and rendering by a prototype software were performed beat to beat. After experimental validation using a rigid phantom, the system was tested in an animal study and afterwards, for quantitative validation, in a dynamic phantom. Beat-to-beat 3D reconstruction was technically feasible. However, twisting of the ICE catheter shaft due to friction and torsion was found, and rotation was hampered. Also, depiction of catheters was not always ensured in the case of parallel alignment. When a curved sheath was used to depict cardiac anatomy, the shape and dimensions of static and moving objects were not depicted congruently. Beat-to-beat 3D ICE imaging is feasible. However, the shape and dimensions of static and moving objects cannot always be displayed with the steadiness needed in the clinical setting. As catheter depiction is also limited, clinical use seems impossible at present.
el-Shirbiny, A; Fernandez, R; Zuckier, L S
1995-08-01
Tc-99m RBC scintigraphy is favored by many investigators because it provides the ability to image the abdomen over a prolonged period of time, thereby allowing identification of delayed bleeding sites that are frequently encountered due to the intermittent nature of gastrointestinal bleeding. The authors describe a case of bleeding scintigraphy with labeled red blood cells in which the bleeding site was identifiable only on the dynamic blood-flow and first static images. On later images, the labeled blood cells had spread throughout the colon, rendering localization of the actual bleeding site impossible. Two previous red blood cell scintigraphies and a subsequent contrast angiogram did not reveal sites of active bleeding. As illustrated by this unusual case, factors governing timing and visualization of abnormal bleeding sites are discussed, as is a differential diagnosis of abnormal foci of activity seen on the dynamic phase of bleeding scintigraphy.
Holographic space: presence and absence in time
NASA Astrophysics Data System (ADS)
Chang, Yin-Ren; Richardson, Martin
2017-03-01
In contemporary art, time-based media generally refers to artworks that have duration as a dimension and unfold to the viewer over time; these may be video, slide, film, computer-based, or audio works. As part of this category, holography pushes the visual narrative a step further: it presents a real 3D image that invites audiences to revisit a past scene at the moment of its recording in space and time. Audiences can also experience kinetic holographic aesthetics by continually moving the viewing point or illumination source, which creates dynamic visual effects. In other words, when the audience and hologram remain still, the holographic image can only be perceived statically. This unique form of expression is not created by virtual simulation; the principle of the wavefront reconstruction process sets holographic art apart from other time-based media. This project integrates 3D printing technology to explore the nature of material aesthetics, transitioning between the material world and holographic space. In addition, this series of works also reveals the unique temporal quality of a hologram's presence and absence, an ambiguous relationship inherent in this medium.
KEGGParser: parsing and editing KEGG pathway maps in Matlab.
Arakelyan, Arsen; Nersisyan, Lilit
2013-02-15
The KEGG pathway database is a collection of manually drawn pathway maps accompanied by KGML-format files intended for automated analysis. KGML files, however, do not contain the information required for complete reproduction of all the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser, a MATLAB-based tool for KEGG pathway parsing, semi-automatic fixing, editing, visualization, and analysis in the MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.
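KEGGParser itself is MATLAB code; purely as an illustration of the parsing step it performs, here is a minimal Python sketch that extracts entries and relations from a KGML document (element and attribute names follow the KGML format; the helper name is hypothetical):

```python
import xml.etree.ElementTree as ET

def parse_kgml(xml_text):
    """Return (nodes, edges) from a KGML document: nodes maps entry id ->
    (name, type); edges lists (entry1, entry2, relation type, subtypes)."""
    root = ET.fromstring(xml_text)
    nodes = {e.get("id"): (e.get("name"), e.get("type"))
             for e in root.iter("entry")}
    edges = [(r.get("entry1"), r.get("entry2"), r.get("type"),
              [s.get("name") for s in r.iter("subtype")])
             for r in root.iter("relation")]
    return nodes, edges
```

As the abstract notes, a real tool must go further: KGML alone does not encode everything shown in the drawn map, which is why KEGGParser adds semi-automatic fixing on top of parsing.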
Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B
2008-09-01
We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.
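The validity effect reported above is simply the mean reaction-time difference between invalid and valid trials. A minimal sketch with made-up RT values (the numbers are illustrative, not the study's data):

```python
import numpy as np

def validity_effect(rt_valid, rt_invalid):
    """Cueing validity effect: mean RT on invalid trials minus mean RT on
    valid trials. A positive value indicates the cue oriented attention
    toward the cued location."""
    return float(np.mean(rt_invalid) - np.mean(rt_valid))
```

In the study's terms, a TD-like group shows a reliably positive effect while an HFA-like pattern would yield a value near zero.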
Why do parallel cortical systems exist for the perception of static form and moving form?
Grossberg, S
1991-02-01
This article analyzes computational properties that clarify why the parallel cortical systems V1 → V2, V1 → MT, and V1 → V2 → MT exist for the perceptual processing of static visual forms and moving visual forms. The article describes a symmetry principle, called FM symmetry, that is predicted to govern the development of these parallel cortical systems by computing all possible ways of symmetrically gating sustained cells with transient cells and organizing these sustained-transient cells into opponent pairs of on-cells and off-cells whose output signals are insensitive to direction of contrast. This symmetric organization explains how the static form system (static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast and insensitive to direction of motion, whereas the motion form system (motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast but sensitive to direction of motion. FM symmetry clarifies why the geometries of static and motion form perception differ; for example, why the opposite orientation of vertical is horizontal (90 degrees), but the opposite direction of up is down (180 degrees). Opposite orientations and directions are embedded in gated dipole opponent processes that are capable of antagonistic rebound. Negative afterimages, such as the MacKay and waterfall illusions, are hereby explained, as are aftereffects of long-range apparent motion. These antagonistic rebounds help to control a dynamic balance between complementary perceptual states of resonance and reset. Resonance cooperatively links features into emergent boundary segmentations via positive feedback in a CC loop, and reset terminates a resonance when the image changes, thereby preventing massive smearing of percepts.
These complementary preattentive states of resonance and reset are related to analogous states that govern attentive feature integration, learning, and memory search in adaptive resonance theory. The mechanism used in the V1 → MT system to generate a wave of apparent motion between discrete flashes may also be used in other cortical systems to generate spatial shifts of attention. The theory suggests how the V1 → V2 → MT cortical stream helps to compute moving form in depth and how long-range apparent motion of illusory contours occurs. These results collectively argue against vision theories that espouse independent processing modules. Instead, specialized subsystems interact to overcome computational uncertainties and complementary deficiencies, to cooperatively bind features into context-sensitive resonances, and to realize symmetry principles that are predicted to govern the development of the visual cortex.
Space vehicle approach velocity judgments under simulated visual space conditions
NASA Technical Reports Server (NTRS)
Haines, R. F.
1989-01-01
Thirty-five volunteers responded when they first perceived an increase in the apparent size of a collimated, two-dimensional perspective image of an Orbiter vehicle. The variables of interest included the presence (or absence) of a fixed reticle within the field of view (FOV), background starfield velocity, initial range to the vehicle, and vehicle closure velocity. It was found that: 1) increasing vehicle approach velocity yielded a very small (but significant) effect of faster detection of vehicle movement; nevertheless, response variability was relatively large; 2) including the fixed reticle in the FOV produced significantly slower detection of vehicle radial movement; however, this occurred only at the largest range, and the magnitude of the effect was only about 15% of the one-sigma value; and 3) increasing background star velocity during this judgment led to slower detection of vehicle movement. While statistically significant, this effect was small and occurred primarily at the largest range. A possible explanation for the last two findings is that other static and dynamic objects within the visual field may compete for attention that would otherwise be available for judging image expansion; thus, the target's retinal image has to expand more than it otherwise would for its movement to be detected. This study also showed that the Proximity Operations Research Mockup at NASA/Ames can be used effectively to investigate a variety of visual judgment questions related to future space operations. These findings are discussed in relation to previous research and possible underlying mechanisms.
Minimizing Retraction by Pia-Arachnoidal 10-0 Sutures in Intrasulcal Dissection.
Uluc, Kutluay; Cikla, Ulas; Morkan, Deniz B; Sirin, Alperen; Ahmed, Azam S; Swanson, Kyle; Baskaya, Mustafa K
2018-07-01
In contemporary microneurosurgery, reducing retraction-induced injury to the brain is essential. Self-retaining retractor systems are commonly used to improve visualization and decrease repetitive microtrauma, but they can be cumbersome, and the force applied can cause focal ischemia or contusions, which may increase morbidity and mortality. Here, we describe a technique of retraction using 10-0 sutures in the arachnoid. Our objective was to evaluate the imaging and clinical results in patients in whom 10-0 suture retraction was used to aid the surgical procedure. Adjacent cortex was retracted by placing 10-0 nylon suture in the arachnoid of the bank or banks of the sulcus. The suture was secured to the adjacent dural edge using aneurysm clips, allowing easy adjustment of the amount of retraction. We retrospectively analyzed neurological outcome, signal changes on postoperative imaging, and ease of performing surgery in 31 patients with various intracranial lesions, including intracranial aneurysms, intra- and extra-axial tumors, and cerebral ischemia requiring arterial bypass. Clinically, there were no injuries, vascular events, or neurological deficits referable to the relevant cortex. Postoperative imaging did not show changes consistent with ischemia or contusion due to the retraction. This technique improved visualization and illumination of the surgical field in all cases. Retraction of the arachnoid can be used safely in cases where trans-sulcal dissection is required. This technique may improve initial visualization and decrease the need for dynamic or static retraction.
A computer-generated animated face stimulus set for psychophysiological research
Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James
2014-01-01
Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer generated, dynamic, face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event related potential (ERP), which is known to reflect differences in early stages of visual processing and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces controlled on low-level image characteristics that are applicable to a range of research questions in social perception. PMID:25028164
Novel in vivo techniques to visualize kidney anatomy and function.
Peti-Peterdi, János; Kidokoro, Kengo; Riquier-Brison, Anne
2015-07-01
Intravital imaging using multiphoton microscopy (MPM) has become an increasingly popular and widely used experimental technique in kidney research over the past few years. MPM allows deep optical sectioning of the intact, living kidney tissue with submicron resolution, which is unparalleled among intravital imaging approaches. MPM has removed a long-standing technical barrier in renal research to the study of several complex and inaccessible cell types and anatomical structures in vivo, in their native environment. Comprehensive and quantitative MPM studies of kidney structure and function have improved our understanding of the cellular and molecular mechanisms of the healthy and diseased kidney. This review summarizes recent in vivo MPM studies with a focus on the glomerulus and the filtration barrier, although select glomerulus-related renal vascular and tubular functions are also mentioned. The latest applications of serial MPM of the same glomerulus in vivo, in the intact kidney over several days during the progression of glomerular disease, are discussed. This visual approach, in combination with genetically encoded fluorescent markers of cell lineage, has helped track the fate and function (e.g., cell calcium changes) of single podocytes during the development of glomerular pathologies, and has provided visual proof of the highly dynamic, rather than static, nature of the glomerular environment. Future intravital imaging applications promise to further push the limits of optical microscopy and to advance our understanding of the mechanisms of kidney injury. MPM will also help in studying new mechanisms of tissue repair and regeneration, a cutting-edge area of kidney research.
Key clinical benefits of neuroimaging at 7T.
Trattnig, Siegfried; Springer, Elisabeth; Bogner, Wolfgang; Hangel, Gilbert; Strasser, Bernhard; Dymerska, Barbara; Cardoso, Pedro Lima; Robinson, Simon Daniel
2018-03-01
The growing interest in ultra-high-field MRI, with more than 35,000 MR examinations already performed at 7T, is related to improved clinical results with regard to morphological as well as functional and metabolic capabilities. Since the signal-to-noise ratio increases with the field strength of the MR scanner, the most evident application at 7T is to gain higher spatial resolution in the brain compared to 3T. Of specific clinical interest for neuro applications is the cerebral cortex at 7T, for the detection of changes in cortical structure, such as the visualization of cortical microinfarcts and cortical plaques in multiple sclerosis. In imaging of the hippocampus, even subfields of the internal hippocampal anatomy and pathology may be visualized with excellent spatial resolution. Using susceptibility-weighted imaging, the plaque-vessel relationship and iron accumulations in multiple sclerosis can be visualized, which may provide a prognostic factor of disease. Vascular imaging is a highly promising field for 7T and is dealt with in a separate dedicated article in this special issue. The static and dynamic blood oxygenation level-dependent contrast also increases with field strength, which significantly improves the accuracy of presurgical evaluation of vital brain areas before tumor removal. Improvements in acquisition and hardware technology have also resulted in an increasing number of MR spectroscopic imaging studies in patients at 7T. More recent parallel-imaging and short-TR acquisition approaches have overcome the limitations of scan time and spatial resolution, thereby allowing imaging matrix sizes of up to 128×128. The benefits of these acquisition approaches for the investigation of brain tumors and multiple sclerosis have been shown recently. Together, these possibilities demonstrate the feasibility and advantages of conducting routine diagnostic imaging and clinical research at 7T.
Neural correlates of the perception of dynamic versus static facial expressions of emotion.
Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C; Abler, Birgit
2011-04-20
This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. A group of 30 healthy subjects underwent fMRI while passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Irrespective of a specific emotion, dynamic stimuli selectively activated bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.
Schmuck, Sebastian; Mamach, Martin; Wilke, Florian; von Klot, Christoph A; Henkenberens, Christoph; Thackeray, James T; Sohns, Jan M; Geworski, Lilli; Ross, Tobias L; Wester, Hans-Juergen; Christiansen, Hans; Bengel, Frank M; Derlin, Thorsten
2017-06-01
The aims of this study were to gain mechanistic insights into prostate cancer biology using dynamic imaging and to evaluate the usefulness of multiple time-point 68Ga-prostate-specific membrane antigen (PSMA) I&T PET/CT for the assessment of primary prostate cancer before prostatectomy. Twenty patients with prostate cancer underwent 68Ga-PSMA I&T PET/CT before prostatectomy. The PET protocol consisted of early dynamic pelvic imaging, followed by static scans at 60 and 180 minutes postinjection (p.i.). SUVs, time-activity curves, quantitative analysis based on a 2-tissue compartment model, Patlak analysis, histopathology, and Gleason grading were compared between prostate cancer and benign prostate gland. Primary tumors were identified on both early dynamic and delayed imaging in 95% of patients. Tracer uptake was significantly higher in prostate cancer compared with benign prostate tissue at any time point (P ≤ 0.0003) and increased over time. Consequently, the tumor-to-nontumor ratio within the prostate gland improved over time (2.8 at 10 minutes vs 17.1 at 180 minutes p.i.). Tracer uptake at both 60 and 180 minutes p.i. was significantly higher in patients with higher Gleason scores (P < 0.01). The influx rate (Ki) was higher in prostate cancer than in reference prostate gland (0.055 [r = 0.998] vs 0.017 [r = 0.996]). Primary prostate cancer is readily identified on early dynamic and static delayed 68Ga-PSMA ligand PET images. The tumor-to-nontumor ratio in the prostate gland improves over time, supporting a role of delayed imaging for optimal visualization of prostate cancer.
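The Patlak analysis mentioned above regresses the tissue-to-plasma ratio against "stretched time", the running integral of the plasma input divided by its instantaneous value; for an irreversibly trapped tracer the late-phase slope estimates the net influx rate Ki. A generic sketch of this standard method, not the authors' pipeline (t_star and all names are illustrative):

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=10.0):
    """Patlak graphical analysis for an irreversibly trapped tracer:
    regress C_tissue/C_plasma on integral(C_plasma)/C_plasma and take the
    slope over the linear late phase (t >= t_star) as the influx rate Ki."""
    # cumulative trapezoidal integral of the plasma input
    integral = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
    x = integral / c_plasma
    y = c_tissue / c_plasma
    keep = t >= t_star                 # restrict to the linear late phase
    slope, _ = np.polyfit(x[keep], y[keep], 1)
    return float(slope)
```

On data generated exactly by the Patlak relation, the fitted slope recovers the true Ki.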
Visualizing topography: Effects of presentation strategy, gender, and spatial ability
NASA Astrophysics Data System (ADS)
McAuliffe, Carla
2003-10-01
This study investigated the effect of different presentation strategies (2-D static visuals, 3-D animated visuals, and 3-D interactive, animated visuals) and gender on achievement, time-spent-on visual treatment, and attitude during a computer-based science lesson about reading and interpreting topographic maps. The study also examined the relationship of spatial ability and prior knowledge to gender, achievement, and time-spent-on visual treatment. Students enrolled in high school chemistry-physics were pretested and given two spatial ability tests. They were blocked by gender and randomly assigned to one of three levels of presentation strategy or the control group. After controlling for the effects of spatial ability and prior knowledge with analysis of covariance, three significant differences were found between the versions: (a) the 2-D static treatment group scored significantly higher on the posttest than the control group; (b) the 3-D animated treatment group scored significantly higher on the posttest than the control group; and (c) the 2-D static treatment group scored significantly higher on the posttest than the 3-D interactive animated treatment group. Furthermore, the 3-D interactive animated treatment group spent significantly more time on the visual screens than the 2-D static treatment group. Analyses of student attitudes revealed that most students felt the landform visuals in the computer-based program helped them learn, but not in a way they would describe as fun. Significant differences in attitude were found by treatment and by gender. In contrast to findings from other studies, no gender differences were found on either of the two spatial tests given in this study. Cognitive load, cognitive involvement, and solution strategy are offered as three key factors that may help explain the results of this study. 
Implications for instructional design include suggestions about the use of 2-D static, 3-D animated and 3-D interactive animations as well as a recommendation about the inclusion of pretests in similar instructional programs. Areas for future research include investigating the effects of combinations of presentation strategies, continuing to examine the role of spatial ability in science achievement, and gaining cognitive insights about what it is that students do when learning to read and interpret topographic maps.
Maximizing Impact: Pairing interactive web visualizations with traditional print media
NASA Astrophysics Data System (ADS)
Read, E. K.; Appling, A.; Carr, L.; De Cicco, L.; Read, J. S.; Walker, J. I.; Winslow, L. A.
2016-12-01
Our Nation's rapidly growing store of environmental data makes new demands on researchers: to take on increasingly broad-scale, societally relevant analyses and to rapidly communicate findings to the public. Interactive web-based data visualizations now commonly supplement or even constitute journalistic pieces, and science journalism has followed suit. To maximize the impact of US Geological Survey (USGS) science, the USGS Office of Water Information Data Science team builds tools and products that combine traditional static research products (e.g., print journal articles) with web-based, interactive data visualizations that target non-scientific audiences. We developed a lightweight, open-source framework for web visualizations to reduce time to production. The framework provides templates for a data visualization workflow and for the packaging of text, interactive figures, and images into an appealing web interface with a standardized look and feel, usage tracking, and responsiveness. By partnering with subject matter experts to focus on timely, societally relevant issues, we use these tools to produce appealing visual stories targeting specific audiences, including managers, the general public, and scientists, on diverse topics including drought, microplastic pollution, and fisheries response to climate change. We describe the collaborative and technical methodologies used, give examples of how this approach has worked, and discuss challenges and opportunities for the future.
Breaking camouflage and detecting targets require optic flow and image structure information.
Pan, Jing Samantha; Bingham, Ned; Chen, Chang; Bingham, Geoffrey P
2017-08-01
Use of motion to break camouflage extends back to the Cambrian [In the Blink of an Eye: How Vision Sparked the Big Bang of Evolution (New York: Basic Books, 2003)]. We investigated the ability to break camouflage and to continue to see camouflaged targets after motion stops, an ability crucial for the survival of hunting predators. With camouflage, visual targets and distracters cannot be distinguished using only static image structure (i.e., appearance). Motion generates another source of optical information, optic flow, which breaks camouflage and specifies target locations. Optic flow calibrates image structure with respect to spatial relations among targets and distracters, and calibrated image structure makes previously camouflaged targets perceptible in a temporally stable fashion after motion stops. We investigated this proposal in laboratory experiments, comparing how many camouflaged targets were identified either with optic flow information alone or with combined optic flow and image structure information. Our results show that the combination of motion-generated optic flow and target-projected image structure information yielded efficient and stable perception of camouflaged targets.
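The core observation, that static image structure alone cannot distinguish a perfectly camouflaged target while motion can, is easy to illustrate: with matched texture, only the inter-frame difference exposes the target. A toy sketch (the binary texture and names are illustrative, not the study's stimuli):

```python
import numpy as np

def reveal_by_motion(frame_a, frame_b, thresh=0):
    """With perfect camouflage the target matches the background in any
    single frame; the absolute inter-frame difference exposes exactly the
    pixels the target changed by moving."""
    return np.abs(frame_b.astype(int) - frame_a.astype(int)) > thresh
```

In each individual frame the target region is statistically indistinguishable from the background; the difference image localizes it immediately.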
Full-frame video stabilization with motion inpainting.
Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung
2006-07-01
Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
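A common core of such stabilizers is smoothing the estimated camera trajectory and warping each frame by the difference between the smoothed and original paths. A minimal 1D sketch of that smoothing step only (moving average; the paper's motion inpainting and deblurring are not reproduced, and all names are illustrative):

```python
import numpy as np

def smooth_trajectory(path, radius=15):
    """Moving-average smoothing of a per-frame camera translation path.
    Edge padding keeps the output the same length as the input."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, (radius, radius), mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def correction_offsets(path, radius=15):
    """Per-frame offsets to warp each frame from the shaky path onto the
    smoothed one; warping creates the missing borders that full-frame
    methods must then fill in."""
    return smooth_trajectory(path, radius) - path
```

On a random-walk (shaky) trajectory, frame-to-frame jitter of the smoothed path is much smaller than that of the original.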
Gesture-Controlled Interfaces for Self-Service Machines
NASA Technical Reports Server (NTRS)
Cohen, Charles J.; Beach, Glenn
2006-01-01
Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs), for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure). A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure software gesture-recognition subsystem. Then a translator subsystem would convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, a gesture-controlled interface can extract more information from dynamic gestures than it can from static ones.
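One simple way to realize the static/dynamic split described above is to threshold inter-frame motion energy before dispatching to a shape-based or a shape-plus-motion recognizer. A toy sketch of that front-end decision only (the threshold value and names are illustrative, not the system described in the article):

```python
import numpy as np

def classify_gesture(frames, motion_thresh=0.01):
    """Coarse static/dynamic split for a gesture clip: mean absolute
    inter-frame difference above a threshold marks a moving (dynamic)
    gesture; below it, the sequence is treated as a held (static) pose to
    be recognized by shape alone."""
    motion = np.mean(np.abs(np.diff(frames, axis=0)))
    return "dynamic" if motion > motion_thresh else "static"
```

A real system would follow this with shape recognition for the static branch and trajectory matching for the dynamic branch.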
Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders
2012-01-01
Background: Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results: Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex-MTG-IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions: These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD. PMID:22889284
Quantifying hypoxia in human cancers using static PET imaging.
Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G; Milosevic, Michael; Hedley, David W; Jaffray, David A
2016-11-21
Compared to FDG, the signal of 18F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties (well-perfused without substantial necrosis or partitioning) for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in 'inter-corporal' transport properties (blood volume and clearance rate) as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K3, a quantity which quantifies hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.
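In its simplest irreversible two-tissue form, the kind of compartmental kinetics underlying such tracer analyses is a pair of linear ODEs whose trapped compartment makes tumour uptake, and hence the tumour-to-blood ratio, grow over time. A generic simulation sketch, not the authors' reaction-diffusion model (rate constants are illustrative, not fitted patient values):

```python
import numpy as np

def simulate_uptake(t, c_plasma, K1, k2, k3):
    """Irreversible two-tissue compartment model (free C1, trapped C2):
    dC1/dt = K1*Cp - (k2 + k3)*C1 ;  dC2/dt = k3*C1 ;  C_tissue = C1 + C2.
    Forward-Euler integration on the sampled plasma input."""
    c1 = np.zeros_like(t)
    c2 = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        c1[i] = c1[i - 1] + dt * (K1 * c_plasma[i - 1] - (k2 + k3) * c1[i - 1])
        c2[i] = c2[i - 1] + dt * k3 * c1[i - 1]
    return c1 + c2

# The net influx rate Ki = K1*k3/(k2 + k3) links this model to
# Patlak-type graphical analysis of the binding rate k3.
```

A higher binding rate k3 (more hypoxia in the FAZA picture) yields higher late uptake, and the tissue-to-plasma ratio rises over time, consistent with late static imaging.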
Ventura-Ríos, Lucio; Hernández-Díaz, Cristina; Ferrusquia-Toríz, Diana; Cruz-Arenas, Esteban; Rodríguez-Henríquez, Pedro; Alvarez Del Castillo, Ana Laura; Campaña-Parra, Alfredo; Canul, Efrén; Guerrero Yeo, Gerardo; Mendoza-Ruiz, Juan Jorge; Pérez Cristóbal, Mario; Sicsik, Sandra; Silva Luna, Karina
2017-12-01
This study aimed to test the reliability of ultrasound for grading synovitis in static and video images, evaluating grayscale and power Doppler (PD) separately and in combination. Thirteen trained rheumatologist ultrasonographers participated in two separate rounds reading 42 images, 15 static and 27 videos, of the 7-joint count [wrist, 2nd and 3rd metacarpophalangeal (MCP), 2nd and 3rd interphalangeal (IPP), 2nd and 5th metatarsophalangeal (MTP) joints]. The images were from six patients with rheumatoid arthritis and were acquired by one ultrasonographer. Synovitis was defined according to OMERACT. The scoring systems, grayscale and PD separately and combined (GLOESS, Global OMERACT-EULAR Score System), were reviewed before the exercise. Intra- and inter-reading reliability was calculated with weighted Cohen's kappa, interpreted according to Landis and Koch. Kappa values for inter-reading were good to excellent. The lowest kappa was for GLOESS in static images and the highest was for the same scoring in videos (k = 0.59 and 0.85, respectively). Excellent values were obtained for static PD in the 5th MTP joint and for PD video in the 2nd MTP joint. Results for GLOESS were in general good to moderate. Poor agreement was observed in the 3rd MCP and 3rd IPP in all kinds of images. Intra-reading agreement was greater in grayscale and GLOESS for static images than for videos (k = 0.86 vs. 0.77 and k = 0.86 vs. 0.71, respectively), but for PD it was greater in videos than in static images (k = 1.0 vs. 0.79). Overall, the reliability of synovitis scoring through static images and videos is good to moderate when using grayscale and PD separately or combined.
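The weighted Cohen's kappa behind these agreement figures penalizes disagreements by their ordinal distance rather than treating all disagreements equally. A self-contained sketch (the grade range and linear weighting are illustrative choices):

```python
import numpy as np

def weighted_kappa(a, b, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters scoring the same items on an
    ordinal 0..n_cat-1 scale (e.g. synovitis grades 0-3). 'linear' weights
    penalize by |i-j|; 'quadratic' by (i-j)**2."""
    a = np.asarray(a)
    b = np.asarray(b)
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()                                   # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance expectation
    d = np.abs(np.arange(n_cat)[:, None] - np.arange(n_cat)[None, :])
    w = d if weights == "linear" else d ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement gives kappa = 1; systematic maximal disagreement on a two-category scale gives -1, matching the Landis and Koch interpretation bands used in the study.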
Quantifying hypoxia in human cancers using static PET imaging
NASA Astrophysics Data System (ADS)
Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G.; Milosevic, Michael; Hedley, David W.; Jaffray, David A.
2016-11-01
Compared to FDG, the signal of 18F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties—well-perfused without substantial necrosis or partitioning—for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in ‘inter-corporal’ transport properties—blood volume and clearance rate—as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K3, a quantity that characterizes hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.
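The normalization step can be illustrated with a minimal sketch. All activity values and the classification threshold below are hypothetical, chosen only to show the tumour-to-blood ratio computation:

```python
import numpy as np

# Hypothetical voxel activities (kBq/mL) from a static FAZA PET image
tumour_voxels = np.array([1.8, 2.4, 3.1, 2.0])
blood_activity = 2.0  # activity in a blood sample or blood-pool ROI

# Tumour-to-blood ratio (TBR): normalizing by blood reduces sensitivity
# to inter-patient transport properties and protocol differences
tbr = tumour_voxels / blood_activity

# Voxels with TBR above a threshold are classified as hypoxic
# (the threshold value here is illustrative only)
hypoxic_fraction = np.mean(tbr > 1.2)
```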
Whiplash injuries: is there a role for imaging?
Van Geothem, J W; Biltjes, I G; van den Hauwe, L; Parizel, P M; De Schepper, A M
1996-03-01
Whiplash describes the manner in which the head is moved suddenly to produce a sprain in the neck, and typically occurs after rear-end automobile collisions. It is one of the most common mechanisms of injury to the cervical spine. Although considered by some to be a form of compensation neurosis, whiplash injuries appear to be real, and evidence suggests that they are a potential cause of significant impairment. Symptoms of cervical whiplash injury include neck pain and stiffness, interscapular pain, arm pain and/or occipital headache, and many whiplash patients have persistent complaints. Cervical roentgenography and conventional or computed tomography (CT) may show dislocations, subluxations and fractures in severely traumatized patients, but often fail to determine or visualize the cause of a whiplash syndrome. Magnetic resonance imaging (MRI), however, is able to assess different types of soft-tissue lesions related to whiplash injuries. Dynamic imaging may show functional disturbances. More widespread use of flexion/extension views, high-resolution static MRI and especially dynamic MRI should improve the correlation between imaging findings and patients' complaints.
Dynamic placement of plasmonic hotspots for super-resolution surface-enhanced Raman scattering.
Ertsgaard, Christopher T; McKoskey, Rachel M; Rich, Isabel S; Lindquist, Nathan C
2014-10-28
In this paper, we demonstrate dynamic placement of locally enhanced plasmonic fields using holographic laser illumination of a silver nanohole array. To visualize these focused "hotspots", the silver surface was coated with various biological samples for surface-enhanced Raman spectroscopy (SERS) imaging. Due to the large field enhancements, blinking behavior of the SERS hotspots was observed and processed using a stochastic optical reconstruction microscopy algorithm enabling super-resolution localization of the hotspots to within 10 nm. These hotspots were then shifted across the surface in subwavelength (<100 nm for a wavelength of 660 nm) steps using holographic illumination from a spatial light modulator. This created a dynamic imaging and sensing surface, whereas static illumination would only have produced stationary hotspots. Using this technique, we also show that such subwavelength shifting and localization of plasmonic hotspots has potential for imaging applications. Interestingly, illuminating the surface with randomly shifting SERS hotspots was sufficient to completely fill in a wide field of view for super-resolution chemical imaging.
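The super-resolution localization step rests on the fact that the centre of a blinking hotspot can be estimated far below the pixel size. Below is a minimal centroid-based sketch with a synthetic spot; real STORM-style pipelines typically fit a 2D Gaussian instead, but the principle is the same:

```python
import numpy as np

def localize(image):
    """Estimate a spot centre by its intensity-weighted centroid."""
    y, x = np.indices(image.shape)
    total = image.sum()
    return (y * image).sum() / total, (x * image).sum() / total

# Synthetic diffraction-limited spot centred at (12.0, 8.0) on a 25x17 grid;
# the width (sigma = 2 pixels) is an arbitrary illustrative choice
yy, xx = np.indices((25, 17))
spot = np.exp(-((yy - 12.0) ** 2 + (xx - 8.0) ** 2) / (2 * 2.0 ** 2))
cy, cx = localize(spot)
```

With adequate photon counts, localization precision scales far below the diffraction limit, which is what allows hotspots to be pinned down to within ~10 nm.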
Low-cost telepresence for collaborative virtual environments.
Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee
2007-01-01
We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
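The IR-based segmentation idea can be sketched as simple thresholding: pixels bright in the near-IR image (the IR-lit user) form the mask applied to the colour frame. The frame sizes, intensity values, and threshold below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
ir = rng.uniform(0.0, 0.3, size=(4, 4))  # dark background in the near-IR image
ir[1:3, 1:3] = 0.9                       # IR-lit "user" region
color = np.ones((4, 4, 3))               # hypothetical colour frame

mask = ir > 0.5                          # threshold chosen for illustration
segmented = color * mask[..., None]      # background pixels set to black
```

In the actual system the mask and colour image come from different cameras, so a depth-based image warp aligns them before merging; the thresholding step above is only the segmentation core.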
Zhang, Di; Sessa, Salvatore; Kong, Weisheng; Cosentino, Sarah; Magistro, Daniele; Ishii, Hiroyuki; Zecca, Massimiliano; Takanishi, Atsuo
2015-11-01
Current training for laparoscopy focuses only on the enhancement of manual skill and does not give advice on improving trainees' posture. However, a poor posture can result in increased static muscle loading, faster fatigue, and impaired psychomotor task performance. In this paper, the authors propose a method, named subliminal persuasion, which gives the trainee real-time advice for correcting upper limb posture during laparoscopic training as an expert would, but with a smaller increase in workload. A 9-axis inertial measurement unit was used to compute the upper limb posture, and a Detection Reaction Time device was developed and used to measure the workload. A monitor displayed not only images from the laparoscope, but also a visual stimulus, a transparent red cross superimposed on the laparoscopic images, when the trainee had incorrect upper limb posture. One group was exposed, when their posture was incorrect during training, to short (about 33 ms) subliminal visual stimuli. The control group instead was exposed to longer (about 660 ms) supraliminal visual stimuli. We found that subliminal visual stimulation is a valid method to improve trainees' upper limb posture during laparoscopic training. Moreover, the additional workload required for subconscious processing of subliminal visual stimuli is less than that required for supraliminal visual stimuli, which are processed at the conscious level. We propose subliminal persuasion as a method to give subconscious real-time stimuli to improve upper limb posture during laparoscopic training. Its effectiveness and efficiency were confirmed against supraliminal stimuli transmitted at the conscious level: subliminal persuasion improved the upper limb posture of trainees with a smaller increase in the overall workload.
Remote histology learning from static versus dynamic microscopic images.
Mione, Sylvia; Valcke, Martin; Cornelissen, Maria
2016-05-06
Histology is the study of microscopic structures in normal tissue sections. Curriculum redesign in medicine has led to a decrease in the use of optical microscopes during practical classes. Other imaging solutions have been implemented to facilitate remote learning. With advancements in imaging technologies, learning material can now be digitized. Digitized microscopy images can be presented in either a static or a dynamic format. This study of remote histology education identifies whether dynamic pictures are superior to static images for the acquisition of histological knowledge. Test results of two cohorts of second-year Bachelor in Medicine students at Ghent University were analyzed in two consecutive academic years: Cohort 1 (n = 190) and Cohort 2 (n = 174). Students in Cohort 1 worked with static images whereas students in Cohort 2 were presented with dynamic images. ANCOVA was applied to study differences in microscopy performance scores between the two cohorts, taking into account any possible initial differences in prior knowledge. The results show that practical histology scores are significantly higher with dynamic images than with static images (F (1,361) = 15.14, P < 0.01), regardless of students' gender and performance level. This finding can be explained in accordance with cognitivist learning theory. Since the findings suggest that knowledge construction with dynamic pictures is stronger than with static images, dynamic images should be introduced in remote settings for microscopy education. Further implementation within a larger electronic learning management system needs to be explored in future research. Anat Sci Educ 9: 222-230. © 2015 American Association of Anatomists.
Takacs, Zsofia K.; Bus, Adriana G.
2016-01-01
The present study provides experimental evidence regarding 4–6-year-old children’s visual processing of animated versus static illustrations in storybooks. Thirty-nine participants listened to an animated and a static book, both three times, while eye movements were registered with an eye-tracker. Outcomes corroborate the hypothesis that it is specifically motion that attracts children’s attention while looking at illustrations. It is proposed that animated illustrations that are well matched to the text of the story guide children to those parts of the illustration that are important for understanding the story. This may explain why animated books resulted in better comprehension than static books. PMID:27790183
Klop, D; Engelbrecht, L
2013-12-01
This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.
Trautmann-Lengsfeld, Sina Alexa; Domínguez-Borràs, Judith; Escera, Carles; Herrmann, Manfred; Fehr, Thorsten
2013-01-01
A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoke more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP) and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI-constrained source analysis on static emotional face stimuli indicated the spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream for both emotional valences when compared to the neutral condition in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus) and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays convey more information, reflected in complex neural networks, in particular because their changing features potentially trigger sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides an advanced insight into the spatio-temporal characteristics of emotional face processing, by also revealing additional neural generators not identifiable by the use of fMRI alone. PMID:23818974
Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake
NASA Astrophysics Data System (ADS)
Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.
2011-12-01
It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience who may not necessarily be well-versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using "snapshot" static images to represent earthquake data is now becoming obsolete, and the favored medium to explain complex wave propagation inside the solid earth and interactions among earthquakes is now visualizations that include auditory information. Here, we convert seismic data into visualizations that include sounds, the latter being a term known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, firecrackers, etc. We present a free software package that uses simple MATLAB tools and Apple Inc.'s QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc.).
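The audification idea reduces to a change of playback rate: a seismogram sampled at tens of hertz is replayed fast enough that ground-motion frequencies enter the audible range. A minimal sketch, with an assumed sampling rate and speed-up factor (the paper's actual parameters are not stated in the abstract):

```python
import numpy as np

# Audification: play a seismogram back faster so that seismic frequencies
# (typically below ~20 Hz) are shifted into the audible range.
fs_seismic = 100                 # assumed seismogram sampling rate (Hz)
speedup = 200                    # hypothetical playback speed-up factor
fs_audio = fs_seismic * speedup  # effective audio sampling rate

# Synthetic 1 Hz "seismic" signal standing in for real waveform data
n_samples = 600 * fs_seismic     # ten minutes of data
t = np.arange(n_samples) / fs_seismic
trace = np.sin(2 * np.pi * 1.0 * t)

# After speed-up, the 1 Hz ground motion is heard as a 200 Hz tone
audible_freq = 1.0 * speedup
```

Writing `trace` to a WAV file with sample rate `fs_audio` (e.g. via the standard-library `wave` module) then yields the audible rendering.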
Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not just for young scientists but also for people of all ages.
Stereoscopy in Static Scientific Imagery in an Informal Education Setting: Does It Matter?
NASA Astrophysics Data System (ADS)
Price, C. Aaron; Lee, H.-S.; Malatesta, K.
2014-12-01
Stereoscopic technology (3D) is rapidly becoming ubiquitous across research, entertainment and informal educational settings. Children of today may grow up never knowing a time when movies, television and video games were not available stereoscopically. Despite this rapid expansion, the field's understanding of the impact of stereoscopic visualizations on learning is rather limited. Much of the excitement of stereoscopic technology could be due to a novelty effect, which will wear off over time. This study controlled for the novelty factor using a variety of techniques. On the floor of an urban science center, 261 children were shown 12 photographs and visualizations of highly spatial scientific objects and scenes. The images were randomly shown in either traditional (2D) format or in stereoscopic format. The children were asked two questions of each image—one about a spatial property of the image and one about a real-world application of that property. At the end of the test, the child was asked to draw from memory the last image they saw. Results showed no overall significant difference in response to the questions associated with 2D or 3D images. However, children who saw the final slide only in 3D drew more complex representations of the slide than those who did not. Results are discussed through the lenses of cognitive load theory and the effect of novelty on engagement.
Visually-induced reorientation illusions as a function of age.
Howard, I P; Jenkin, H L; Hu, G
2000-09-01
We reported previously that supine subjects inside a furnished room who are tilted 90 degrees may experience themselves and the room as upright to gravity. We call this the levitation illusion because it creates sensations similar to those experienced in weightlessness. It is an example of a larger class of novel static reorientation illusions that we have explored. Stationary subjects inside a furnished room rotating about a horizontal axis experience complete self rotation about the roll or pitch axis. We call this a dynamic reorientation illusion. We have determined the incidence of static and dynamic reorientation illusions in subjects ranging in age from 9 to 78 yr. Some 90% of subjects of all ages experienced the dynamic reorientation illusion but the percentage of subjects experiencing static reorientation illusions increased with age. We propose that the dynamic illusion depends on a primitive mechanism of visual-vestibular interaction but that static reorientation illusions depend on learned visual cues to the vertical arising from the perceived tops and bottoms of familiar objects and spatial relationships between objects. Older people become more dependent on visual polarity to compensate for loss in vestibular sensitivity. Of 9 astronauts, 4 experienced the levitation illusion. The relationship between susceptibility to reorientation illusions on Earth and in space has still to be determined. We propose that the Space Station will be less disorienting if pictures of familiar objects line the walls.
Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter
2011-01-01
The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…
Audiovisual associations alter the perception of low-level visual motion
Kafaligonul, Hulusi; Oluk, Can
2015-01-01
Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system and that early-level visual motion processing plays some role. PMID:25873869
Schröter, Tobias J.; Johnson, Shane B.; John, Kerstin; Santi, Peter A.
2011-01-01
We report replacement of one side of a static-illumination, dual-sided, thin-sheet laser imaging microscope (TSLIM) with an intensity-modulated laser scanner in order to implement structured illumination (SI) and HiLo image demodulation techniques for background rejection. The new system is equipped with one static and one scanned light-sheet and is called a scanning thin-sheet laser imaging microscope (sTSLIM). It is an optimized version of a light-sheet fluorescence microscope that is designed to image large specimens (<15 mm in diameter). In this paper we describe the hardware and software modifications to TSLIM that allow for static and uniform light-sheet illumination with SI and HiLo image demodulation. The static light-sheet has a thickness of 3.2 µm, whereas the scanned side has a light-sheet thickness of 4.2 µm. The scanned side images specimens up to 15 mm in size with subcellular resolution (<1 µm lateral and <4 µm axial). SI and HiLo produce superior contrast compared to both the uniform static and scanned light-sheets. HiLo contrast was greater than that of SI, and HiLo is faster and more robust than SI because it produces images in two-thirds of the time and exhibits fewer intensity streaking artifacts. PMID:22254177
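The structured-illumination demodulation underlying SI can be sketched in one dimension with three phase-shifted images and square-law detection; the in-focus signal, pattern frequency, and background level below are synthetic stand-ins, not the instrument's parameters:

```python
import numpy as np

# Three images are acquired with the illumination pattern shifted by
# 0, 120 and 240 degrees; square-law detection recovers the modulated
# (in-focus) signal and rejects the unmodulated out-of-focus background.
n = 64
x = np.arange(n)
infocus = 1.0 + 0.5 * np.sin(2 * np.pi * x / 16)  # hypothetical in-focus signal
background = 2.0                                  # defocused, unmodulated light

def pattern(phase):
    return 0.5 * (1 + np.cos(2 * np.pi * x / 8 + phase))

i1, i2, i3 = (infocus * pattern(p) + background
              for p in (0, 2 * np.pi / 3, 4 * np.pi / 3))

# The background cancels in the pairwise differences; the result is
# proportional to the in-focus modulation amplitude at every pixel
demod = np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```

HiLo then combines the low spatial frequencies of such a demodulated image with the high frequencies of the uniform image, which is why it needs only two rather than three exposures.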
Asou, Hiroya; Imada, N; Sato, T
2010-06-20
On coronary MR angiography (CMRA), cardiac motion degrades image quality. To improve image quality, detection of cardiac motion, and especially of individual coronary motion, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of an individual coronary artery. First, coronary cine MRI was acquired at the level of about 3 cm below the aortic valve (80 images per R-R interval). Chronological changes of the signal were evaluated by Fourier transformation of each pixel of the images. Noise reduction was performed with subtraction and extraction processes. To extract structures with greater motion, such as the coronary arteries, morphological filtering and labeling processes were added. Using these image-processing steps, individual coronary motion was extracted and the individual coronary static time was calculated automatically. We compared images obtained with the ordinary manual method and the new automated method in 10 healthy volunteers. Coronary static times calculated with our method were shorter than those of the ordinary manual method, and scan time became about 10% longer than with the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time, based on chronological Fourier transformation, has the potential to improve the image quality of CMRA with easy processing.
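The per-pixel temporal Fourier analysis described above can be sketched on synthetic data; the stack dimensions and the single pulsating "coronary" pixel are invented for illustration:

```python
import numpy as np

# Hypothetical cine stack: 80 frames per cardiac cycle, 64x64 pixels
frames, ny, nx = 80, 64, 64
rng = np.random.default_rng(0)
cine = rng.normal(size=(frames, ny, nx))

# Add one strongly pulsating pixel standing in for a coronary artery
cine[:, 30, 30] += 10 * np.sin(2 * np.pi * np.arange(frames) / frames)

# Temporal FFT of every pixel; large non-DC power marks moving structures
spectrum = np.fft.rfft(cine, axis=0)
motion_power = (np.abs(spectrum[1:]) ** 2).sum(axis=0)  # drop the DC term

# The pixel with the most temporal variation is the pulsating one
peak = np.unravel_index(motion_power.argmax(), motion_power.shape)
```

In the actual method, the phase of the dominant cardiac frequency at such pixels indicates when in the cycle the artery is stationary, giving the static time window for triggering the CMRA acquisition.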
Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma
Gangeddula, Viswa; Ranchet, Maud; Akinwuntan, Abiodun E.; Bollinger, Kathryn; Devos, Hannes
2017-01-01
Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), dynamic visual field condition (C2), and dynamic visual field condition with active driving (C3) using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal–Wallis tests. General linear models were employed to compare cognitive workload, recorded in real-time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times, in both groups (p < 0.05). However, drivers with glaucoma performed worse than did control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1–Q3) 3 (2–6.50) vs. controls: 2 (0.50–2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2–6) vs. controls: 1 (0.50–2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls (p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma. PMID:28912712
Imaging mycobacterial growth and division with a fluorogenic probe.
Hodges, Heather L; Brown, Robert A; Crooks, John A; Weibel, Douglas B; Kiessling, Laura L
2018-05-15
Control and manipulation of bacterial populations requires an understanding of the factors that govern growth, division, and antibiotic action. Fluorescent and chemically reactive small molecule probes of cell envelope components can visualize these processes and advance our knowledge of cell envelope biosynthesis (e.g., peptidoglycan production). Still, fundamental gaps remain in our understanding of the spatial and temporal dynamics of cell envelope assembly. Previously described reporters require steps that limit their use to static imaging. Probes that can be used for real-time imaging would advance our understanding of cell envelope construction. To this end, we synthesized a fluorogenic probe that enables continuous live cell imaging in mycobacteria and related genera. This probe reports on the mycolyltransferases that assemble the mycolic acid membrane. This peptidoglycan-anchored bilayer-like assembly functions to protect these cells from antibiotics and host defenses. Our probe, quencher-trehalose-fluorophore (QTF), is an analog of the natural mycolyltransferase substrate. Mycolyltransferases process QTF by diverting their normal transesterification activity to hydrolysis, a process that unleashes fluorescence. QTF enables high contrast continuous imaging and the visualization of mycolyltransferase activity in cells. QTF revealed that mycolyltransferase activity is augmented before cell division and localized to the septa and cell poles, especially at the old pole. This observed localization suggests that mycolyltransferases are components of extracellular cell envelope assemblies, in analogy to the intracellular divisomes and polar elongation complexes. We anticipate QTF can be exploited to detect and monitor mycobacteria in physiologically relevant environments.
Visualizing 3D data obtained from microscopy on the Internet.
Pittet, J J; Henn, C; Engel, A; Heymann, J B
1999-01-01
The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. Real-world objects, traditionally presented as static two-dimensional images on the limited medium of paper, can now be shown interactively in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cubes approach, allowing interactive isosurfacing. A second node does three-dimensional (3D) texture-based volume-rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, which aims to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
NASA Astrophysics Data System (ADS)
Martinez, P.; Kasper, M.; Costille, A.; Sauvage, J. F.; Dohlen, K.; Puget, P.; Beuzit, J. L.
2013-06-01
Context. Observing sequences have shown that the major noise source limitation in high-contrast imaging is the presence of quasi-static speckles. The timescale on which quasi-static speckles evolve is determined by various factors, mechanical or thermal deformations among others. Aims: Understanding these time-variable instrumental speckles and, especially, their interaction with other aberrations, referred to as the pinning effect, is paramount for the search for faint stellar companions. The temporal evolution of quasi-static speckles is, for instance, required for quantifying the gain expected when using angular differential imaging (ADI) and for determining the interval on which speckle nulling techniques must be carried out. Methods: Following an early analysis of a time series of adaptively corrected, coronagraphic images obtained under laboratory conditions with the high-order test bench (HOT) at ESO Headquarters, we confirm our results with new measurements carried out with the SPHERE instrument during its final test phase in Europe. The analysis of the residual speckle pattern in both direct and differential coronagraphic images enables the characterization of the temporal stability of quasi-static speckles. Data were obtained in a thermally actively controlled environment reproducing realistic conditions encountered at the telescope. Results: The temporal evolution of the quasi-static wavefront error exhibits a linear power law, which can be used to model quasi-static speckle evolution in the context of forthcoming high-contrast imaging instruments, with implications for instrumentation (design, observing strategies, data reduction). Such a model can be used, for instance, to derive the timescale on which non-common path aberrations must be sensed and corrected. We found in our data that the quasi-static wavefront error increases at a rate of ~0.7 Å per minute.
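Estimating the drift rate of the quasi-static wavefront error can be sketched as a one-parameter least-squares fit; the synthetic series below simply assumes the ~0.7 Å per minute figure reported, with an arbitrary noise level:

```python
import numpy as np

# Synthetic drift of the quasi-static wavefront error (in Angstrom),
# modelled as linear growth at 0.7 A/min plus measurement noise
minutes = np.arange(1, 61)
rng = np.random.default_rng(1)
wfe = 0.7 * minutes + rng.normal(scale=0.5, size=minutes.size)

# Least-squares estimate of the drift rate (slope through the origin)
rate = (minutes @ wfe) / (minutes @ minutes)
```

Such a fitted rate is what sets the timescale on which non-common path aberrations must be re-sensed and corrected before speckle pinning degrades the contrast.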
A bio-inspired method and system for visual object-based attention and segmentation
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak
2010-04-01
This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.
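The flooding step that breaks an image into proto-objects can be sketched as a feature-similarity flood fill; the feature map, seed, and tolerance below are hypothetical, standing in for the system's saliency-derived features:

```python
from collections import deque

import numpy as np

def flood_region(feature_map, seed, tol=0.2):
    """Grow a proto-object region from a salient seed pixel, joining
    4-connected neighbours whose feature value is similar enough."""
    h, w = feature_map.shape
    seen = {seed}
    queue = deque([seed])
    base = feature_map[seed]
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(feature_map[ny, nx] - base) <= tol):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

# A bright 2x2 patch on a dark background forms one proto-object
fmap = np.zeros((5, 5))
fmap[1:3, 1:3] = 1.0
region = flood_region(fmap, (1, 1))
```

The full system then merges adjacent small regions with similar features into larger proto-objects and passes their boundaries to the recognition stage.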
Risk Management Collaboration through Sharing Interactive Graphics
NASA Astrophysics Data System (ADS)
Slingsby, Aidan; Dykes, Jason; Wood, Jo; Foote, Matthew
2010-05-01
Risk management involves the cooperation of scientists, underwriters and actuaries, all of whom analyse data to support decision-making. Results are often disseminated through static documents with graphics that convey the message the analyst wishes to communicate. Interactive graphics are an increasingly popular means of communicating the results of data analyses because they enable other parties to explore and visually analyse some of the data themselves prior to and during discussion. Discussion around interactive graphics can occur synchronously in face-to-face meetings or with video-conferencing and screen sharing, or it can occur asynchronously through web-sites such as ManyEyes, web-based fora, blogs, wikis and email. A limitation of approaches that do not involve screen sharing is the difficulty of sharing the insights gained from interacting with the graphic. Static images can be shared, but these cannot themselves be interacted with, producing a discussion bottleneck (Baker, 2008). We address this limitation by allowing the state and configuration of graphics to be shared (rather than static images) so that a user can reproduce someone else's graphic, interact with it and then share the results accompanied by some commentary. HiVE (Slingsby et al, 2009) is a compact and intuitive text-based language that has been designed for this purpose. We will describe the vizTweets project (a 9-month project funded by JISC) in which we are applying these principles to insurance risk management in the context of the Willis Research Network, the world's largest collaboration between the insurance industry and academia. The project aims to extend HiVE to meet the needs of the sector, to design and implement freely available web services and tools, and to provide case studies. We will present a case study that demonstrates the potential of this approach for collaboration within the Willis Research Network. Baker, D. 
2008. Towards Transparency in Visualisation Based Research. AHRC ICT Methods Network Expert Workshop. Available at http://www.viznet.ac.uk/documents. Slingsby, A., Dykes, J. and Wood, J. 2009. Configuring Hierarchical Layouts to Address Research Questions. IEEE Transactions on Visualization and Computer Graphics 15 (6), Nov-Dec 2009, pp. 977-984.
Patel, Dipesh E; Cumberland, Phillippa M; Walters, Bronwen C; Russell-Eggitt, Isabelle; Brookes, John; Papadopoulos, Maria; Khaw, Peng Tee; Viswanathan, Ananth C; Garway-Heath, David; Cortina-Borja, Mario; Rahi, Jugnoo S
2018-02-01
There is limited evidence to support the development of guidance for visual field testing in children with glaucoma. The objective was to compare different static and combined static/kinetic perimetry approaches in children with glaucoma. This was a cross-sectional, observational study recruiting children prospectively between May 2013 and June 2015 at 2 tertiary specialist pediatric ophthalmology centers in London, England (Moorfields Eye Hospital and Great Ormond Street Hospital). The study included 65 children aged 5 to 15 years with glaucoma (108 affected eyes). The main outcome was a comparison of test quality and outcomes for static and combined static/kinetic techniques, with respect to their ability to quantify glaucomatous loss. Children performed perimetric assessments using Humphrey static (Swedish Interactive Thresholding Algorithm 24-2 FAST) and Octopus combined static tendency-oriented perimetry/kinetic perimetry (isopter V4e, III4e, or I4e) in a single sitting, using standardized clinical protocols, administered by a single examiner. Information was collected about test duration, completion, and quality (using automated reliability indices and our qualitative Examiner-Based Assessment of Reliability score). Perimetry outputs were scored using the Aulhorn and Karmeyer classification. One affected eye in 19 participants was retested with Swedish Interactive Thresholding Algorithm 24-2 FAST and 24-2 standard algorithms. Sixty-five children (33 girls [50.8%]), with a median age of 12 years (interquartile range, 9-14 years), were tested. Test quality (Examiner-Based Assessment of Reliability score) improved with increasing age for both Humphrey and Octopus strategies and was equivalent in children older than 10 years (McNemar test, χ2 = 0.33; P = .56), but better-quality tests with Humphrey perimetry were achieved in younger children (McNemar test, χ2 = 4.0; P = .05). 
Octopus and Humphrey static MD values worse than or equal to -6 dB showed disagreement (Bland-Altman, mean difference, -0.70; limit of agreement, -7.74 to 6.35) but were comparable when greater than this threshold (mean difference, -0.03; limit of agreement, -2.33 to 2.27). Visual field classification scores for static perimetry tests showed substantial agreement (linearly weighted κ, 0.79; 95% CI, 0.65-0.93), although 25 of 80 (31%) were graded with a more severe defect for Octopus static perimetry. Of the 7 severe cases of visual field loss (grade 5), 5 had lower kinetic than static classification scores. A simple static perimetry approach potentially yields high-quality results in children younger than 10 years. For children older than 10 years, without penalizing quality, the addition of kinetic perimetry enabled measurement of far-peripheral sensitivity, which is particularly useful in children with severe visual field restriction.
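The mean differences and limits of agreement quoted above are standard Bland-Altman statistics, which can be computed with the usual bias ± 1.96 SD formula. The sketch below is generic; the example numbers are illustrative, not the study's data.

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements:
    returns (bias, lower limit, upper limit), where bias is the mean of
    the pairwise differences and the 95% limits of agreement are
    bias +/- 1.96 * SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Agreement is then judged by whether the limits of agreement are clinically acceptable, as in the comparison of Octopus and Humphrey MD values above.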
Visual supports for shared reading with young children: the effect of static overlay design.
Wood Jackson, Carla; Wahlquist, Jordan; Marquis, Cassandra
2011-06-01
This study examined the effects of two types of static overlay design (visual scene display and grid display) on 39 children's use of a speech-generating device during shared storybook reading with an adult. This pilot project included two groups: preschool children with typical communication skills (n = 26) and with complex communication needs (n = 13). All participants engaged in shared reading with two books using each visual layout on a speech-generating device (SGD). The children averaged a greater number of activations when presented with a grid display during introductory exploration and free play. There was a large effect of the static overlay design on the number of silent hits, evidencing more silent hits with visual scene displays. On average, the children demonstrated relatively few spontaneous activations of the speech-generating device while the adult was reading, regardless of overlay design. When responding to questions, children with communication needs appeared to perform better when using visual scene displays, but the effect of display condition on the accuracy of responses to wh-questions was not statistically significant. In response to an open-ended question, children with communication disorders demonstrated more frequent activations of the SGD using a grid display than a visual scene display. Suggestions for future research as well as potential implications for designing AAC systems for shared reading with young children are discussed.
Lemay, Jean-François; Gagnon, Dany; Duclos, Cyril; Grangeon, Murielle; Gauthier, Cindy; Nadeau, Sylvie
2013-06-01
Postural steadiness while standing is impaired in individuals with spinal cord injury (SCI) and could potentially be associated with increased reliance on visual inputs. The purpose of this study was to compare individuals with SCI and able-bodied participants on their use of visual inputs to maintain standing postural steadiness. Another aim was to quantify the association between the visual contribution to postural steadiness and a clinical balance scale. Individuals with SCI (n = 15) and able-bodied controls (n = 14) performed quasi-static stance, with eyes open or closed, on force plates for two 45 s trials. Measurements of the centre of pressure (COP) included the mean value of the root mean square (RMS), mean COP velocity (MV) and COP sway area (SA). Individuals with SCI were also evaluated with the Mini-Balance Evaluation Systems Test (Mini-BESTest), a clinical outcome measure of postural steadiness. Individuals with SCI were significantly less stable than able-bodied controls in both conditions. The Romberg ratios (eyes open/eyes closed) for COP MV and SA were significantly higher for individuals with SCI, indicating a higher contribution of visual inputs for postural steadiness in that population. Romberg ratios for RMS and SA were significantly associated with the Mini-BESTest. This study highlights the contribution of visual inputs in individuals with SCI when maintaining quasi-static standing posture. Copyright © 2012 Elsevier B.V. All rights reserved.
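The Romberg ratio used here is a simple quotient of a COP measure between the two visual conditions. A minimal sketch, following the (eyes open/eyes closed) convention quoted in the abstract (note that some studies define the ratio the other way round):

```python
def romberg_ratio(eyes_open, eyes_closed):
    """Romberg ratio for a centre-of-pressure measure (mean velocity, sway
    area, or RMS), computed as eyes-open / eyes-closed per the abstract's
    convention. Deviations from 1 reflect the contribution of visual input
    to postural steadiness."""
    return eyes_open / eyes_closed
```

For example, a participant whose sway area doubles when the eyes are closed yields a ratio of 0.5 under this convention.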
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high quality images which have fewer ghost artifacts and provide a better visual quality than previous approaches.
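The ZNCC similarity measure at the heart of the ghost detection can be written down directly. A minimal sketch, operating on two equal-length lists of pixel intensities (e.g. all pixels belonging to one super-pixel in the candidate and reference exposures):

```python
import math

def zncc(patch_a, patch_b):
    """Zero mean normalized cross correlation between two equal-length
    intensity lists. Returns a value in [-1, 1]; super-pixels with low
    ZNCC against the reference are candidates for ghost exclusion.
    Returns 0.0 when either patch has zero variance."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [x - mean_a for x in patch_a]
    db = [x - mean_b for x in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

Because the mean is subtracted and the result is normalized, ZNCC is insensitive to the brightness and contrast offsets between differently exposed images, which is why it suits this task; the threshold for "low correlation" is a tuning choice not specified in the abstract.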
Anniversary Paper: Image processing and manipulation through the pages of Medical Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armato, Samuel G., III; van Ginneken, Bram (Image Sciences Institute, University Medical Center Utrecht)
The language of radiology has gradually evolved from "the film" (the foundation of radiology since Wilhelm Roentgen's 1895 discovery of x-rays) to "the image," an electronic manifestation of a radiologic examination that exists within the bits and bytes of a computer. Rather than simply storing and displaying radiologic images in a static manner, the computational power of the computer may be used to enhance a radiologist's ability to visually extract information from the image through image processing and image manipulation algorithms. Image processing tools provide a broad spectrum of opportunities for image enhancement. Gray-level manipulations such as histogram equalization, spatial alterations such as geometric distortion correction, preprocessing operations such as edge enhancement, and enhanced radiography techniques such as temporal subtraction provide powerful methods to improve the diagnostic quality of an image or to enhance structures of interest within an image. Furthermore, these image processing algorithms provide the building blocks of more advanced computer vision methods. The prominent role of medical physicists and the AAPM in the advancement of medical image processing methods, and in the establishment of the "image" as the fundamental entity in radiology and radiation oncology, has been captured in 35 volumes of Medical Physics.
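As one concrete example of the gray-level manipulations mentioned, global histogram equalization maps each gray level through the normalized cumulative distribution function. The following is a textbook sketch (not code from Medical Physics), operating on a flat list of integer gray levels:

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization for a flat list of integer gray
    levels in [0, levels): each level is remapped through the normalized
    cumulative distribution function (CDF), spreading the used levels
    over the full dynamic range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution of gray levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied level's count

    def remap(p):
        if n == cdf_min:
            return p  # degenerate image: a single occupied gray level
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [remap(p) for p in pixels]
```

In a clinical viewer this would typically be applied per display window rather than to raw pixel data, but the principle of stretching an under-used gray-level range is the same.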
Listeners' expectation of room acoustical parameters based on visual cues
NASA Astrophysics Data System (ADS)
Valente, Daniel L.
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consistent audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and to ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that varying the visual cues changed how the acoustic environment was perceived. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. 
The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected as participants matched direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities interact in distinct ways. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. 
Subjective results of the experiments are presented along with objective measurements for verification.
Effects of translational and rotational motions and display polarity on visual performance.
Feng, Wen-Yang; Tseng, Feng-Yi; Chao, Chin-Jung; Lin, Chiuhsiang Joe
2008-10-01
This study investigated the effects of both translational and rotational motion and display polarity on a visual identification task. Three different motion types--heave, roll, and pitch--were compared with the static (no motion) condition. The visual task was presented on two display polarities, black-on-white and white-on-black. The experiment was a 4 (motion conditions) x 2 (display polarities) within-subjects design with eight subjects (six men and two women; M age = 25.6 yr., SD = 3.2). The dependent variables used to assess performance on the visual task were accuracy and reaction time. Motion environments, especially the roll condition, significantly degraded accuracy and lengthened reaction time. The display polarity effect was significant only in the static condition.
Illusory visual motion stimulus elicits postural sway in migraine patients
Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi
2015-01-01
Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to such stimuli. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832
Neural correlates of auditory short-term memory in rostral superior temporal cortex
Scott, Brian H.; Mishkin, Mortimer; Yin, Pingbo
2014-01-01
Background: Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. Results: We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed-match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing, and in their resistance to sounds intervening between the sample and match. Conclusions: Like the monkeys’ behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. PMID:25456448
Exciplex Fluorescence Systems for Two-Phase Visualization.
NASA Astrophysics Data System (ADS)
Kim, J.-U.; Golding, B.; Schock, H. J.; Nocera, D. G.; Keller, P.
1996-03-01
We report the development of diagnostic chemical systems for vapor-liquid visualization based on an exciplex (excited state complex) formed between dimethyl- or diethyl-substituted aniline and trimethyl-substituted naphthalenes. Quantum yields of individual monomers were measured, and the exciplex emission spectra as well as fluorescence quenching mechanisms were analyzed. Quenching occurs by both static and dynamic mechanisms. Among the many formulations investigated in this study, a system consisting of 7% 1,4,6-trimethylnaphthalene (1,4,6-TMN) and 5% N,N-dimethylaniline (DMA) in 88% isooctane was found to be a useful exciplex system for the laser-induced fluorescence technique. The technique is expected to find application in observing mixture formation in diesel or spark ignition engines, with spectrally well-separated fluorescence images obtained from the monomer and exciplex constituents dissolved in the gasoline fuel. *Supported by NSF MRSEC DMR-9400417 and the Center for Fundamental Materials Research.
Magnitude of visual accommodation to a head-up display
NASA Technical Reports Server (NTRS)
Leitner, E. F.; Haines, R. F.
1981-01-01
The virtual image symbology of head-up displays (HUDs) is presented at optical infinity to the pilot. This design feature is intended to help pilots maintain visual focus distance at optical infinity. However, the accommodation response could be nearer than optical infinity, due to an individual's dark focus response. Accommodation responses were measured of two age groups of airline pilots to: (1) static symbology on a HUD; (2) a landing site background at optical infinity; (3) the combination of the HUD symbology and the landing site background; and (4) complete darkness. Results indicate that magnitude of accommodation to HUD symbology, with and without the background, is not significantly different from an infinity focus response for either age group. The dark focus response is significantly closer than optical infinity for the younger pilots, but not the older pilots, a finding consistent with previous research.
Scientific Visualization Made Easy for the Scientist
NASA Astrophysics Data System (ADS)
Westerhoff, M.; Henderson, B.
2002-12-01
amira is an application program used in creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has demonstrated significant adoption in the market place since becoming commercially available in 2000. The rapid adoption has expanded the features being requested by the user base and broadened the scope of the amira product offering. The amira product offering includes amira Standard; amiraDev, used to extend the product capabilities by users; amiraMol, used for molecular visualization; amiraDeconv, used to improve quality of image data; and amiraVR, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats including a 'raw' format allowing immediate access to your native uniform data sets. amira uses the power and speed of the OpenGL and Open Inventor graphics libraries and 3D graphics accelerators to allow you to access over 145 modules, enabling you to process, probe, analyze and visualize your data. The amiraMol extension adds powerful tools for molecular visualization to the existing amira platform. amiraMol contains support for standard molecular file formats, tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv adds tools for the deconvolution of 3D microscopic images. 
Deconvolution is the process of increasing image quality and resolution by computationally compensating artifacts of the recording process. amiraDeconv supports 3D wide field microscopy as well as 3D confocal microscopy. It offers both non-blind and blind image deconvolution algorithms. Non-blind deconvolution uses an individually measured point spread function, while blind algorithms work on the basis of only a few recording parameters (such as numerical aperture or zoom factor). amiraVR is a specialized and extended version of the amira visualization system dedicated to use in immersive installations, such as large-screen stereoscopic projections, CAVE or Holobench systems. Among others, it supports multi-threaded multi-pipe rendering, head-tracking, advanced 3D interaction concepts, and 3D menus allowing interaction with any amira object in the same way as on the desktop. With its unique set of features, amiraVR represents both a VR (Virtual Reality) ready application for scientific and medical visualization in immersive environments, and a development platform that allows building VR applications.
Monitoring early response to chemoradiotherapy with 18F-FMISO dynamic PET in head and neck cancer.
Grkovski, Milan; Lee, Nancy Y; Schöder, Heiko; Carlin, Sean D; Beattie, Bradley J; Riaz, Nadeem; Leeman, Jonathan E; O'Donoghue, Joseph A; Humm, John L
2017-09-01
There is growing recognition that biologic features of the tumor microenvironment affect the response to cancer therapies and the outcome of cancer patients. In head and neck cancer (HNC) one such feature is hypoxia. We investigated the utility of 18F-fluoromisonidazole (FMISO) dynamic positron emission tomography (dPET) for monitoring the early microenvironmental response to chemoradiotherapy in HNC. Seventy-two HNC patients underwent FMISO dPET scans in a customized immobilization mask (0-30 min dynamic acquisition, followed by 10 min static acquisitions starting at ∼95 min and ∼160 min post-injection) at baseline and early into treatment, by which point patients had received one cycle of chemotherapy and five to ten fractions of radiation therapy at 2 Gy per fraction. Voxelwise pharmacokinetic modeling was conducted using an irreversible one-plasma two-tissue compartment model to calculate surrogate biomarkers of tumor hypoxia (k3 and Tumor-to-Blood Ratio (TBR)), perfusion (K1) and FMISO distribution volume (DV). Additionally, Tumor-to-Muscle Ratios (TMR) were derived by visual inspection by an experienced nuclear medicine physician, with TMR > 1.2 defining hypoxia. One hundred and thirty-five lesions in total were analyzed. TBR, k3 and DV decreased on early response scans, while no significant change was observed for K1. The k3-TBR correlation decreased substantially from baseline scans (Pearson's r = 0.72 and 0.76 for mean intratumor and pooled voxelwise values, respectively) to early response scans (Pearson's r = 0.39 and 0.40, respectively). Both concordant and discordant examples of changes in intratumor k3 and TBR were identified, the latter partially mediated by the change in DV. In 13 normoxic patients according to visual analysis (all having lesions with TMR ≤ 1.2), subvolumes were identified where k3 indicated the presence of hypoxia. 
Pharmacokinetic modeling of FMISO dynamic PET reveals a more detailed characterization of the tumor microenvironment and assessment of response to chemoradiotherapy in HNC patients than a single static image does. In a clinical trial where absence of hypoxia in primary tumor and lymph nodes would lead to de-escalation of therapy, the observed disagreement between visual analysis and pharmacokinetic modeling results would have affected patient management in <20% cases. While simple static PET imaging is easily implemented for clinical trials, the clinical applicability of pharmacokinetic modeling remains to be investigated.
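The irreversible one-plasma two-tissue compartment model referred to above is governed by dC1/dt = K1·Cp − (k2 + k3)·C1 and dC2/dt = k3·C1, with Cp the plasma input and C2 the irreversibly trapped tracer. A forward-Euler sketch follows; the rate constants and time step in the example are illustrative, not values from the study, and real fitting would use a measured input function and proper optimization rather than this toy integrator.

```python
def simulate_irreversible_2tc(cp, dt, K1, k2, k3):
    """Forward-Euler integration of the irreversible one-plasma two-tissue
    compartment model:
        dC1/dt = K1*Cp - (k2 + k3)*C1   (free/exchangeable tracer)
        dC2/dt = k3*C1                  (irreversibly bound tracer)
    `cp` is the plasma input function sampled at interval `dt` (min).
    Returns total tissue activity C1 + C2 at each time step."""
    c1 = c2 = 0.0
    total = []
    for p in cp:
        c1 += dt * (K1 * p - (k2 + k3) * c1)
        c2 += dt * (k3 * c1)
        total.append(c1 + c2)
    return total
```

Fitting such a model voxelwise to the measured time-activity curves is what yields the k3 (trapping) and K1 (perfusion) maps discussed above; k3 reflects the hypoxia-dependent binding step, which is why it can disagree with a single static uptake ratio.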
Multi-modal anatomical optical coherence tomography and CT for in vivo dynamic upper airway imaging
NASA Astrophysics Data System (ADS)
Balakrishnan, Santosh; Bu, Ruofei; Price, Hillel; Zdanski, Carlton; Oldenburg, Amy L.
2017-02-01
We describe a novel, multi-modal imaging protocol for validating quantitative dynamic airway imaging performed using anatomical Optical Coherence Tomography (aOCT). The aOCT system consists of a catheter-based aOCT probe that is deployed via a bronchoscope, while a programmable ventilator is used to control airway pressure. This setup is employed on the bed of a Siemens Biograph CT system capable of performing respiratory-gated acquisitions. In this arrangement the position of the aOCT catheter may be visualized with CT to aid in co-registration. Utilizing this setup we investigate multiple respiratory pressure parameters with aOCT, and respiratory-gated CT, on both ex vivo porcine trachea and live, anesthetized pigs. This acquisition protocol has enabled real-time measurement of airway deformation with simultaneous measurement of pressure under physiologically relevant static and dynamic conditions: inspiratory peak or peak positive airway pressures of 10-40 cm H2O, and 20-30 breaths per minute for dynamic studies. We subsequently compare the airway cross sectional areas (CSA) obtained from aOCT and CT, including the change in CSA at different stages of the breathing cycle for dynamic studies, and the CSA at different peak positive airway pressures for static studies. This approach has allowed us to improve our acquisition methodology and to validate aOCT measurements of the dynamic airway for the first time. We believe that this protocol will prove invaluable for aOCT system development and greatly facilitate translation of OCT systems for airway imaging into the clinical setting.
Live imaging of mouse secondary palate fusion
Kim, Seungil; Prochazka, Jan; Bush, Jeffrey O.
2017-01-01
The fusion of the secondary palatal shelves to form the intact secondary palate is a key process in mammalian development, and its disruption can lead to cleft secondary palate, a common congenital anomaly in humans. Secondary palate fusion has been extensively studied, leading to several proposed cellular mechanisms that may mediate this process. However, these studies have mostly been performed on fixed embryonic tissues at progressive timepoints during development or in fixed explant cultures analyzed at static timepoints. Static analysis is ill-suited to dynamic morphogenetic processes such as palate fusion, and the dynamic cellular behaviors that mediate palatal fusion remain incompletely understood. Here we describe a protocol for live imaging of ex vivo secondary palate fusion in mouse embryos. To examine cellular behaviors of palate fusion, epithelial-specific Keratin14-cre was used to label palate epithelial cells in ROSA26-mTmGflox reporter embryos. To visualize filamentous actin, Lifeact-mRFPruby reporter mice were used. Live imaging of secondary palate fusion was performed by dissecting recently-adhered secondary palatal shelves of embryonic day (E) 14.5 stage embryos and culturing them in agarose-containing media on a glass bottom dish to enable imaging with an inverted confocal microscope. Using this method, we have detected a variety of novel cellular behaviors during secondary palate fusion. An appreciation of how distinct cell behaviors are coordinated in space and time greatly contributes to our understanding of this dynamic morphogenetic process. This protocol can be applied to mutant mouse lines, or cultures treated with pharmacological inhibitors, to further advance understanding of how secondary palate fusion is controlled. PMID:28784960
Kolehmainen, V; Vauhkonen, M; Karjalainen, P A; Kaipio, J P
1997-11-01
In electrical impedance tomography (EIT), difference imaging is often preferred over static imaging. This is because of the many unknowns in the forward modelling which make it difficult to obtain reliable absolute resistivity estimates. However, static imaging and absolute resistivity values are needed in some potential applications of EIT. In this paper we demonstrate by simulation the effects of the different error components that enter into the reconstruction of static EIT images. All simulations are carried out in two dimensions with the so-called complete electrode model. The errors considered are the modelling error in the boundary shape of an object, errors in the electrode sizes and positions, and errors in the contact impedances under the electrodes. Results using both adjacent and trigonometric current patterns are given.
A virtual simulator designed for collision prevention in proton therapy.
Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho
2015-10-01
In proton therapy, collisions between the patient and nozzle can occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using virtual treatment simulation. Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D scanner was utilized right before CT scanning. Using the acquired images, a 3D image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined in the treatment-room coordinate system, resulting in a virtual simulator. The simulator reproduced the motion of its components, such as rotation and translation of the gantry, nozzle, and couch, in real scale. A collision, if any, was examined in both static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated whenever a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when their volume locations were calculated. The collision event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D shape of the phantom agreed with the CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and the clinical efficiency of proton therapy.
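The voxel-overlap collision test described above can be sketched as follows (a minimal illustration with hypothetical function names and grid resolution, not the actual software): two components collide whenever they occupy a common voxel index.

```python
# Minimal sketch of a voxel-overlap collision test: points on each component
# are binned into integer voxel indices, and a collision is reported if any
# voxel index is shared. Names and voxel size are illustrative.

def voxelize(points, voxel_size):
    """Map 3D points to integer voxel indices."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def collides(component_a, component_b, voxel_size=1.0):
    """Return (collision?, set of overlapping voxel indices)."""
    overlap = voxelize(component_a, voxel_size) & voxelize(component_b, voxel_size)
    return bool(overlap), overlap

# A nozzle-surface point and a patient-surface point falling in the same voxel:
nozzle = [(10.2, 5.1, 3.0)]
patient = [(10.8, 5.9, 3.4)]
hit, where = collides(nozzle, patient)  # hit is True, overlap at voxel (10, 5, 3)
```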
Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story
Rhodes, Gillian; Lie, Hanne C.; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W.
2011-01-01
Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness. PMID:22096491
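The static-versus-video agreement reported above is essentially a correlation between two rating vectors over the same faces; a minimal sketch with toy numbers (not the study's data) might look like:

```python
# Sketch of the agreement check: correlate attractiveness ratings of static
# images with ratings of videos of the same faces. Toy ratings, illustrative only.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

static_ratings = [3.1, 4.5, 2.2, 5.0, 3.8]  # ratings of photographs
video_ratings  = [3.3, 4.4, 2.5, 4.8, 3.6]  # ratings of videos of the same faces
r = pearson_r(static_ratings, video_ratings)  # close to 1: strong agreement
```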
NASA Astrophysics Data System (ADS)
Oh, Seung-Won; Baek, Jong-Min; Kim, Jung-Wook; Yoon, Tae-Hoon
2016-09-01
Two types of image flicker, caused by the flexoelectric effect of liquid crystals (LCs), are observed when a fringe-field switching (FFS) LC cell is driven by a low-frequency electric field. Static image flicker, observed because of the transmittance difference between neighboring frames, has been reported previously. In contrast, research on dynamic image flicker has been minimal until now. Dynamic image flicker is noticeable because of the brief transmittance drop when the sign of the applied voltage is reversed. We investigated the dependence of the image flicker in an FFS LC cell on the dielectric anisotropy of the LCs in terms of both static and dynamic flicker. Experimental results show that small dielectric anisotropy of the LC can help suppress not only the static but also the dynamic flicker for positive LCs. We also found that both the static and dynamic flicker are less evident in negative LCs than in positive LCs.
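The two flicker types can be illustrated with simple metrics derived from the description above (these definitions are a sketch inferred from the abstract, not the authors' measurement procedure): static flicker as the transmittance difference between neighboring frames, and dynamic flicker as the depth of the brief transmittance dip at a polarity reversal.

```python
# Illustrative flicker metrics computed from a transmittance trace.
# Definitions are assumptions for this sketch, not the paper's exact formulas.

def static_flicker(frame_transmittances):
    """Largest transmittance difference between neighboring frames."""
    return max(abs(b - a) for a, b in zip(frame_transmittances, frame_transmittances[1:]))

def dynamic_flicker(trace, steady_level):
    """Depth of the transient transmittance dip at a voltage polarity reversal."""
    return steady_level - min(trace)

frames = [0.80, 0.78, 0.80, 0.79]          # per-frame transmittance
reversal_trace = [0.80, 0.74, 0.79, 0.80]  # transient dip when the voltage sign flips

s = static_flicker(frames)                  # about 0.02
d = dynamic_flicker(reversal_trace, 0.80)   # about 0.06
```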
Schröter, Tobias J; Johnson, Shane B; John, Kerstin; Santi, Peter A
2012-01-01
We report replacement of one side of a static-illumination, dual-sided, thin-sheet laser imaging microscope (TSLIM) with an intensity-modulated laser scanner in order to implement structured illumination (SI) and HiLo image demodulation techniques for background rejection. The new system is equipped with one static and one scanned light-sheet and is called a scanning thin-sheet laser imaging microscope (sTSLIM). It is an optimized version of a light-sheet fluorescence microscope designed to image large specimens (<15 mm in diameter). In this paper we describe the hardware and software modifications to TSLIM that allow for static and uniform light-sheet illumination with SI and HiLo image demodulation. The static light-sheet has a thickness of 3.2 µm, whereas the scanned side has a light-sheet thickness of 4.2 µm. The scanned side images specimens up to 15 mm in size with subcellular resolution (<1 µm lateral and <4 µm axial). SI and HiLo produce superior contrast compared with both the uniform static and scanned light-sheets. HiLo contrast was greater than SI, and HiLo is faster and more robust than SI because it produces images in two-thirds of the time and exhibits fewer intensity streaking artifacts. © 2011 Optical Society of America
NASA Astrophysics Data System (ADS)
Flower, M. A.; Ott, R. J.; Webb, S.; Leach, M. O.; Marsden, P. K.; Clack, R.; Khan, O.; Batty, V.; McCready, V. R.; Bateman, J. E.
1988-06-01
Two clinical trials of the prototype RAL multiwire proportional chamber (MWPC) positron camera were carried out prior to the development of a clinical system with large-area detectors. During the first clinical trial, the patient studies included skeletal imaging using 18F, imaging of brain glucose metabolism using 18F FDG, bone marrow imaging using 52Fe citrate, and thyroid imaging with Na 124I. Longitudinal tomograms were produced from the limited-angle data acquired with the static detectors. During the second clinical trial, transaxial, coronal and sagittal images were produced from the multiview data acquisition. A more detailed thyroid study was performed in which the volume of functioning thyroid tissue was obtained from the 3D PET image; this volume was used in estimating the radiation dose achieved during radioiodine therapy of patients with thyrotoxicosis. Despite the small field of view of the prototype camera and the smaller-than-usual amounts of administered activity, the PET images were in most cases comparable with, and in a few cases visually better than, the equivalent planar view from a state-of-the-art gamma camera with a large field of view and routine radiopharmaceuticals.
Tensor voting for image correction by global and local intensity alignment.
Jia, Jiaya; Tang, Chi-Keung
2005-01-01
This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to robust 2D tensor voting in the corresponding voting spaces. No complicated model for the replacement function (curve) is assumed. Subject only to a monotonic constraint, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application is image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm to compute the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusions, which have large regions of piecewise constant color. Furthermore, by simultaneously considering color matches and spatial constraints in the voting space, we perform image intensity compensation and high-contrast image correction using our voting framework when only two defective input images are given.
Dynamic photoelasticity by TDI imaging
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
2001-06-01
High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, which requires time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments. Recently, there has been much interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications to strobe and streak photoelastic pattern recording, as well as system limitations, are explained in the paper.
Impact of Multimedia and Network Services on an Introductory Level Course
NASA Technical Reports Server (NTRS)
Russ, John C.
1996-01-01
We will demonstrate and describe the impact of our use of multimedia and network connectivity on a sophomore-level introductory course in materials science. This class services all engineering students, resulting in large (more than 150) class sections with no hands-on laboratory. In 1990 we began to develop computer graphics that might substitute for some laboratory or real-world experiences, and demonstrate relationships hard to show with static textbook images or chalkboard drawings. We created a comprehensive series of modules that cover the entire course content. Called VIMS (Visualizations in Materials Science), these are available in the form of a CD-ROM and also via the internet.
Study of Injection of Helium into Supersonic Air Flow Using Rayleigh Scattering
NASA Technical Reports Server (NTRS)
Seaholtz, Richard G.; Buggele, Alvin E.
1997-01-01
A study of the transverse injection of helium into a Mach 3 crossflow is presented. Filtered Rayleigh scattering is used to measure penetration and helium mole fraction in the mixing region. The method is based on planar molecular Rayleigh scattering using an injection-seeded, frequency-doubled Nd:YAG pulsed laser and a cooled CCD camera. The scattered light is filtered with an iodine absorption cell to suppress stray laser light. Preliminary data are presented for helium mole fraction and penetration. Flow visualization images obtained with a shadowgraph and wall static pressure data in the vicinity of the injection are also presented.
Beyond the static image: Tee Corinne's roles as a pioneering lesbian artist and art historian.
Snider, Stefanie
2013-01-01
While Tee Corinne has been widely recognized as a preeminent lesbian and feminist artist of the last forty years, little has been written about her as an artist or art historian in any substantial way. This article attempts to shed light on Corinne's investment in creating explicitly sexual lesbian visual art and art historical writings that put pressure on the categories of artist and art historian between the 1970s and early 2000s. Corinne's work manages to fulfill feminist ideals while also working outside of the norms set up in both the lesbian and mainstream realms of art and art history.
Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Miura, Naoki; Akitsuki, Yuko; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta
2006-10-01
Multiple brain networks may support visual self-recognition. It has been hypothesized that the left ventral occipito-temporal cortex processes one's own face as a symbol, and that the right parieto-frontal network processes self-image in association with motion-action contingency. Using functional magnetic resonance imaging, we first tested these hypotheses based on the prediction that these networks preferentially respond to a static self-face and to one's own moving whole body, respectively. Brain activation specifically related to self-image during familiarity judgment was compared across four stimulus conditions comprising a two-factor design: the factor Motion contrasted picture (Picture) with movie (Movie), and the factor Body part contrasted face (Face) with whole body (Body). Second, we attempted to segregate self-specific networks using a principal component analysis (PCA), assuming an independent pattern of inter-subject variability in activation over the four stimulus conditions in each network. The bilateral ventral occipito-temporal and the right parietal and frontal cortices exhibited self-specific activation. The left ventral occipito-temporal cortex exhibited greater self-specific activation for Face than for Body in Picture, consistent with the prediction for this region. The activation profiles of the right parietal and frontal cortices did not show the preference for the Movie/Body condition predicted by the assumed roles of these regions. The PCA extracted two cortical networks, one with peaks in right posterior cortices and another with peaks in frontal cortices; their possible roles in visuo-spatial and conceptual self-representations, respectively, were suggested by previous findings. The results thus supported and provided evidence for multiple brain networks for visual self-recognition.
Structure Sensor for mobile markerless augmented reality
NASA Astrophysics Data System (ADS)
Kilgus, T.; Bux, R.; Franz, A. M.; Johnen, W.; Heim, E.; Fangerau, M.; Müller, M.; Yen, K.; Maier-Hein, L.
2016-03-01
3D visualization of anatomical data is an integral part of diagnostics and treatment in many medical disciplines, such as radiology, surgery and forensic medicine. To enable intuitive interaction with the data, we recently proposed a new concept for on-patient visualization of medical data which involves rendering of subsurface structures on a mobile display that can be moved along the human body. The data fusion is achieved with a range imaging device attached to the display. The range data is used to register static 3D medical imaging data with the patient body based on a surface matching algorithm. However, our previous prototype was based on the Microsoft Kinect camera and thus required a cable connection to acquire color and depth data. The contribution of this paper is two-fold. Firstly, we replace the Kinect with the Structure Sensor, a novel cable-free range imaging device, to improve handling and user experience, and show that the resulting accuracy (target registration error: 4.8+/-1.5 mm) is comparable to that achieved with the Kinect. Secondly, a new approach to visualizing complex 3D anatomy based on this device, as well as 3D printed models of anatomical surfaces, is presented. We demonstrate that our concept can be applied to in vivo data and to a 3D printed skull from a forensic case. Our new device is the next step towards clinical integration and shows that the concept can be applied not only during autopsy but also for the presentation of forensic data to laypeople in court or in medical education.
Neurotechnology for intelligence analysts
NASA Astrophysics Data System (ADS)
Kruse, Amy A.; Boyd, Karen C.; Schulman, Joshua J.
2006-05-01
Geospatial Intelligence Analysts are currently faced with an enormous volume of imagery, only a fraction of which can be processed or reviewed in a timely operational manner. Computer-based target detection efforts have failed to yield the speed, flexibility and accuracy of the human visual system. Rather than focus solely on artificial systems, we hypothesize that the human visual system is still the best target detection apparatus currently in use, and that with the addition of neuroscience-based measurement capabilities it can surpass the throughput of the unaided human severalfold. Using electroencephalography (EEG), Thorpe et al. [1] described a fast signal in the brain associated with the early detection of targets in static imagery using a Rapid Serial Visual Presentation (RSVP) paradigm. This finding suggests that it may be possible to extract target detection signals from complex imagery in real time utilizing non-invasive neurophysiological assessment tools. To transform this phenomenon into a capability for defense applications, the Defense Advanced Research Projects Agency (DARPA) is currently sponsoring an effort titled Neurotechnology for Intelligence Analysts (NIA). The vision of the NIA program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and the overall accuracy of the assessments. Successful development of a neurobiologically-based image triage system will enable image analysts to train more effectively and to process imagery with greater speed and precision.
Model-based restoration using light vein for range-gated imaging systems.
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen
2016-09-10
The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, and platform vibrations. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method designed specifically for range-gated imaging systems. The degradation process is divided into two parts: a static part and a dynamic part. For the static part, we establish the physical model of the imaging system according to laser transmission theory and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the blur parameters of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieves better performance for range-gated imaging systems.
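Combining the static and dynamic degradations as described can be sketched by noting that two sequential blur stages compose by convolution of their PSFs (a 1D toy example with made-up kernels; the paper's PSFs are 2D and its restoration framework is iterative):

```python
# Sketch: the overall PSF of two sequential blur stages is the convolution
# of the individual PSFs. Kernels below are illustrative, not the paper's.

def convolve(a, b):
    """Full discrete convolution of two 1D kernels."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

static_psf  = [0.25, 0.5, 0.25]  # optics and scattering (fixed per system)
dynamic_psf = [0.5, 0.5]         # per-frame motion/turbulence blur

combined_psf = convolve(static_psf, dynamic_psf)
# combined kernel still sums to 1, since both input kernels do
```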
Static Verification for Code Contracts
NASA Astrophysics Data System (ADS)
Fähndrich, Manuel
The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.
Visual Displays and Contextual Presentations in Computer-Based Instruction.
ERIC Educational Resources Information Center
Park, Ok-choon
1998-01-01
Investigates the effects of two instructional strategies, visual display (animation, and static graphics with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…
Motion parallax in immersive cylindrical display systems
NASA Astrophysics Data System (ADS)
Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.
2012-03-01
Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Therefore, tracking the observer viewpoint has become inevitable in immersive virtual reality (VR) systems (cylindrical screens, CAVE, head-mounted displays) used, e.g., in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a non-unity scale factor is applied to recorded head movements. Moreover, cylindrical screens are usually used with static observers because of image distortions when rendering images for viewpoints other than a sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical-screen projection system. Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues tolerated without perceptual distortions, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
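The "motion parallax gain" manipulation can be sketched as a simple scaling of the tracked head displacement before it drives the virtual camera (function and argument names are illustrative, not from the authors' system):

```python
# Sketch of a motion parallax gain: the virtual camera follows the tracked
# head displacement scaled by a gain factor. gain = 1 reproduces natural
# parallax; gains below 1 attenuate it. Names are illustrative.

def camera_position(head_displacement, gain, sweet_spot=(0.0, 0.0, 0.0)):
    """Virtual camera position, offset from the projection sweet spot."""
    return tuple(s + gain * d for s, d in zip(sweet_spot, head_displacement))

natural = camera_position((0.10, 0.02, 0.0), gain=1.0)  # full parallax
reduced = camera_position((0.10, 0.02, 0.0), gain=0.5)  # halved parallax
```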
Frequency-Domain Tomography for Single-shot, Ultrafast Imaging of Evolving Laser-Plasma Accelerators
NASA Astrophysics Data System (ADS)
Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Downer, Michael
2011-10-01
Intense laser pulses propagating through plasma create plasma wakefields that often evolve significantly, e.g. by expanding and contracting. However, such dynamics are known in detail only through intensive simulations. Laboratory visualization of evolving plasma wakes in the "bubble" regime is important for optimizing and scaling laser-plasma accelerators. Recently, snapshots of quasi-static wakes were recorded using frequency-domain holography (FDH). To visualize the wake's evolution, we have generalized FDH to frequency-domain tomography (FDT), which uses multiple probes propagating at different angles with respect to the pump pulse. Each probe records a phase streak, imprinting a partial record of the evolution of pump-created structures. We then tomographically reconstruct the full evolution from all phase streaks. To prove the concept, a prototype experiment visualizing nonlinear index evolution in glass is demonstrated. Four probes propagating at 0, 0.6, 2, and 14 degrees to the index "bubble" are angularly and temporally multiplexed to a single spectrometer to achieve cost-effective FDT. From these four phase streaks, an FDT algorithm analogous to conventional CT yields a single-shot movie of the pump's self-focusing dynamics.
Visualizations and Mental Models - The Educational Implications of GEOWALL
NASA Astrophysics Data System (ADS)
Rapp, D.; Kendeou, P.
2003-12-01
Work in the earth sciences has outlined many of the faulty beliefs that students possess concerning particular geological systems and processes. Evidence from educational and cognitive psychology has demonstrated that students often have difficulty overcoming their naïve beliefs about science. Prior knowledge is often remarkably resistant to change, particularly when students' existing mental models for geological principles may be faulty or inaccurate. Figuring out how to help students revise their mental models to include appropriate information is a major challenge. Up until this point, research has tended to focus on whether 2-dimensional computer visualizations are useful tools for helping students develop scientifically correct models. Research suggests that when students are given the opportunity to use dynamic computer-based visualizations, they are more likely to recall the learned information, and are more likely to transfer that knowledge to novel settings. Unfortunately, 2-dimensional visualization systems are often inadequate representations of the material that educators would like students to learn. For example, a 2-dimensional image of the Earth's surface does not adequately convey particular features that are critical for visualizing the geological environment. This may limit the models that students can construct following these visualizations. GEOWALL is a stereo projection system that has attempted to address this issue. It can display multidimensional static geologic images and dynamic geologic animations in a 3-dimensional format. Our current research examines whether multidimensional visualization systems such as GEOWALL may facilitate learning by helping students to develop more complex mental models. This talk will address some of the cognitive issues that influence the construction of mental models, and the difficulty of updating existing mental models. We will also discuss our current work that seeks to examine whether GEOWALL is an effective tool for helping students to learn geological information (and potentially restructure their naïve conceptions of geologic principles).
Computer Skill Acquisition and Retention: The Effects of Computer-Aided Self-Explanation
ERIC Educational Resources Information Center
Chi, Tai-Yin
2016-01-01
This research presents an experimental study to determine to what extent computer skill learners can benefit from generating self-explanation with the aid of different computer-based visualization technologies. Self-explanation was stimulated with dynamic visualization (Screencast), static visualization (Screenshot), or verbal instructions only,…
ERIC Educational Resources Information Center
Ryoo, Kihyun; Linn, Marcia C.
2012-01-01
Dynamic visualizations have the potential to make abstract scientific phenomena more accessible and visible to students, but they can also be confusing and difficult to comprehend. This research investigates how dynamic visualizations, compared to static illustrations, can support middle school students in developing an integrated understanding of…
Kawano, Yoshihiro; Otsuka, Chino; Sanzo, James; Higgins, Christopher; Nirei, Tatsuo; Schilling, Tobias; Ishikawa, Takuji
2015-01-01
Microfluidics is used increasingly for engineering and biomedical applications due to recent advances in microfabrication technologies. Visualization of bubbles, tracer particles, and cells in a microfluidic device is important for designing a device and analyzing results. However, with conventional methods, it is difficult to observe the channel geometry and such particles simultaneously. To overcome this limitation, we developed a Darkfield Internal Reflection Illumination (DIRI) system that addresses the drawbacks of a conventional darkfield illuminator. This study was performed to investigate its utility in the field of microfluidics. The results showed that the developed system could clearly visualize both microbubbles and the channel wall by utilizing brightfield and DIRI illumination simultaneously. The methodology is useful not only for static phenomena, such as clogging, but also for dynamic phenomena, such as the detection of bubbles flowing in a channel. The system was also applied to simultaneous fluorescence and DIRI imaging. Fluorescent tracer beads and channel walls were observed clearly, which may be an advantage for future microparticle image velocimetry (μPIV) analysis, especially near a wall. Two types of cell stained with different colors, and the channel wall, can be recognized using the combined confocal and DIRI system. Whole-slide imaging was also conducted successfully using this system. The tiling function significantly expands the observable area in microfluidics. The developed system will be useful for a wide variety of engineering and biomedical applications in the growing field of microfluidics. PMID:25748425
Neutron imaging of hydrogen-rich fluids in geomaterials and engineered porous media: A review
NASA Astrophysics Data System (ADS)
Perfect, E.; Cheng, C.-L.; Kang, M.; Bilheux, H. Z.; Lamanna, J. M.; Gragg, M. J.; Wright, D. M.
2014-02-01
Recent advances in visualization technologies are providing new discoveries as well as answering old questions with respect to the phase structure and flow of hydrogen-rich fluids, such as water and oil, within porous media. Magnetic resonance and x-ray imaging are sometimes employed in this context, but are subject to significant limitations. In contrast, neutrons are ideally suited for imaging hydrogen-rich fluids in abiotic non-hydrogenous porous media because they are strongly attenuated by hydrogen and can "see" through the solid matrix in a non-destructive fashion. This review paper provides an overview of the general principles behind the use of neutrons to image hydrogen-rich fluids in both 2-dimensions (radiography) and 3-dimensions (tomography). Engineering standards for the neutron imaging method are examined. The main body of the paper consists of a comprehensive review of the diverse scientific literature on neutron imaging of static and dynamic experiments involving variably-saturated geomaterials (rocks and soils) and engineered porous media (bricks and ceramics, concrete, fuel cells, heat pipes, and porous glass). Finally some emerging areas that offer promising opportunities for future research are discussed.
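The physical reason neutrons can "see" hydrogen-rich fluids through a solid matrix is Beer-Lambert attenuation, I = I0 exp(-mu t), with a much larger attenuation coefficient mu for hydrogenous material than for most dry matrix materials. A sketch with illustrative order-of-magnitude coefficients (assumptions, not measured data):

```python
# Beer-Lambert sketch of neutron radiography contrast: a thin water layer
# darkens the transmitted beam far more than the surrounding dry matrix.
# Attenuation coefficients below are illustrative assumptions.

import math

def transmission(mu, thickness_cm):
    """Fraction of the neutron beam transmitted through a uniform layer."""
    return math.exp(-mu * thickness_cm)

mu_matrix = 0.05  # per cm, dry rock matrix (illustrative)
mu_water = 3.5    # per cm, water (illustrative; dominated by hydrogen)

dry = transmission(mu_matrix, 2.0)                               # bright pixel
wet = transmission(mu_matrix, 2.0) * transmission(mu_water, 0.1)  # darker where water sits
```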
Cazzoli, Dario; Hopfner, Simone; Preisig, Basil; Zito, Giuseppe; Vanbellingen, Tim; Jäger, Michael; Nef, Tobias; Mosimann, Urs; Bohlhalter, Stephan; Müri, René M; Nyffeler, Thomas
2016-11-01
An impairment of the spatial deployment of visual attention during exploration of static (i.e., motionless) stimuli is a common finding after an acute, right-hemispheric stroke. However, less is known about how these deficits: (a) are modulated through naturalistic motion (i.e., without directional, specific spatial features); and, (b) evolve in the subacute/chronic post-stroke phase. In the present study, we investigated free visual exploration in three patient groups with subacute/chronic right-hemispheric stroke and in healthy subjects. The first group included patients with left visual neglect and a left visual field defect (VFD), the second patients with a left VFD but no neglect, and the third patients without neglect or VFD. Eye movements were measured in all participants while they freely explored a traffic scene without (static condition) and with (dynamic condition) naturalistic motion, i.e., cars moving from the right or left. In the static condition, all patient groups showed similar deployment of visual exploration (i.e., as measured by the cumulative fixation duration) as compared to healthy subjects, suggesting that recovery processes took place, with normal spatial allocation of attention. However, the more demanding dynamic condition with moving cars elicited different re-distribution patterns of visual attention, quite similar to those typically observed in acute stroke. Neglect patients with VFD showed a significant decrease of visual exploration in the contralesional space, whereas patients with VFD but no neglect showed a significant increase of visual exploration in the contralesional space. No differences, as compared to healthy subjects, were found in patients without neglect or VFD. These results suggest that naturalistic motion, without directional, specific spatial features, may critically influence the spatial distribution of visual attention in subacute/chronic stroke patients. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Sitterley, T. E.; Zaitzeff, L. P.; Berge, W. A.
1972-01-01
Flight control and procedural task skill degradation, and the effectiveness of retraining methods, were evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Fifteen experienced pilots were trained and then tested after 4 months either without the benefit of practice or with static rehearsal, dynamic rehearsal, or dynamic warmup practice. Performance on both the flight control and procedure tasks degraded significantly after 4 months. The rehearsal methods effectively countered procedure task skill degradation, while dynamic rehearsal or a combination of static rehearsal and dynamic warmup practice was required for the flight control tasks. The effectiveness of the retraining methods appeared to depend primarily on the efficiency of visual cue reinforcement.
Overlap of highly FDG-avid and FMISO hypoxic tumor subvolumes in patients with head and neck cancer.
Mönnich, David; Thorwarth, Daniela; Leibfarth, Sara; Pfannenberg, Christina; Reischl, Gerald; Mauz, Paul-Stefan; Nikolaou, Konstantin; la Fougère, Christian; Zips, Daniel; Welz, Stefan
2017-11-01
PET imaging may be used to personalize radiotherapy (RT) by identifying radioresistant tumor subvolumes for RT dose escalation. Using the tracers [18F]-fluorodeoxyglucose (FDG) and [18F]-fluoromisonidazole (FMISO), different aspects of tumor biology can be visualized. FDG depicts various biological aspects, e.g., proliferation, glycolysis and hypoxia, while FMISO is more hypoxia specific. In this study, we analyzed the size and overlap of volumes based on the two markers for head and neck cancer (HNSCC) patients. Twenty-five HNSCC patients underwent a CT scan, as well as FDG and dynamic FMISO PET/CT prior to definitive radio-chemotherapy in a prospective FMISO dose escalation study. Three PET-based subvolumes of the primary tumor (GTVprim) were segmented: a highly FDG-avid volume (VFDG), a hypoxic volume on the static FMISO image acquired four hours post tracer injection (VH), and a retention/perfusion volume (VM) obtained using pharmacokinetic modeling of dynamic FMISO data. Absolute volumes, overlaps and distances to agreement (DTA) were evaluated. The sizes of the PET-based volumes and the GTVprim are significantly different (GTVprim > VFDG > VH > VM; p < .05). VH is covered by VFDG or the DTAs are small (mean coverage 74.4%, mean DTA 1.4 mm). Coverage of VM is less pronounced: with respect to VFDG and VH, the mean coverage is 48.7% and 43.1% and the mean DTA is 5.3 mm and 6.3 mm, respectively. For two patients, DTAs were larger than 2 cm. Hypoxic subvolumes from static PET imaging are typically covered by, or in close proximity to, highly FDG-avid subvolumes. Therefore, dose escalation to FDG-positive subvolumes should cover the static hypoxic subvolumes in most patients, with the disadvantage of larger volumes resulting in a higher risk of dose-limiting toxicity. Coverage of subvolumes from dynamic FMISO PET is less pronounced. Further studies are needed to explore the relevance of mismatches in functional imaging.
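The coverage and distance-to-agreement (DTA) measures reported above can be sketched for binary voxel masks. The following is a minimal illustration using NumPy and SciPy, not the authors' actual analysis pipeline; the function names `coverage_fraction` and `mean_dta` are hypothetical:

```python
import numpy as np
from scipy import ndimage

def coverage_fraction(inner, outer):
    """Fraction of the inner subvolume's voxels that lie inside the outer one."""
    inner, outer = inner.astype(bool), outer.astype(bool)
    return float(np.logical_and(inner, outer).sum() / inner.sum())

def mean_dta(inner, outer, spacing=1.0):
    """Mean distance (in units of spacing, e.g., mm) from uncovered inner
    voxels to the nearest voxel of the outer subvolume."""
    outer = outer.astype(bool)
    # Distance from every voxel to the nearest voxel belonging to `outer`
    dist = ndimage.distance_transform_edt(~outer, sampling=spacing)
    uncovered = np.logical_and(inner.astype(bool), ~outer)
    return 0.0 if not uncovered.any() else float(dist[uncovered].mean())
```

With masks on the same voxel grid and the physical voxel spacing passed as `sampling`, these two numbers correspond to the per-patient coverage percentages and mean DTAs quoted in the abstract.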
ERIC Educational Resources Information Center
Legenbauer, Tanja; Vocks, Silja; Betz, Sabrina; Puigcerver, Maria Jose Baguena; Benecke, Andrea; Troje, Nikolaus F.; Ruddel, Heinz
2011-01-01
Various components of body image were measured to assess body image disturbances in patients with obesity. To overcome limitations of previous studies, a photo distortion technique and a biological motion distortion device were included to assess static and dynamic aspects of body image. Questionnaires assessed cognitive-affective aspects, bodily…
NASA Astrophysics Data System (ADS)
Pan, Edward A.
Science, technology, engineering, and mathematics (STEM) education is a national focus. Engineering education, as part of STEM education, needs to adapt to meet the needs of the nation in a rapidly changing world. Using computer-based visualization tools and corresponding 3D printed physical objects may help nontraditional students succeed in engineering classes. This dissertation investigated how adding physical or virtual learning objects (called manipulatives) to courses that require mental visualization of mechanical systems can aid student performance. Dynamics is one such course and tends to be taught using lecture and textbooks with static diagrams of moving systems. Students often fail to solve the problems correctly, and an inability to mentally visualize the system can contribute to student difficulties. This study found no differences between treatment groups on quantitative measures of spatial ability and conceptual knowledge. There were differences between treatments on measures of mechanical reasoning ability, in favor of the use of physical and virtual manipulatives over static diagrams alone. There were no major differences in student performance between the use of physical and virtual manipulatives. Students used the physical and virtual manipulatives to test their theories about how the machines worked; however, their actual time handling the manipulatives was extremely limited relative to the amount of time they spent working on the problems. Students used the physical and virtual manipulatives as visual aids when communicating about the problem with their partners, and this behavior was also seen with Traditional group students, who had to use the static diagrams and gesture instead. The explanations students gave for how the machines worked provided evidence of mental simulation; however, their causal chain analyses were often flawed, probably due to attempts to decrease cognitive load. 
Student opinions about the static diagrams and dynamic models varied by type of model (static, physical, virtual), but were generally favorable. The Traditional group students, however, indicated that the lack of adequate representation of motion in the static diagrams was a problem, and wished they had access to the physical and virtual models.
NASA Astrophysics Data System (ADS)
Sousa, Teresa; Amaral, Carlos; Andrade, João; Pires, Gabriel; Nunes, Urbano J.; Castelo-Branco, Miguel
2017-08-01
Objective. The achievement of multiple instances of control with the same type of mental strategy represents a way to improve flexibility of brain-computer interface (BCI) systems. Here we test the hypothesis that pure visual motion imagery of an external actuator can be used as a tool to achieve three classes of electroencephalographic (EEG) based control, which might be useful in attention disorders. Approach. We hypothesize that different numbers of imagined motion alternations lead to distinctive signals, as predicted by distinct motion patterns. Accordingly, a distinct number of alternating sensory/perceptual signals would lead to distinct neural responses as previously demonstrated using functional magnetic resonance imaging (fMRI). We anticipate that differential modulations should also be observed in the EEG domain. EEG recordings were obtained from twelve participants using three imagery tasks: imagery of a static dot, imagery of a dot with two opposing motions in the vertical axis (two motion directions) and imagery of a dot with four opposing motions in vertical or horizontal axes (four directions). The data were analysed offline. Main results. An increase of alpha-band power was found in frontal and central channels as a result of visual motion imagery tasks when compared with static dot imagery, in contrast with the expected posterior alpha decreases found during simple visual stimulation. The successful classification and discrimination between the three imagery tasks confirmed that three different classes of control based on visual motion imagery can be achieved. The classification approach was based on a support vector machine (SVM) and on the alpha-band relative spectral power of a small group of six frontal and central channels. Patterns of alpha activity, as captured by single-trial SVM closely reflected imagery properties, in particular the number of imagined motion alternations. Significance. 
We found a new mental task based on visual motion imagery with potential for the implementation of multiclass (3) BCIs. Our results are consistent with the notion that frontal alpha synchronization is related with high internal processing demands, changing with the number of alternation levels during imagery. Together, these findings suggest the feasibility of pure visual motion imagery tasks as a strategy to achieve multiclass control systems with potential for BCI and in particular, neurofeedback applications in non-motor (attentional) disorders.
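The alpha-band relative spectral power feature central to the classification above can be sketched for a single channel. This is an illustrative sketch, not the authors' implementation; the band edges and Welch parameters are assumptions:

```python
import numpy as np
from scipy.signal import welch

def relative_alpha_power(eeg, fs, band=(8.0, 13.0)):
    """Relative power of one EEG channel in the alpha band.

    Returns alpha-band power divided by total power, so values lie in [0, 1].
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), int(2 * fs)))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[in_band].sum() / psd.sum())
```

In a classifier such as the study's SVM, one such value per selected frontal/central channel would form the feature vector for each single trial.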
Töger, Johannes; Sorensen, Tanner; Somandepalli, Krishna; Toutios, Asterios; Lingala, Sajan Goud; Narayanan, Shrikanth; Nayak, Krishna
2017-01-01
Static anatomical and real-time dynamic magnetic resonance imaging (RT-MRI) of the upper airway is a valuable method for studying speech production in research and clinical settings. The test–retest repeatability of quantitative imaging biomarkers is an important parameter, since it limits the effect sizes and intragroup differences that can be studied. Therefore, this study aims to present a framework for determining the test–retest repeatability of quantitative speech biomarkers from static MRI and RT-MRI, and apply the framework to healthy volunteers. Subjects (n = 8, 4 females, 4 males) are imaged in two scans on the same day, including static images and dynamic RT-MRI of speech tasks. The inter-study agreement is quantified using intraclass correlation coefficient (ICC) and mean within-subject standard deviation (σe). Inter-study agreement is strong to very strong for static measures (ICC: min/median/max 0.71/0.89/0.98, σe: 0.90/2.20/6.72 mm), poor to strong for dynamic RT-MRI measures of articulator motion range (ICC: 0.26/0.75/0.90, σe: 1.6/2.5/3.6 mm), and poor to very strong for velocities (ICC: 0.21/0.56/0.93, σe: 2.2/4.4/16.7 cm/s). In conclusion, this study characterizes repeatability of static and dynamic MRI-derived speech biomarkers using state-of-the-art imaging. The introduced framework can be used to guide future development of speech biomarkers. Test–retest MRI data are provided free for research use. PMID:28599561
Dynamic stimuli: accentuating aesthetic preference biases.
Friedrich, Trista E; Harms, Victoria L; Elias, Lorin J
2014-01-01
Despite humans' preference for symmetry, artwork often portrays asymmetrical characteristics that influence the viewer's aesthetic preference for the image. When presented with asymmetrical images, aesthetic preference is often given to images whose content flows from left-to-right and whose mass is located on the right of the image. Cerebral lateralization has been suggested to account for the left-to-right directionality bias; however, the influence of cultural factors, such as scanning habits, on aesthetic preference biases is debated. The current research investigates aesthetic preference for mobile objects and landscapes, as previous research has found contrasting preference for the two image types. Additionally, the current experiment examines the effects of dynamic movement on directionality preference to test the assumption that static images are perceived as aesthetically equivalent to dynamic images. After viewing mirror-imaged pairs of pictures and videos, right-to-left readers failed to show a preference bias, whereas left-to-right readers preferred stimuli with left-to-right directionality regardless of the location of the mass. The directionality bias in both reading groups was accentuated by the videos, but the bias was significantly stronger in left-to-right readers. The findings suggest that scanning habits moderate the leftward bias resulting from hemispheric specialization and that dynamic stimuli further fluent visual processing.
NASA Astrophysics Data System (ADS)
Disotell, Kevin J.; Nikoueeyan, Pourya; Naughton, Jonathan W.; Gregory, James W.
2016-05-01
Recognizing the need for global surface measurement techniques to characterize the time-varying, three-dimensional loading encountered on rotating wind turbine blades, fast-responding pressure-sensitive paint (PSP) has been evaluated for resolving unsteady aerodynamic effects in incompressible flow. Results of a study aimed at demonstrating the laser-based, single-shot PSP technique on a low Reynolds number wind turbine airfoil in static and dynamic stall are reported. PSP was applied to the suction side of a Delft DU97-W-300 airfoil (maximum thickness-to-chord ratio of 30 %) at a chord Reynolds number of 225,000 in the University of Wyoming open-return wind tunnel. Static and dynamic stall behaviors are presented using instantaneous and phase-averaged global pressure maps. In particular, a three-dimensional pressure topology driven by a stall cell pattern is detected near the maximum lift condition on the steady airfoil. Trends in the PSP-measured pressure topology on the steady airfoil were confirmed using surface oil visualization. The dynamic stall case was characterized by a sinusoidal pitching motion with mean angle of 15.7°, amplitude of 11.2°, and reduced frequency of 0.106 based on semichord. PSP images were acquired at selected phase positions, capturing the breakdown of nominally two-dimensional flow near lift stall, development of post-stall suction near the trailing edge, and a highly three-dimensional topology as the flow reattaches. Structural patterns in the surface pressure topologies are considered from the analysis of the individual PSP snapshots, enabled by a laser-based excitation system that achieves sufficient signal-to-noise ratio in the single-shot images. The PSP results are found to be in general agreement with observations about the steady and unsteady stall characteristics expected for the airfoil.
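Phase-averaging of the single-shot pressure maps, as used above for the pitching airfoil, can be sketched as follows. This is an illustrative sketch only; the binning scheme and names are assumptions, not the authors' processing chain:

```python
import numpy as np

def phase_average(snapshots, phases, n_bins):
    """Average single-shot pressure maps whose pitch phase falls in the same bin.

    snapshots: iterable of 2-D pressure maps (all the same shape)
    phases:    phase of the pitching cycle for each snapshot, in radians
    Returns a list of n_bins mean maps (None for empty bins).
    """
    idx = ((np.asarray(phases) % (2 * np.pi)) / (2 * np.pi) * n_bins).astype(int)
    idx = np.clip(idx, 0, n_bins - 1)  # guard against a phase exactly at 2*pi
    bins = [[] for _ in range(n_bins)]
    for snap, i in zip(snapshots, idx):
        bins[i].append(np.asarray(snap, float))
    return [np.mean(b, axis=0) if b else None for b in bins]
```

Keeping the individual snapshots alongside the bin means is what lets the structural patterns in instantaneous topologies be compared against the phase-averaged maps.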
Systematic distortions of perceptual stability investigated using immersive virtual reality
Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew
2010-01-01
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248
Visualizing Parallel Computer System Performance
NASA Technical Reports Server (NTRS)
Malony, Allen D.; Reed, Daniel A.
1988-01-01
Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels but also both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.
ERIC Educational Resources Information Center
Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady
2015-01-01
The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…
Eye Movements Reveal How Task Difficulty Moulds Visual Search
ERIC Educational Resources Information Center
Young, Angela H.; Hulleman, Johan
2013-01-01
In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…
Barrier Effects in Non-retinotopic Feature Attribution
Aydin, Murat; Herzog, Michael H.; Öğmen, Haluk
2011-01-01
When objects move in the environment, their retinal images can undergo drastic changes and features of different objects can be inter-mixed in the retinal image. Notwithstanding these changes and ambiguities, the visual system is capable of correctly establishing feature-object relationships as well as maintaining the individual identities of objects through space and time. Recently, by using a Ternus-Pikler display, we have shown that perceived motion correspondences serve as the medium for non-retinotopic attribution of features to objects. The purpose of the work reported in this manuscript was to assess whether perceived motion correspondences provide a sufficient condition for feature attribution. Our results show that the introduction of a static “barrier” stimulus can interfere with the feature attribution process. Our results also indicate that the barrier blocks feature attribution through interference with the attribution process itself rather than through mechanisms related to perceived motion. PMID:21767561
NASA Astrophysics Data System (ADS)
Tragazikis, I. K.; Exarchos, D. A.; Dalla, P. T.; Matikas, T. E.
2016-04-01
This paper deals with the use of complementary nondestructive methods for the evaluation of damage in engineering materials. The application of digital image correlation (DIC) to engineering materials is a useful tool for accurate, noncontact strain measurement. DIC is a 2D, full-field optical analysis technique based on gray-value digital images that measures deformation, vibration and strain in a wide variety of materials. In addition, this technique can be applied from very small to large testing areas and can be used for various tests such as tensile, torsion and bending under static or dynamic loading. In this study, DIC results are benchmarked against other nondestructive techniques such as acoustic emission for damage localization and fracture mode evaluation, and IR thermography for stress field visualization and assessment. The combined use of these three nondestructive methods enables the characterization and classification of damage in materials and structures.
Visual Contrast Sensitivity in Early-Stage Parkinson's Disease.
Ming, Wendy; Palidis, Dimitrios J; Spering, Miriam; McKeown, Martin J
2016-10-01
Visual impairments are frequent in Parkinson's disease (PD) and impact normal functioning in daily activities. Visual contrast sensitivity is a powerful nonmotor sign for discriminating PD patients from controls. However, it is usually assessed with static visual stimuli. Here we examined the interaction between perception and eye movements in static and dynamic contrast sensitivity tasks in a cohort of mildly impaired, early-stage PD patients. Patients (n = 13) and healthy age-matched controls (n = 12) viewed stimuli of various spatial frequencies (0-8 cyc/deg) and speeds (0°/s, 10°/s, 30°/s) on a computer monitor. Detection thresholds were determined by asking participants to adjust luminance contrast until they could just barely see the stimulus. Eye position was recorded with a video-based eye tracker. Patients' static contrast sensitivity was impaired in the intermediate spatial-frequency range and this impairment correlated with fixational instability. However, dynamic contrast sensitivity and patients' smooth pursuit were relatively normal. An independent component analysis revealed contrast sensitivity profiles differentiating patients and controls. Our study simultaneously assesses perceptual contrast sensitivity and eye movements in PD, revealing a possible link between fixational instability and perceptual deficits. Spatiotemporal contrast sensitivity profiles may represent an easily measurable metric as a component of a broader combined biometric for nonmotor features observed in PD.
Moving Stimuli Facilitate Synchronization But Not Temporal Perception
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419
Knowledge Acquisition with Static and Animated Pictures in Computer-Based Learning.
ERIC Educational Resources Information Center
Schnotz, Wolfgang; Grzondziel, Harriet
In educational settings, computers provide specific possibilities of visualizing information for instructional purposes. Besides the use of static pictures, computers can present animated pictures which allow exploratory manipulation by the learner and display the dynamic behavior of a system. This paper develops a theoretical framework for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Hyunuk; Kum, Oyeon; Han, Youngyih, E-mail: youngyih@skku.edu
Purpose: In proton therapy, collisions between the patient and nozzle potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using treatment virtual simulation. Methods: Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic-couch was performed using SolidWorks based on the manufacturer’s machine data. To obtain patient body information, a 3D-scanner was utilized right before CT scanning. Using the acquired images, a 3D-image of the patient’s body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined on the treatment-room coordinate system, resulting in a virtual simulator. The simulator simulated the motion of its components such as rotation and translation of the gantry, nozzle, and couch in real scale. A collision, if any, was examined both in static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine’s components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. Results: All components were successfully assembled, and the motions were accurately controlled. The 3D-shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. Conclusions: The developed software will be useful in improving patient safety and clinical efficiency of proton therapy.
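The voxel-overlap criterion described in the Methods can be sketched for two occupancy grids on a shared treatment-room grid. This is a minimal sketch, not the developed software's implementation:

```python
import numpy as np

def check_collision(occ_a, occ_b):
    """Report whether two components' occupancy grids share any voxel.

    occ_a, occ_b: boolean arrays on the same treatment-room voxel grid
    Returns (collided, points, volume), where points are the indices of the
    overlapping voxels and volume is the number of overlapping voxels.
    """
    overlap = np.logical_and(occ_a, occ_b)
    return bool(overlap.any()), np.argwhere(overlap), int(overlap.sum())
```

In the dynamic mode described above, the same test would simply be re-evaluated after every update of a component's rotation or translation, rather than only at fixed machine positions.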
Visual display and alarm system for wind tunnel static and dynamic loads
NASA Technical Reports Server (NTRS)
Hanly, Richard D.; Fogarty, James T.
1987-01-01
A wind tunnel balance monitor and alarm system developed at NASA Ames Research Center will produce several beneficial results. The costs of wind tunnel delays because of inadvertent balance damage and the costs of balance repair or replacement can be greatly reduced or eliminated with better real-time information on the balance static and dynamic loading. The wind tunnel itself will have enhanced utility with the elimination of overly cautious limits on test conditions. The microprocessor-based system features automatic scaling and 16 multicolored LED bargraphs to indicate both static and dynamic components of the signals from eight individual channels. Five individually programmable alarm levels are available with relay closures for internal or external visual and audible warning devices and other functions such as automatic activation of external recording devices, model positioning mechanisms, or tunnel shutdown.
The dynamic-stimulus advantage of visual symmetry perception.
Niimi, Ryosuke; Watanabe, Katsumi; Yokosawa, Kazuhiko
2008-09-01
It has been speculated that visual symmetry perception from dynamic stimuli involves mechanisms different from those for static stimuli. However, previous studies found no evidence that dynamic stimuli lead to active temporal processing and improve symmetry detection. In this study, four psychophysical experiments investigated temporal processing in symmetry perception using both dynamic and static stimulus presentations of dot patterns. In Experiment 1, rapid successive presentations of symmetric patterns (e.g., 16 patterns per 853 ms) produced more accurate discrimination of orientations of symmetry axes than static stimuli (single pattern presented through 853 ms). In Experiments 2-4, we confirmed that the dynamic-stimulus advantage depended upon presentation of a large number of unique patterns within a brief period (853 ms) in the dynamic conditions. Evidently, human vision takes advantage of temporal processing for symmetry perception from dynamic stimuli.
Evaluating methods for controlling depth perception in stereoscopic cinematography
NASA Astrophysics Data System (ADS)
Sun, Geng; Holliman, Nick
2009-02-01
Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived-depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with a much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. 
The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real-time computer games.
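The fixed depth-mapping idea described above — compressing the scene's depth range into a display's comfortable perceived-depth budget — can be illustrated with a minimal linear sketch. The function names and the linear form are illustrative assumptions, not the authors' algorithm; their dynamic method additionally re-derives the scene range per shot rather than using one fixed range.

```python
def map_depth(z, scene_near, scene_far, disp_near, disp_far):
    """Linearly map a scene depth z (scene units) into the display's
    perceived-depth budget [disp_near, disp_far]."""
    t = (z - scene_near) / (scene_far - scene_near)  # normalize to [0, 1]
    return disp_near + t * (disp_far - disp_near)

def map_depth_dynamic(z, shot_depths, disp_near, disp_far):
    """Dynamic variant (illustrative): re-derive the scene depth range from
    the depths present in the current shot, so depth changes between shots
    are absorbed by the mapping instead of exceeding the display budget."""
    return map_depth(z, min(shot_depths), max(shot_depths),
                     disp_near, disp_far)
```

A shot whose content spans only part of the nominal scene range then uses the whole display budget, which is the intuition behind adjusting the mapping per shot.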
Jeong, Y J; Oh, T I; Woo, E J; Kim, K J
2017-07-01
Recently, highly flexible and soft pressure-distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate piezo-capacitive and piezo-electric nanowebs with conductive fabric sheets for detecting static and dynamic pressure distributions over a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied to reconstruct pressure distribution images from current-voltage data measured on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, piezo-electric nanoweb sensor, and hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from the boundary measurements of the fabric sensors.
Surface morphology of platelet adhesion influenced by activators, inhibitors and shear stress
NASA Astrophysics Data System (ADS)
Watson, Melanie Groan
Platelet activation involves multiple events, one of which is the generation and release of nitric oxide (NO), a platelet aggregation inhibitor. Platelets simultaneously send and receive various agents that promote a positive and negative feedback control system during hemostasis. Although the purpose of platelet-derived NO is not fully understood, NO is known to inhibit platelet recruitment. NO's relatively large diffusion coefficient allows it to diffuse more rapidly than platelet agonists. It may thus be able to inhibit recruitment of platelets near the periphery of a growing thrombus before agonists have substantially accumulated in those regions. Results from two studies in our laboratory differed in the extent to which platelet-derived NO decreased platelet adhesion. Frilot studied the effect of L-arginine (L-A) and NG-Methyl-L-arginine acetate salt (L-NMMA) on platelet adhesion to collagen under static conditions in a Petri dish. Eshaq examined the percent coverage on collagen-coated and fibrinogen-coated microchannels under shear conditions with different levels of L-A and Adenosine Diphosphate (ADP). Frilot's results showed no effect of either L-A or L-NMMA on surface coverage, thrombus size or serotonin release, while Eshaq's results showed a decrease in surface coverage with increased levels of L-A. A possible explanation for these contrasting results is that platelet-derived NO may be more important under flow conditions than under static conditions. For this project, the effects of L-A, ADP and L-NMMA on platelet adhesion were studied at varying shear stresses on protein-coated glass slides. The surface exposed to platelet-rich plasma in combination with each chemical solution was observed under AFM, FE-SEM and fluorescence microscopy. Quantitative and qualitative comparisons of images obtained with these techniques confirmed the presence of platelets on the protein coatings. 
AFM images of fibrinogen- and collagen-coated slides showed characteristic differences. Adhered platelets were identified, particularly with the AFM. The effects of chemical additives were examined with the same microscopy techniques. The fluorescence microscopy data suggest statistical differences in percent surface coverage between shear regions on the glass slides. No statistically significant change in surface coverage was found with the addition of ADP on fibrinogen-coated slides, but differences were observed on collagen with all chemicals. In high shear regions, however, L-A produced a significant decrease in platelet adhesion and L-NMMA produced a statistically significant increase in platelet adhesion on fibrinogen- and collagen-coated slides. The AFM images of the chemical additives showed no differences from one another except with ADP. The no-shear and low-shear conditions showed no variations between AFM images, either by visual inspection or by statistical testing. The only differences between AFM images by shear region were in the low- versus high-shear and static versus high-shear comparisons. The objective of this project was to determine whether the static conditions used by Frilot and the dynamic conditions used by Eshaq could explain the different effects of L-A observed in those studies. In addition, the ability of the fluorescence imaging technique to quantify platelet adhesion was examined by comparing fluorescence imaging to AFM and FE-SEM. The results of this study were consistent both with the lack of an effect of chemical additives under static conditions reported by Frilot and with the reduction of platelet adhesion in response to L-A reported by Eshaq.
Technical note: real-time web-based wireless visual guidance system for radiotherapy.
Lee, Danny; Kim, Siyong; Palta, Jatinder R; Kim, Taeho
2017-06-01
To describe a Web-based wireless visual guidance system that mitigates issues associated with hard-wired audio-visual aided patient interactive motion management systems, which are cumbersome to use in routine clinical practice. A Web-based wireless visual display duplicates the existing visual display of a respiratory-motion management system for visual guidance. The visual display of the existing system is sent to legacy Web clients over a private wireless network, thereby allowing a wireless setting for real-time visual guidance. In this study, an active breathing coordinator (ABC) trace was used as the input for the visual display, which was captured and transmitted to Web clients. Virtual reality goggles require two images (left- and right-eye views) for visual display. We investigated the performance of Web-based wireless visual guidance by quantifying (1) the network latency of visual displays between an ABC computer display and Web clients on a laptop, an iPad mini 2 and an iPhone 6, and (2) the frame rate of the visual display on the Web clients in frames per second (fps). The network latency of the visual display between the ABC computer and Web clients was about 100 ms, and the frame rates were 14.0 fps (laptop), 9.2 fps (iPad mini 2) and 11.2 fps (iPhone 6). In addition, the visual display for virtual reality goggles was successfully shown on the iPhone 6 with 100 ms latency at 11.2 fps. High network security was maintained by utilizing the private network configuration. This study demonstrated that Web-based wireless visual guidance can be a promising technique for clinical motion management systems that require real-time visual display of their outputs. Based on the results of this study, our approach has the potential to reduce clutter associated with wired systems, reduce space requirements, and extend the use of medical devices from static usage to interactive and dynamic usage in a radiotherapy treatment vault.
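The latency and frame-rate figures reported above are straightforward to derive from frame timestamps; a minimal sketch follows (the helper names are hypothetical, not part of the authors' software).

```python
def frame_rate(timestamps):
    """Mean frames per second from a sorted list of frame arrival times
    in seconds: (frames - 1) / elapsed time."""
    if len(timestamps) < 2:
        return 0.0
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])

def mean_latency_ms(sent, received):
    """Mean display latency in milliseconds from paired timestamps:
    when each frame left the source vs. when it appeared on the client."""
    return 1000.0 * sum(r - s for s, r in zip(sent, received)) / len(sent)
```

Measuring latency this way assumes the source and client clocks are synchronized (or that both timestamps are taken on the same machine, e.g. by filming both displays).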
Extracellular matrix motion and early morphogenesis
Loganathan, Rajprasad; Rongish, Brenda J.; Smith, Christopher M.; Filla, Michael B.; Czirok, Andras; Bénazéraf, Bertrand
2016-01-01
For over a century, embryologists who studied cellular motion in early amniotes generally assumed that morphogenetic movement reflected migration relative to a static extracellular matrix (ECM) scaffold. However, as we discuss in this Review, recent investigations reveal that the ECM is also moving during morphogenesis. Time-lapse studies show how convective tissue displacement patterns, as visualized by ECM markers, contribute to morphogenesis and organogenesis. Computational image analysis distinguishes between cell-autonomous (active) displacements and convection caused by large-scale (composite) tissue movements. Modern quantification of large-scale ‘total’ cellular motion and the accompanying ECM motion in the embryo demonstrates that a dynamic ECM is required for generation of the emergent motion patterns that drive amniote morphogenesis. PMID:27302396
Effects of Temporal Features and Order on the Apparent duration of a Visual Stimulus
Bruno, Aurelio; Ayhan, Inci; Johnston, Alan
2012-01-01
The apparent duration of a visual stimulus has been shown to be influenced by its speed. For low speeds, apparent duration increases linearly with stimulus speed. This effect has been ascribed to the number of changes that occur within a visual interval. Accordingly, a higher number of changes should produce an increase in apparent duration. In order to test this prediction, we asked subjects to compare the relative duration of a 10-Hz drifting comparison stimulus with a standard stimulus that contained a different number of changes in different conditions. The standard could be static, drifting at 10 Hz, or mixed (a combination of variable duration static and drifting intervals). In this last condition the number of changes was intermediate between the static and the continuously drifting stimulus. For all standard durations, the mixed stimulus looked significantly compressed (∼20% reduction) relative to the drifting stimulus. However, no difference emerged between the static (that contained no changes) and the mixed stimuli (which contained an intermediate number of changes). We also observed that when the standard was displayed first, it appeared compressed relative to when it was displayed second with a magnitude that depended on standard duration. These results are at odds with a model of time perception that simply reflects the number of temporal features within an interval in determining the perceived passing of time. PMID:22461778
Flow Visualization on a Small Scale.
1988-03-01
A good tunnel must have very uniform flow across the test section. The uniformity was checked using a seven-tube pitot-static rake ...calibration. [Figure 7: The Pitot Static Rake] To map the entire 15 x 24 inch cross section, 84 individual readings and 12 rake locations were required... rake readings was taken, the micromanometer was reattached to the permanent pitot-static probe to ensure calibration of the tunnel to .02 inches of
Gold nanoparticle flow sensors designed for dynamic X-ray imaging in biofluids.
Ahn, Sungsook; Jung, Sung Yong; Lee, Jin Pyung; Kim, Hae Koo; Lee, Sang Joon
2010-07-27
X-ray-based imaging is one of the most powerful and convenient methods in terms of versatility in applicable energy and high performance in use. Different from conventional nuclear medicine imaging, contrast agents are required in X-ray imaging especially for effectively targeted and molecularly specific functions. Here, in contrast to much reported static accumulation of the contrast agents in targeted organs, dynamic visualization in a living organism is successfully accomplished by the particle-traced X-ray imaging for the first time. Flow phenomena across perforated end walls of xylem vessels in rice are monitored by a gold nanoparticle (AuNP) (approximately 20 nm in diameter) as a flow tracing sensor working in nontransparent biofluids. AuNPs are surface-modified to control the hydrodynamic properties such as hydrodynamic size (DH), zeta-potential, and surface plasmonic properties in aqueous conditions. Transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray nanoscopy (XN), and X-ray microscopy (XM) are used to correlate the interparticle interactions with X-ray absorption ability. Cluster formation and X-ray contrast ability of the AuNPs are successfully modulated by controlling the interparticle interactions evaluated as flow-tracing sensors.
NASA Astrophysics Data System (ADS)
Umetani, Keiji; Yagi, Naoto; Suzuki, Yoshio; Ogasawara, Yasuo; Kajiya, Fumihiko; Matsumoto, Takeshi; Tachibana, Hiroyuki; Goto, Masami; Yamashita, Takenori; Imai, Shigeki; Kajihara, Yasumasa
2000-04-01
A microangiography system using monochromatized synchrotron radiation has been investigated as a diagnostic tool for circulatory disorders and early stage malignant tumors. Monochromatized X-rays with energies just above the contrast agent K-absorption edge energy can produce the highest-contrast image of the contrast agent in small blood vessels. At SPring-8, digital microradiography with 6-24 micrometer pixel sizes has been carried out using two types of detectors designed for indirect and direct X-ray detection. The indirect-sensing detectors are fluorescent-screen optical-lens coupling systems using a high-sensitivity pickup-tube camera and a CCD camera. An X-ray image on the fluorescent screen is focused on the photoconductive layer of the pickup tube and the photosensitive area of the CCD by a small F-number lens. The direct-sensing detector consists of an X-ray direct-sensing pickup tube with a beryllium faceplate for X-ray incidence on the photoconductive layer. X-rays absorbed in the photoconductive layer are directly converted to photoelectrons, and the signal charges are then read out by electron beam scanning. The direct-sensing detector was expected to have higher spatial resolution than the indirect-sensing detectors. The performance of the X-ray image detectors was examined at the bending magnet beamline BL20B2 at SPring-8 using monochromatized X-rays. Image signals from the camera are converted into digital format by an analog-to-digital converter and stored in a frame memory with an image format of 1024 x 1024 pixels. In preliminary experiments, tumor vessel specimens using barium contrast agent were prepared for taking static images. The growth pattern of tumor-induced vessels was clearly visualized. Heart muscle specimens were prepared for 3-dimensional microtomography imaging using the fluorescent-screen CCD camera system. 
The complex structure of small blood vessels with diameters of 30-40 micrometers was visualized as a 3-dimensional CT image.
Static micromixer-coaxial electrospray synthesis of theranostic lipoplexes.
Wu, Yun; Li, Lei; Mao, Yicheng; Lee, Ly James
2012-03-27
Theranostic lipoplexes are an integrated nanotherapeutic system with diagnostic imaging capability and therapeutic functions. They hold great promise to improve current cancer treatments; however, producing uniform theranostic lipoplexes with multiple components in a reproducible manner is a highly challenging task. Conventional methods, such as bulk mixing, are not able to achieve this goal because of their macroscale and random nature. Here we report a novel technique, called the static micromixer-coaxial electrospray (MCE), to synthesize theranostic lipoplexes in a single step with high reproducibility. In this work, quantum dots (QD605) and Cy5-labeled antisense oligodeoxynucleotides (Cy5-G3139) were chosen as the model imaging reagent and therapeutic drug, respectively. Compared with bulk mixing, QD605/Cy5-G3139-loaded lipoplexes produced by MCE were highly uniform with polydispersity of 0.024 ± 0.006 and mean diameter by volume of 194 ± 15 nm. MCE also showed higher encapsulation efficiency of QD605 and Cy5-G3139. QD605 and Cy5 also formed the Förster resonance energy transfer pair, and thus the cellular uptake and intracellular fate of theranostic lipoplexes could be visualized by flow cytometry and confocal microscopy. The lipoplexes were efficiently delivered to A549 cells (non-small cell lung cancer cell line) and down-regulated the Bcl-2 gene expression by 48 ± 6%. © 2012 American Chemical Society
A Procedural Solution to Model Roman Masonry Structures
NASA Astrophysics Data System (ADS)
Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.
2013-07-01
The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we chose an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac - PAM, developed by IGN (Paris), and employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open-source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if procedural modelling reveals an accuracy level inferior to that obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation of buildings (requiring quantitative aspects) and metric measures for restoration purposes.
NASA Astrophysics Data System (ADS)
Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine
2011-12-01
In this paper a simulator of a multi-view shooting system with parallel optical axes and a structurally variable configuration is proposed. The considered system is dedicated to the production of 3D content for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed, and the different viewing, transformation and capture parameters are then defined. An appropriate perspective projection model is afterward derived to work out a simulator. At first, the simulator is used to validate the global geometrical process in the case of a static configuration. Next, it is used to show the limitations of a static configuration of this type of shooting system for dynamic scenes, and a dynamic scheme is then developed to allow correct capture of such scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality, and whether or not they need to be adapted, is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and image quantization tools. Simulation and experimental results are presented throughout the paper to illustrate the different points studied. Some conclusions and perspectives end the paper.
Klemm, Matthias; Schweitzer, Dietrich; Peters, Sven; Sauer, Lydia; Hammer, Martin; Haueisen, Jens
2015-01-01
Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique for measuring the in vivo autofluorescence intensity decays generated by endogenous fluorophores in the ocular fundus. Here, we present a software package called FLIM eXplorer (FLIMX) for analyzing FLIO data. Specifically, we introduce a new adaptive binning approach as an optimal tradeoff between the spatial resolution and the number of photons required per pixel. We also expand existing decay models (multi-exponential, stretched exponential, spectral global analysis, incomplete decay) to account for the layered structure of the eye and present a method to correct for the influence of the crystalline lens fluorescence on the retina fluorescence. Subsequently, the Holm-Bonferroni method is applied to FLIO measurements to allow for group comparisons between patients and controls on the basis of fluorescence lifetime parameters. The performance of the new approaches was evaluated in five experiments. Specifically, we evaluated static and adaptive binning in a diabetes mellitus patient, we compared the different decay models in a healthy volunteer and performed a group comparison between diabetes patients and controls. An overview of the visualization capabilities and a comparison of static and adaptive binning is shown for a patient with macular hole. FLIMX's applicability to fluorescence lifetime imaging microscopy is shown in the ganglion cell layer of a porcine retina sample, obtained by a laser scanning microscope using two-photon excitation.
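The adaptive binning introduced above — trading spatial resolution for a minimum photon count per pixel — can be sketched as a square window that grows around each pixel until a target count is reached. This is an illustrative reading of the approach, not the FLIMX implementation.

```python
import numpy as np

def adaptive_bin(counts, y, x, target):
    """Grow a square window centered on pixel (y, x) of a 2D photon-count
    map until the summed count reaches `target` (or the window covers the
    whole image). Returns (binned_count, window_radius)."""
    h, w = counts.shape
    r = 0
    while True:
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        s = int(counts[y0:y1, x0:x1].sum())
        full_image = (y0 == 0 and x0 == 0 and y1 == h and x1 == w)
        if s >= target or full_image:
            return s, r
        r += 1
```

Bright regions keep their native resolution (radius 0), while dim regions are smoothed just enough to make the decay fit statistically reliable — the tradeoff the abstract describes.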
Vocks, Silja; Legenbauer, Tanja; Rüddel, Heinz; Troje, Nikolaus F
2007-01-01
The aim of the present study was to find out whether in bulimia nervosa the perceptual component of a disturbed body image is restricted to the overestimation of one's own body dimensions (static body image) or can be extended to a misperception of one's own motion patterns (dynamic body image). Participants with bulimia nervosa (n = 30) and normal controls (n = 55) estimated their body dimensions by means of a photo distortion technique and their walking patterns using a biological motion distortion device. Not only did participants with bulimia nervosa overestimate their own body dimensions, but also they perceived their own motion patterns corresponding to a higher BMI than did controls. Static body image was correlated with shape/weight concerns and drive for thinness, whereas dynamic body image was associated with social insecurity and body image avoidance. In bulimia nervosa, body image disturbances can be extended to a dynamic component. (c) 2006 by Wiley Periodicals, Inc.
First-Person Visualizations of the Special and General Theory of Relativity
ERIC Educational Resources Information Center
Kraus, U.
2008-01-01
Visualizations that adopt a first-person point of view allow observation and, in the case of interactive simulations, experimentation with relativistic scenes. This paper gives examples of three types of first-person visualizations: watching objects that move at nearly the speed of light, being a high-speed observer looking at a static environment…
Pigmentary Maculopathy Associated with Chronic Exposure to Pentosan Polysulfate Sodium.
Pearce, William A; Chen, Rui; Jain, Nieraj
2018-05-22
To describe the clinical features of a unique pigmentary maculopathy noted in the setting of chronic exposure to pentosan polysulfate sodium (PPS), a therapy for interstitial cystitis (IC). Retrospective case series. Six adult patients evaluated by a single clinician between May 1, 2015, and October 1, 2017. Patients were identified by query of the electronic medical record system. Local records were reviewed, including results of the clinical examination, retinal imaging, and visual function assessment with static perimetry and electroretinography. Molecular testing assessed for known macular dystrophy and mitochondrial cytopathy genotypes. Mean best-corrected visual acuity (BCVA; in logarithm of the minimum angle of resolution units), median cumulative PPS exposure, subjective nature of the associated visual disturbance, qualitative examination and imaging features, and molecular testing results. The median age at presentation was 60 years (range, 37-62 years). All patients received PPS for a diagnosis of IC, with a median cumulative exposure of 2263 g (range, 1314-2774 g), over a median duration of exposure of 186 months (range, 144-240 months). Most patients (4 of 6) reported difficulty reading as the most bothersome symptom. Mean BCVA was 0.1±0.18 logarithm of the minimum angle of resolution. On fundus examination, nearly all eyes showed subtle paracentral hyperpigmentation at the level of the retinal pigment epithelium (RPE) with a surrounding array of vitelliform-like deposits. Four eyes of 2 patients showed paracentral RPE atrophy, and no eyes demonstrated choroidal neovascularization. Multimodal retinal imaging demonstrated abnormality of the RPE generally contained in a well-delineated area in the posterior pole. None of the 4 patients who underwent molecular testing of nuclear DNA returned a pathogenic mutation. Additionally, all 6 patients showed negative results for pathogenic variants in the mitochondrial gene MTTL1. 
We describe a novel and possibly avoidable maculopathy associated with chronic exposure to PPS. Patients reported symptoms of difficulty reading and prolonged dark adaptation despite generally intact visual acuity and subtle funduscopic findings. Multimodal imaging and functional studies are suggestive of a primary RPE injury. Additional investigation is warranted to explore causality further. Copyright © 2018 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tsuchiya, Yuichiro; Kodera, Yoshie; Tanaka, Rie; Sanada, Shigeru
2007-03-01
Early detection and treatment of lung cancer is one of the most effective means to reduce cancer mortality; chest X-ray radiography has been widely used as a screening examination or health checkup. A new examination method and the development of a computer analysis system make it possible to obtain respiratory kinetics with a flat panel detector (FPD), an extension of chest X-ray radiography. With these changes, functional evaluation of respiratory kinetics in the chest has become available, and its introduction into clinical practice is expected in the future. In this study, we developed a computer analysis algorithm to detect lung nodules and evaluate quantitative kinetics. Breathing chest radiographs obtained with a modified FPD were converted into four static feature images by sequential temporal subtraction processing, morphologic enhancement processing, kinetic visualization processing, and lung region detection processing, after a breath synchronization process utilizing diaphragmatic analysis of vector movement. The artificial neural network used to analyze the density patterns detected the true nodules from these static images and drew their kinetic tracks. In an evaluation of algorithm performance and clinical effectiveness with seven normal patients and simulated nodules, the method showed sufficient detection capability and kinetic imaging function, with no statistically significant difference between them. Our technique can quantitatively evaluate the kinetic range of nodules and is effective in detecting a nodule on a breathing chest radiograph. Moreover, the application of this technique is expected to extend computer-aided diagnosis systems and facilitate the development of an automatic planning system for radiation therapy.
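The temporal subtraction step described above can be sketched as pixelwise differencing of consecutive frames, collapsed into a single static image that highlights moving structures. This is a generic sketch of the idea, not the authors' algorithm, which adds breath synchronization and several enhancement stages.

```python
import numpy as np

def temporal_subtraction(frames):
    """Absolute pixelwise difference of consecutive frames in a (T, H, W)
    stack; structures that move with respiration produce large values."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0))

def motion_map(frames):
    """Collapse the difference stack into one static image: the per-pixel
    maximum difference over the breathing sequence."""
    return temporal_subtraction(frames).max(axis=0)
```

A nodule that tracks the diaphragm's motion lights up in this map even when its contrast in any single frame is low, which is why such derived static images help downstream detection.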
Measured daylighting potential of a static optical louver system under real sun and sky conditions
Konis, Kyle; Lee, Eleanor S.
2015-05-04
Side-by-side comparisons were made over solstice-to-solstice changes in sun and sky conditions between an optical louver system (OLS) and a conventional Venetian blind set at a horizontal slat angle, located inboard of a south-facing, small-area, clerestory window in a full-scale office testbed. Daylight autonomy (DA), window luminance, and ceiling luminance uniformity were used to assess performance. The performance of both systems showed significant seasonal variation, with performance under clear sky conditions improving as maximum solar altitude angles transitioned from solstice to equinox. Although the OLS produced fewer hours per day of DA on average than the Venetian blind, the OLS never exceeded the designated 2000 cd/m2 threshold for window glare. In contrast, the Venetian blind was found to exceed the visual discomfort threshold over a large fraction of the day during equinox conditions. Notably, these peak periods of visual discomfort occurred during the best periods of daylighting performance. Luminance uniformity was analyzed using calibrated high dynamic range luminance images. Under clear sky conditions, the OLS was found to increase the luminance of the ceiling as well as produce a more uniform distribution. Furthermore, compared with conventional Venetian blinds, the static optical sunlight-redirecting system studied has the potential to significantly reduce the annual electrical lighting energy demand of a daylit space and improve its quality from the perspective of building occupants by consistently transmitting useful daylight while eliminating window glare.
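The daylight autonomy and glare metrics used above reduce to simple threshold counts over hourly measurements. A minimal sketch, using the study's 2000 cd/m2 window-glare limit as the default (function names and the 300 lux target are illustrative assumptions):

```python
def daylight_autonomy(illuminance_lux, threshold=300.0):
    """Daylight autonomy: fraction of occupied hours in which daylight
    alone meets the target work-plane illuminance."""
    met = sum(1 for e in illuminance_lux if e >= threshold)
    return met / len(illuminance_lux)

def glare_exceedance(window_luminance, limit=2000.0):
    """Fraction of occupied hours in which window luminance exceeds the
    visual discomfort limit (2000 cd/m2 in the study above)."""
    over = sum(1 for l in window_luminance if l > limit)
    return over / len(window_luminance)
```

Reporting both metrics together captures the tradeoff the study observes: a shading system can score lower on DA yet be preferable because it never crosses the glare limit.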
Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi
2016-01-01
Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588
Visualizing the Effects of Sputum on Biofilm Development Using a Chambered Coverglass Model.
Beaudoin, Trevor; Kennedy, Sarah; Yau, Yvonne; Waters, Valerie
2016-12-14
Biofilms consist of groups of bacteria encased in a self-secreted matrix. They play an important role in industrial contamination as well as in the development and persistence of many health related infections. One of the most well described and studied biofilms in human disease occurs in chronic pulmonary infection of cystic fibrosis patients. When studying biofilms in the context of the host, many factors can impact biofilm formation and development. In order to identify how host factors may affect biofilm formation and development, we used a static chambered coverglass method to grow biofilms in the presence of host-derived factors in the form of sputum supernatants. Bacteria are seeded into chambers and exposed to sputum filtrates. Following 48 hr of growth, biofilms are stained with a commercial biofilm viability kit prior to confocal microscopy and analysis. Following image acquisition, biofilm properties can be assessed using different software platforms. This method allows us to visualize key properties of biofilm growth in presence of different substances including antibiotics.
Using Schlieren Visualization to Track Detonator Performance
NASA Astrophysics Data System (ADS)
Clarke, Steven; Thomas, Keith; Martinez, Michael; Akinci, Adrian; Murphy, Michael; Adrian, Ronald
2007-06-01
Several experiments that are part of a phased plan to understand the evolution of detonation in a detonator, from initiation shock, through run to detonation, to full detonation, to transition to booster and booster detonation, will be presented. High-speed laser schlieren movies have been used to study several explosive initiation events, such as exploding bridgewires (EBW), exploding foil initiators (EFI, or slappers), direct optical initiation (DOI), and electrostatic discharge (ESD). Additionally, a series of tests has been performed on "cut-back" detonators with varying initial pressing (IP) heights. We have also used this diagnostic to visualize a range of EBW, EFI, and DOI full-up detonators. Future applications to other explosive events, such as boosters and IHE booster evaluation, will be discussed. The EPIC hydrodynamic code has been used to analyze the shock fronts from the schlieren images to reverse-calculate likely boundary or initial conditions and determine the temporal-spatial pressure profile across the output face of the detonator. LA-UR-07-1229
Neural correlates of auditory short-term memory in rostral superior temporal cortex.
Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo
2014-12-01
Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gravity modulates Listing's plane orientation during both pursuit and saccades
NASA Technical Reports Server (NTRS)
Hess, Bernhard J M.; Angelaki, Dora E.
2003-01-01
Previous studies have shown that the spatial organization of all eye orientations during visually guided saccadic eye movements (Listing's plane) varies systematically as a function of static and dynamic head orientation in space. Here we tested whether a similar organization also applies to the spatial orientation of eye positions during smooth pursuit eye movements. Specifically, we characterized the three-dimensional distribution of eye positions during horizontal and vertical pursuit (0.1 Hz, +/-15 degrees and 0.5 Hz, +/-8 degrees) at different eccentricities and elevations while rhesus monkeys were sitting upright or statically tilted in different roll and pitch positions. We found that the spatial organization of eye positions during smooth pursuit depends on static head orientation in space, similar to that during visually guided saccades and fixations. In support of recent modeling studies, these results are consistent with a role of gravity in defining the parameters of Listing's law.
Eye Gaze during Observation of Static Faces in Deaf People
Watanabe, Katsumi; Matsuda, Tetsuya; Nishioka, Tomoyuki; Namatame, Miki
2011-01-01
Knowing where people look when viewing faces provides an objective measure of the information entering the visual system, as well as of the cognitive strategies involved in facial perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 normal-hearing (11 male and 12 female) Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, the eye movements during facial observation differed between the participant groups. The deaf group looked at the eyes more frequently and for longer durations than at the nose, whereas the hearing group focused on the nose (or the central region of the face) more than on the eyes. These results suggest that the strategy employed to extract visual information when viewing static faces may differ between deaf and hearing people. PMID:21359223
Adaptation mechanisms, eccentricity profiles, and clinical implementation of red-on-white perimetry.
Zele, Andrew J; Dang, Trung M; O'Loughlin, Rebecca K; Guymer, Robyn H; Harper, Alex; Vingrys, Algis J
2008-05-01
To determine the visual adaptation and retinal eccentricity profiles for red flickering and static test stimuli, and to report a clinical implementation of these stimuli in visual perimetry. The adaptation profile for red-on-white perimetry stimuli was measured using a threshold-versus-intensity (TvI) paradigm at 0 and 12 degrees eccentricity, and by comparing the eccentricity-related sensitivity change for red and white, static and flickering targets in young normal trichromats (n = 5) and a group of dichromats (n = 5). A group of older normal control observers (n = 30) was tested, and retinal disease was evaluated in persons having age-related maculopathy (n = 35) and diabetes (n = 12). Adaptation and eccentricity profiles indicate that red static and flickering targets are detected by two mechanisms in the paramacular region, and by a single mechanism at >5 degrees eccentricity. The group data for the older normal observers show high inter-observer variability, with a generalized reduction in sensitivity across the entire visual field. Group data for the participants with age-related maculopathy show reduced sensitivities that were pronounced in the central retina. The group data for the diabetic observers showed sensitivities that were reduced at all eccentricities. The disease-related sensitivity decline was more apparent with red than with white stimuli. The adaptation profile and the change in sensitivity with retinal eccentricity for the red-on-white perimetric stimuli are consistent with two detection processes. In the macula, the putative detection mechanism is color-opponent with static targets and non-opponent with flickering targets. At peripheral field locations, the putative detection mechanism is non-opponent for both static and flicker targets. The long-wavelength stimuli are less affected by the preretinal absorption common to aging.
Red-on-white static and flicker perimetry may be useful for monitoring retinal disease, revealing greater abnormalities compared with conventional white-on-white perimetry, especially in the macula where two detection mechanisms are found.
Peripheral prism glasses: effects of moving and stationary backgrounds.
Shen, Jieming; Peli, Eli; Bowers, Alex R
2015-04-01
Unilateral peripheral prisms for homonymous hemianopia (HH) expand the visual field through peripheral binocular visual confusion, a stimulus for binocular rivalry that could lead to reduced predominance and partial suppression of the prism image, thereby limiting device functionality. Using natural-scene images and motion videos, we evaluated whether detection was reduced in binocular compared with monocular viewing. Detection rates of nine participants with HH or quadranopia and normal binocularity wearing peripheral prisms were determined for static checkerboard perimetry targets briefly presented in the prism expansion area and the seeing hemifield. Perimetry was conducted under monocular and binocular viewing with targets presented over videos of real-world driving scenes and still frame images derived from those videos. With unilateral prisms, detection rates in the prism expansion area were significantly lower in binocular than in monocular (prism eye) viewing on the motion background (medians, 13 and 58%, respectively, p = 0.008) but not the still frame background (medians, 63 and 68%, p = 0.123). When the stimulus for binocular rivalry was reduced by fitting prisms bilaterally in one HH and one normally sighted subject with simulated HH, prism-area detection rates on the motion background were not significantly different (p > 0.6) in binocular and monocular viewing. Conflicting binocular motion appears to be a stimulus for reduced predominance of the prism image in binocular viewing when using unilateral peripheral prisms. However, the effect was only found for relatively small targets. Further testing is needed to determine the extent to which this phenomenon might affect the functionality of unilateral peripheral prisms in more real-world situations.
Shariati, Farzaneh; Aryana, Kamran; Fattahi, Asiehsadat; Forghani, Mohammad N; Azarian, Azita; Zakavi, Seyed R; Sadeghi, Ramin; Ayati, Narjes; Sadri, Keyvan
2014-06-01
In this study, we evaluated the diagnostic accuracy of (99m)Tc-bombesin scintigraphy for differentiation of benign from malignant palpable breast lesions. (99m)Tc-Bombesin is a tracer with high affinity for gastrin-releasing peptide receptor, which is overexpressed on a variety of human tumors including breast carcinoma. We examined 33 consecutive women who were referred to our center with suspicious palpable breast lesions but had no definitive diagnosis in other imaging procedures. A volume of 370-444 MBq of (99m)Tc-bombesin was injected and dynamic 1-min images were taken for 20 min immediately after injection in anterior view. Thereafter, two static images in anterior and prone-lateral views were taken for 5 min. Finally, single-photon emission computed tomography images were taken for each patient. Definitive diagnosis was based on biopsy and histopathological evaluation. The scan findings were positive in 19 patients and negative in 11 on visual assessment of the planar and single-photon emission computed tomography images. Pathologic examination confirmed breast carcinoma in 12 patients with positive scans and benign pathology for 18 patients. The overall sensitivity, specificity, negative and positive predictive values, and accuracy of this radiotracer for diagnosis of breast cancer were 100, 66.1, 100, 63, and 76%, respectively. Semiquantitative analysis improved the specificity of the visual assessment from 66 to 84%. Our study showed that (99m)Tc-bombesin scintigraphy has a high sensitivity and negative predictive value for detecting malignant breast lesions, but the specificity and positive predictive value of this radiotracer for differentiation of malignant breast abnormalities from benign ones are relatively low.
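The reported figures follow from the standard confusion-matrix definitions. A minimal sketch, using counts inferred from the abstract (12 true positives, 7 false positives, 11 true negatives, 0 false negatives; an assumption, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary diagnostic test."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Counts inferred from the abstract (an assumption): 19 positive scans of
# which 12 were malignant; 11 negative scans, all with benign pathology.
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=12, fp=7, tn=11, fn=0)
print(sens, npv)   # 1.0 1.0 -- perfect sensitivity and NPV, as reported
```

With these counts, PPV evaluates to about 0.63 and accuracy to about 0.77, matching the reported 63% and 76%.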
NASA Astrophysics Data System (ADS)
Weiss, Renee Elizabeth
A study was conducted involving 118 undergraduate college students to determine the form of instruction that would result in better understanding and retention of an abstract subject involving motion. It was found, as hypothesized, that instruction with animated concrete graphics resulted in statistically significantly higher achievement than concrete static, abstract static, abstract animated, or text-only instruction. This finding held true for all three difficulty levels of the posttest questions. Three reasons are proposed for this result: concrete animation's attention-gaining quality, which promotes its inclusion in the selective perception process; its reduction in the level of abstraction, which aids short-term memory in clearly encoding the information into long-term memory; and the fact that the subject lent itself to imaging, so that dual coding was used beneficially. Neither the animation alone nor the concreteness of the illustration alone explained the result, as shown by the lesson performance of those groups. Only the combination of animation and concreteness of the visual resulted in higher achievement. There was no relationship between time spent on task and achievement. There was a weak correlation between the number of times the animation button was used and achievement on the immediate posttest; this did not hold true for the delayed posttest. It was concluded that the difference in achievement was due to the nature of the graphic and not to time spent on task. The majority (86%) of the participants found graphics useful to their understanding of the subject, and 41% of the text-only group commented that graphics would have helped their understanding. Most participants used some form of mental imagery in answering the posttest questions, although a few claimed they did not use mental images.
Walker, Robin; Bryan, Lauren; Harvey, Hannah; Riazi, Afsane; Anderson, Stephen J
2016-07-01
Technological devices such as smartphones and tablets are widely available and increasingly used as visual aids. This study evaluated the use of a novel app for tablets (MD_evReader) developed as a reading aid for individuals with a central field loss resulting from macular degeneration. The MD_evReader app scrolls text as single lines (similar to a news ticker) and is intended to enhance reading performance using the eccentric viewing technique by both reducing the demands on the eye movement system and minimising the deleterious effects of perceptual crowding. Reading performance with scrolling text was compared with reading static sentences, also presented on a tablet computer. Twenty-six people with low vision (diagnosis of macular degeneration) read static or dynamic text (scrolled from right to left), presented as a single line at high contrast on a tablet device. Reading error rates and comprehension were recorded for both text formats, and the participant's subjective experience of reading with the app was assessed using a simple questionnaire. The average reading speed for static and dynamic text was not significantly different and equal to or greater than 85 words per minute. The comprehension scores for both text formats were also similar, equal to approximately 95% correct. However, reading error rates were significantly (p = 0.02) less for dynamic text than for static text. The participants' questionnaire ratings of their reading experience with the MD_evReader were highly positive and indicated a preference for reading with this app compared with their usual method. Our data show that reading performance with scrolling text is at least equal to that achieved with static text and in some respects (reading error rate) is better than static text. Bespoke apps informed by an understanding of the underlying sensorimotor processes involved in a cognitive task such as reading have excellent potential as aids for people with visual impairments. 
© 2016 The Authors Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of College of Optometrists.
Khang, Hyun Soo; Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun
2002-06-01
Recently, a new static resistivity image reconstruction algorithm was proposed that utilizes internal current density data obtained by the magnetic resonance current density imaging technique. This new imaging method is called magnetic resonance electrical impedance tomography (MREIT). The derivation and performance of the J-substitution algorithm in MREIT have been reported, via computer simulations, as a new accurate and high-resolution static impedance imaging technique. In this paper, we present experimental procedures, denoising techniques, and image reconstructions using a 0.3-tesla (T) experimental MREIT system and saline phantoms. MREIT using the J-substitution algorithm effectively utilizes the internal current density information, resolving the problem inherent in conventional EIT: the low sensitivity of boundary measurements to changes in internal tissue resistivity values. Resistivity images of saline phantoms show an accuracy of 6.8%-47.2% at a spatial resolution of 64 x 64. Both can be significantly improved by using an MRI system with a better signal-to-noise ratio.
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Volumic visual perception: principally novel concept
NASA Astrophysics Data System (ADS)
Petrov, Valery
1996-01-01
The general concept of volumic view (VV) as a universal property of space is introduced. VV exists at every point of the universe that electromagnetic (EM) waves can reach and where a point or quasi-point receiver (detector) of EM waves can be placed. A classification of receivers is given for the first time, into three main categories: biological, man-made non-biological, and mathematically specified hypothetical receivers. The principally novel concept of volumic perception is introduced. It differs chiefly from the traditional concept, which traces back to Euclid and pre-Euclidean times and, much later, to Leonardo da Vinci's and Giovanni Battista della Porta's discoveries and to practical stereoscopy as introduced by C. Wheatstone. The basic idea of the novel concept is that humans and animals acquire volumic visual data flows in series rather than in parallel. In this case the brain is freed from the extremely sophisticated real-time parallel processing needed to combine two volumic visual data flows. Such a procedure seems hardly probable even for humans, who are unable to combine two primitive static stereoscopic images into one in less than a few seconds; some people are unable to perform this procedure at all.
Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.
Wiemers, Michael; Fischer, Martin H
2016-01-01
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
ERIC Educational Resources Information Center
Akaygun, Sevil
2016-01-01
Visualizing the chemical structure and dynamics of particles has been challenging for many students; therefore, various visualizations and tools have been used in chemistry education. For science educators, it has been important to understand how students visualize and represent particular phenomena--i.e., their mental models-- to design more…
Lightweight genome viewer: portable software for browsing genomics data in its chromosomal context
Faith, Jeremiah J; Olson, Andrew J; Gardner, Timothy S; Sachidanandam, Ravi
2007-01-01
Background Lightweight genome viewer (lwgv) is a web-based tool for visualization of sequence annotations in their chromosomal context. It performs most of the functions of larger genome browsers, while relying on standard flat-file formats and bypassing the database needs of most visualization tools. Visualization as an aid to discovery requires display of novel data in conjunction with static annotations in their chromosomal context. With database-based systems, displaying dynamic results requires temporary tables that need to be tracked for removal. Results lwgv simplifies the visualization of user-generated results on a local computer. The dynamic results of these analyses are written to transient files, which can import static content from a more permanent file. lwgv is currently used in many different applications, from whole-genome browsers to single-gene RNAi design visualization, demonstrating its applicability in a large variety of contexts and scales. Conclusion lwgv provides a lightweight alternative to large genome browsers for visualizing biological annotations and dynamic analyses in their chromosomal context. It is particularly suited for applications ranging from short sequences to medium-sized genomes when the creation and maintenance of a large software and database infrastructure is not necessary or desired. PMID:17877794
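The transient-plus-permanent file idea can be pictured with a short sketch. The "name start end" format below is hypothetical and simplified, not lwgv's actual flat-file grammar; it only illustrates overlaying dynamic analysis results on static annotations as text tracks.

```python
def read_annotations(lines):
    """Parse 'name start end' records (a hypothetical simplified format,
    not lwgv's actual flat-file grammar)."""
    return [(name, int(start), int(end))
            for name, start, end in (ln.split() for ln in lines if ln.strip())]

def ascii_track(annotations, length, symbol="="):
    """Render annotations as one text track over a sequence of given length
    (1-based, inclusive coordinates)."""
    track = ["."] * length
    for name, start, end in annotations:
        for i in range(start - 1, min(end, length)):
            track[i] = symbol
    return "".join(track)

static_ann = read_annotations(["geneA 3 8", "geneB 14 18"])   # permanent file
dynamic_ann = read_annotations(["hit1 6 10"])                 # transient results

print(ascii_track(static_ann, 20))            # ..======.....=====..
print(ascii_track(dynamic_ann, 20, "*"))      # .....*****..........
```

Stacking the two printed tracks shows the dynamic hit in its static annotation context, which is the kind of overlay the tool produces graphically.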
A 4DCT imaging-based breathing lung model with relative hysteresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid-mechanics-based moving mesh algorithm to produce a smoothly deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models, based on multiple images and on a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points, due to differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for a lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.
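The interpolation of airway surfaces between CT time points can be pictured with a minimal sketch. The paper evaluated several interpolation schemes; plain linear blending of vertex positions is shown here only to fix ideas, with hypothetical coordinates:

```python
def interpolate_mesh(verts_t0, verts_t1, alpha):
    """Linearly blend vertex positions between two CT snapshots.
    alpha = 0 gives the first frame, alpha = 1 the second."""
    return [tuple(a + alpha * (b - a) for a, b in zip(p0, p1))
            for p0, p1 in zip(verts_t0, verts_t1)]

# Two hypothetical airway-surface vertices at end-exhale and end-inhale.
exhale = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
inhale = [(0.0, 0.0, 1.0), (1.0, 0.5, 0.0)]
print(interpolate_mesh(exhale, inhale, 0.5))
# [(0.0, 0.0, 0.5), (1.0, 0.25, 0.0)]
```

Sweeping alpha through the 13 acquired time points (rather than between just two) is what lets the model follow the irregular, hysteretic motion rather than a simple linear breath.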
The 1980 and 1981 accident experience of civil airmen with selected visual pathology.
DOT National Transportation Integrated Search
1983-07-01
In studies of the 1974-76 accident experience of U.S. general aviation pilots with static physical defects, all the significantly increased rates and ratios were for visual defect categories--blindness, or absence of either eye, deficient distant vis...
Multiple Views of Space: Continuous Visual Flow Enhances Small-Scale Spatial Learning
ERIC Educational Resources Information Center
Holmes, Corinne A.; Marchette, Steven A.; Newcombe, Nora S.
2017-01-01
In the real world, we perceive our environment as a series of static and dynamic views, with viewpoint transitions providing a natural link from one static view to the next. The current research examined whether experiencing such transitions is fundamental to learning the spatial layout of small-scale displays. In Experiment 1, participants viewed a…
Stimulus-Driven Attentional Capture by a Static Discontinuity between Perceptual Groups
ERIC Educational Resources Information Center
Burnham, Bryan R.; Neely, James H.; Naginsky, Yelena; Thomas, Matthew
2010-01-01
After C. L. Folk, R. W. Remington, and J. C. Johnston (1992) proposed their contingent-orienting hypothesis, there has been an ongoing debate over whether purely stimulus-driven attentional capture can occur for visual events that are salient by virtue of a distinctive static property (as opposed to a dynamic property such as abrupt onset). The…
High-sensitivity strain visualization using electroluminescence technologies
NASA Astrophysics Data System (ADS)
Xu, Jian; Jo, Hongki
2016-04-01
Visualizing mechanical strain/stress changes is an emerging area in structural health monitoring. Several approaches exist for visualizing strain changes through the color/brightness change of materials subjected to mechanical stress, for example, using mechanoluminescent (ML) materials and mechanoresponsive polymers (MRP). However, these approaches are not yet effectively applicable to civil engineering systems, owing to insufficient sensitivity at the low strain levels typical of civil structures and to limitations in measuring both static and dynamic strain. In this study, the design and validation of high-sensitivity strain visualization using electroluminescence technologies are presented. A high-sensitivity Wheatstone bridge, whose bridge balance is precisely controllable, is used with a gain-adjustable amplifier. Monochrome electroluminescence (EL) technology is employed to convert both static and dynamic strain changes into brightness/color changes of the EL materials, through either a brightness change mode (BCM) or a color alternation mode (CAM). A prototype has been built and calibrated in the lab, and the linearity between strain and brightness change has been investigated.
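The bridge-plus-amplifier signal chain can be sketched with the standard quarter-bridge relation (one active strain gauge); the excitation voltage, gauge factor, and gain below are illustrative assumptions, not the prototype's specifications:

```python
def quarter_bridge_output(v_ex, gauge_factor, strain):
    """Output voltage of a quarter Wheatstone bridge (one active strain gauge).
    Exact relation; for small strain it reduces to v_ex * GF * strain / 4."""
    gf_e = gauge_factor * strain
    return v_ex * gf_e / (4 + 2 * gf_e)

# Hypothetical values: 5 V excitation, gauge factor 2, 100 microstrain.
v_out = quarter_bridge_output(5.0, 2.0, 100e-6)   # roughly 0.25 mV
# A gain-adjustable amplifier then scales this small signal up to the
# voltage range needed to drive the EL material (gain is illustrative).
print(1000.0 * v_out)
```

The sub-millivolt bridge output for 100 microstrain shows why a precisely balanced bridge and an adjustable gain stage are needed before the signal can drive a visible EL brightness change.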
Keshner, E A; Kenyon, R V
2000-01-01
We examined the effect of a 3-dimensional stereoscopic scene on segmental stabilization. Eight subjects participated in static sway and locomotion experiments with a visual scene that moved sinusoidally or at constant velocity about the pitch or roll axes. Segmental displacements, Fast Fourier Transforms, and Root Mean Square values were calculated. In both pitch and roll, subjects exhibited greater magnitudes of motion in head and trunk than ankle. Smaller amplitudes and frequent phase reversals suggested control of the ankle by segmental proprioceptive inputs and ground reaction forces rather than by the visual-vestibular signals. Postural controllers may set limits of motion at each body segment rather than be governed solely by a perception of the visual vertical. Two locomotor strategies were also exhibited, implying that some subjects could override the effect of the roll axis optic flow field. Our results demonstrate task dependent differences that argue against using static postural responses to moving visual fields when assessing more dynamic tasks.
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
Mapping as a visual health communication tool: promises and dilemmas.
Parrott, Roxanne; Hopfer, Suellen; Ghetian, Christie; Lengerich, Eugene
2007-01-01
In the era of evidence-based public health promotion and planning, the use of maps as a form of evidence to communicate about the multiple determinants of cancer is on the rise. Geographic information systems and mapping technologies make future proliferation of this strategy likely. Yet disease maps as a communication form remain largely unexamined. This content analysis considers the presence of multivariate information, credibility cues, and the communication function of publicly accessible maps for cancer control activities. Thirty-six state comprehensive cancer control plans were publicly available in July 2005 and were reviewed for the presence of maps. Fourteen of the 36 state cancer plans (39%) contained map images (N = 59 static maps). A continuum of map interactivity was observed, with 10 states having interactive mapping tools available to query and map cancer information. Four states had both cancer plans with map images and interactive mapping tools available to the public on their Web sites. Of the 14 state cancer plans that depicted map images, two displayed multivariate data in a single map. Nine of the 10 states with interactive mapping capability offered the option to display multivariate health risk messages. The most frequent content category mapped was cancer incidence and mortality, with stage at diagnosis infrequently available. The most frequent communication function served by the maps reviewed was redundancy, as maps repeated information contained in textual forms. The social and ethical implications for communicating about cancer through the use of visual geographic representations are discussed.
Mariappan, Leo; Hu, Gang; He, Bin
2014-02-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object's conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼ 1.5 mm spatial resolution corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction.
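The full width at half maximum of a point spread function, used above to estimate spatial resolution, can be computed from a sampled 1-D profile roughly as follows; this is a simple discrete approximation and the grid spacing is an assumed parameter.

```python
import numpy as np

def fwhm(profile, dx):
    """Full width at half maximum of a sampled 1-D point-spread-function profile."""
    p = np.asarray(profile, dtype=float)
    above = np.where(p >= p.max() / 2.0)[0]   # samples at or above half maximum
    return (above[-1] - above[0]) * dx        # width in the units of dx
```

For a Gaussian PSF the result should approach 2.3548·sigma as the sampling grid is refined.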
Jet Evolution Visualized and Quantified Using Filtered Rayleigh Scattering
NASA Technical Reports Server (NTRS)
Reeder, Mark F.
1996-01-01
Filtered Rayleigh scattering was utilized as a flow diagnostic in an investigation of a method for enhancing mixing in supersonic jets. The primary objectives of the study were to visualize the effect of vortex generating tabs on supersonic jets, to extract quantitative data from these planar visualizations, and to detect the presence of secondary flows (i.e., streamwise vorticity) generated by the tabs. An injection-seeded frequency-doubled Nd:YAG laser was the light source, and a 14-bit Princeton Instruments intensified charge-coupled device (ICCD) camera recorded the image through an iodine cell. The incident wavelength of the laser was held constant for each flow case so that the filter absorbed unwanted background light but permitted part of the thermally broadened Rayleigh scattering light to pass through. The visualizations were performed for axisymmetric jets (D=1.9 cm) operated at perfectly expanded conditions for Mach 1.0, 1.5, and 2.0. All data were recorded for the jet cross section at x/D=3. One hundred instantaneous images were recorded and averaged for each case, with a threshold set to eliminate unavoidable particulate scattering. A key factor in these experiments was that the stagnation air was heated such that the expansion of the flow in the nozzle resulted in the static temperature in the jet being equal to the ambient temperature, assuming isentropic flow. Since the thermodynamic conditions of the flow were approximately the same for each case, increases in the intensity recorded by the ICCD camera could be directly attributed to the Doppler shift, and hence velocity. Visualizations were performed for Mach 1.5 and Mach 2.0 jets with tabs inserted at the nozzle exit. The distortion of the jet was readily apparent and was consistent with Mie scattering-based visualizations.
Asymmetry in the intensities of the images indicates the presence of secondary flow patterns which are consistent with the streamwise vortices measured using more traditional diagnostics in subsonic jets with the same tab configurations. Because each tab causes shocks to form, the assumption of isentropic flow is not valid for these cases. However, within a reasonable first-order estimation, the intensity across the illuminated plane for these cases can be related to a value combining density and velocity.
Three new renal simulators for use in nuclear medicine
NASA Astrophysics Data System (ADS)
Dullius, Marcos; Fonseca, Mateus; Botelho, Marcelo; Cunha, Clêdison; Souza, Divanízia
2014-03-01
Renal scintigraphy provides both functional and anatomic information about renal flow and cortical function, and allows evaluation of the collecting system. The objective of this study was to develop and evaluate the performance of three renal phantoms: two static anthropomorphic phantoms and one dynamic phantom. The static images of the anthropomorphic phantoms were used for comparison with static renal scintigraphy with 99mTc-DMSA in different concentrations. These static phantoms were manufactured in two ways: one was made of acrylic using as a mold a human kidney preserved in formaldehyde, and the second was built with ABS (acrylonitrile butadiene styrene) in a 3D printer. The dynamic renal phantom was constructed of acrylic to simulate renal dynamics in scintigraphy with 99mTc-DTPA. These phantoms were scanned with static and dynamic protocols and compared with clinical data. Using these phantoms it is possible to acquire renal images similar to those obtained in clinical scintigraphy. Therefore, these new renal phantoms can be very effective for use in the quality control of renal scintigraphy and image processing systems.
Motion facilitates face perception across changes in viewpoint and expression in older adults.
Maguinness, Corrina; Newell, Fiona N
2014-12-01
Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Richardson, C A; Hides, J A; Wilson, S; Stanton, W; Snijders, C J
2004-07-01
The antigravity muscles of the lumbo-pelvic region, especially transversus abdominis (TrA), are important for the protection and support of the weightbearing joints. Measures of TrA function (the response to the postural cue of drawing in the abdominal wall) have been developed and quantified using magnetic resonance imaging (MRI). Cross-sections through the trunk allowed muscle contraction as well as the large fascial attachments of the TrA to be visualized. The cross sectional area (CSA) of the deep musculo-fascial system was measured at rest and in the contracted state, using static images as well as a cine sequence. In this developmental study, MRI measures were undertaken on a small sample of low back pain (LBP) and non LBP subjects. Results demonstrated that, in non LBP subjects, the draw in action produced a symmetrical deep musculo-fascial "corset" which encircles the abdomen. This study demonstrated a difference in this "corset" measure between subjects with and without LBP. These measures may also prove useful to quantify the effect of unloading in bedrest and microgravity exposure.
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is a very important image preprocessing technology, especially when the image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore very useful to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. Vision psychology holds that humans' perception of and response to an intensity fluctuation δu of a visual signal are weighted by the background stimulus u, rather than being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law, and Stevens's law. This paper integrates these three laws of human vision psychology and perception into a very popular image enhancement algorithm named Adaptive Plateau Equalization (APE). The experiments were done on high-bit-depth star images captured in night scenes and on infrared images, both static images and video streams. To reduce jitter in the video stream, the algorithm corrects the current frame's plateau value using the difference between the current and previous frames' plateau values. To account for random noise, the pixel-value mapping depends not only on the current pixel but also on the pixels in a window (usually 3×3) surrounding it. The results of the improved algorithm are evaluated by entropy analysis and visual perception analysis.
The experimental results showed that the improved APE algorithm enhanced image quality: the target and the surrounding auxiliary targets could be identified easily, and the noise was not amplified much. For low-quality images, the improved algorithm increases the information entropy and improves the aesthetic quality of images and video streams, while for high-quality images it does not degrade quality.
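A minimal sketch of plateau-limited histogram equalization in the APE family, together with a frame-to-frame plateau correction of the kind described above, might look like this; the particular update rule, data types, and output range are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def plateau_equalize(img, plateau, out_levels=256):
    """Histogram equalization with bin counts clipped at a plateau value (integer image)."""
    n_bins = int(img.max()) + 1
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, n_bins))
    clipped = np.minimum(hist, plateau)        # cap the influence of dominant bins
    cdf = np.cumsum(clipped).astype(float)
    cdf /= cdf[-1]                             # normalize to [0, 1]
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint16)
    return lut[img]                            # map each pixel through the lookup table

def smoothed_plateau(current, previous, alpha=0.5):
    """Damp frame-to-frame plateau jumps in a video stream (illustrative update rule)."""
    return previous + alpha * (current - previous)
```

Clipping the histogram before building the CDF is what prevents a few dominant intensity bins (e.g., dark sky background in a star image) from consuming most of the output range.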
DVA as a Diagnostic Test for Vestibulo-Ocular Reflex Function
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Appelbaum, Meghan
2010-01-01
The vestibulo-ocular reflex (VOR) stabilizes vision on earth-fixed targets by eliciting eye movements in response to changes in head position. How well the eyes perform this task can be functionally measured by the dynamic visual acuity (DVA) test. We designed a passive, horizontal DVA test to specifically study acuity and reaction time when looking at different target locations. Visual acuity was compared among 12 subjects using a standard Landolt C wall chart, a computerized static (no rotation) acuity test, and a dynamic acuity test while oscillating at 0.8 Hz (+/-60 deg/s). In addition, five trials with yaw oscillation randomly presented a visual target in one of nine different locations, with the size and presentation duration of the visual target varying across trials. The results showed a significant difference between the static and dynamic threshold acuities, as well as a significant difference between visual targets presented in the horizontal plane versus those in the vertical plane when comparing accuracy of vision and reaction time of the response. Visual acuity increased in proportion to the size of the visual target and with presentation durations between 150 and 300 msec. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of rotation. This DVA test could be used as a functional diagnostic test for visual-vestibular and neuro-cognitive impairments by assessing both accuracy and reaction time to acquire visual targets.
Visuospatial Cognition in Electronic Learning
ERIC Educational Resources Information Center
Shah, Priti; Freedman, Eric G.
2003-01-01
Static, animated, and interactive visualizations are frequently used in electronic learning environments. In this article, we provide a brief review of research on visuospatial cognition relevant to designing e-learning tools that use these displays. In the first section, we discuss the possible cognitive benefits of visualizations consider used…
Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-01-01
Ultrasound-based elastography techniques, including strain elastography (SE), acoustic radiation force impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI), have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. “ground truth”) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity – one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data – were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% of the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported.
The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and what have been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments. PMID:28075330
Building an open-source simulation platform of acoustic radiation force-based breast elastography
NASA Astrophysics Data System (ADS)
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-03-01
Ultrasound-based elastography techniques, including strain elastography, acoustic radiation force impulse (ARFI) imaging, point shear wave elastography and supersonic shear imaging (SSI), have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. ‘ground truth’) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity—one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data—were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% of the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast.
In summary, our initial results were consistent with our expectations and what have been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments.
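A common way to estimate shear wave speed of the kind reported above is to regress shear-wave arrival time (time to peak displacement) against lateral position; the sketch below uses synthetic displacement traces and is an illustrative estimator, not the platform's actual implementation.

```python
import numpy as np

def estimate_sws(traces, lateral_mm, fs):
    """Shear wave speed (m/s) from time-to-peak arrival times across lateral positions.

    traces: array of shape (n_positions, n_samples), axial displacement vs time.
    """
    t_peak = np.argmax(traces, axis=1) / fs                  # arrival time per position, s
    slope, _ = np.polyfit(t_peak, np.asarray(lateral_mm) * 1e-3, 1)
    return slope                                             # distance/time = speed
```

In a stiff inclusion the wave arrives sooner at each lateral position, so the fitted slope (and hence the SWS estimate) is larger than in the soft background.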
Extracellular matrix motion and early morphogenesis.
Loganathan, Rajprasad; Rongish, Brenda J; Smith, Christopher M; Filla, Michael B; Czirok, Andras; Bénazéraf, Bertrand; Little, Charles D
2016-06-15
For over a century, embryologists who studied cellular motion in early amniotes generally assumed that morphogenetic movement reflected migration relative to a static extracellular matrix (ECM) scaffold. However, as we discuss in this Review, recent investigations reveal that the ECM is also moving during morphogenesis. Time-lapse studies show how convective tissue displacement patterns, as visualized by ECM markers, contribute to morphogenesis and organogenesis. Computational image analysis distinguishes between cell-autonomous (active) displacements and convection caused by large-scale (composite) tissue movements. Modern quantification of large-scale 'total' cellular motion and the accompanying ECM motion in the embryo demonstrates that a dynamic ECM is required for generation of the emergent motion patterns that drive amniote morphogenesis. © 2016. Published by The Company of Biologists Ltd.
She, Hoi Lam; Roest, Arno A W; Calkoen, Emmeline E; van den Boogaard, Pieter J; van der Geest, Rob J; Hazekamp, Mark G; de Roos, Albert; Westenberg, Jos J M
2017-01-01
To evaluate the inflow pattern and flow quantification in patients with functional univentricular heart after Fontan's operation using 4D flow magnetic resonance imaging (MRI) with streamline visualization, compared with the conventional 2D flow approach. Seven patients with functional univentricular heart after Fontan's operation and twenty-three healthy controls underwent 4D flow MRI. In two orthogonal two-chamber planes, streamline visualization was applied, and inflow angles with peak inflow velocity (PIV) were measured. Transatrioventricular flow quantification was assessed using conventional 2D multiplanar reformation (MPR) and 4D MPR tracking the annulus and perpendicular to the streamline inflow at PIV, and they were validated with net forward aortic flow. Inflow angles at PIV in the patient group demonstrated a wide variation of angles and directions when compared with the control group (P < .01). The use of 4D flow MRI with streamline visualization in quantification of the transatrioventricular flow had smaller limits of agreement (2.2 ± 4.1 mL; 95% limits of agreement -5.9 to 10.3 mL) when compared with the static plane assessment from 2D flow MRI (-2.2 ± 18.5 mL; 95% limits of agreement -38.5 to 34.1 mL). A stronger correlation was present in the 4D flow between the aortic and transatrioventricular flow (R² in 4D flow: 0.893; in 2D flow: 0.786). Streamline visualization in 4D flow MRI confirmed variable atrioventricular inflow directions in patients with functional univentricular heart with previous Fontan's procedure. 4D flow aided generation of measurement planes according to the blood flow dynamics and has proven to be more accurate than the fixed-plane 2D flow measurements when calculating flow quantifications. © 2016 Wiley Periodicals, Inc.
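The bias and limits-of-agreement figures quoted above are consistent with a standard Bland-Altman computation, bias ± 1.96 times the standard deviation of the paired differences (e.g., 2.2 ± 1.96 × 4.1 ≈ -5.9 to 10.3 mL). A minimal sketch with hypothetical paired measurements:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                      # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Narrower limits of agreement against the reference (net forward aortic flow) are what support the claim that the 4D flow measurement is more accurate than the fixed-plane 2D approach.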
Matching Voice and Face Identity from Static Images
ERIC Educational Resources Information Center
Mavica, Lauren W.; Barenholtz, Elan
2013-01-01
Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two, same-gender models, while simultaneously…
Representational Momentum in Older Adults
ERIC Educational Resources Information Center
Piotrowski, Andrea S.; Jakobson, Lorna S.
2011-01-01
Humans have a tendency to perceive motion even in static images that simply "imply" movement. This tendency is so strong that our memory for actions depicted in static images is distorted in the direction of implied motion--a phenomenon known as representational momentum (RM). In the present study, we created an RM display depicting a pattern of…
Development of visual cortical function in infant macaques: A BOLD fMRI study
Meeson, Alan; Munk, Matthias H. J.; Kourtzi, Zoe; Movshon, J. Anthony; Logothetis, Nikos K.; Kiorpes, Lynne
2017-01-01
Functional brain development is not well understood. In the visual system, neurophysiological studies in nonhuman primates show quite mature neuronal properties near birth although visual function is itself quite immature and continues to develop over many months or years after birth. Our goal was to assess the relative development of two main visual processing streams, dorsal and ventral, using BOLD fMRI in an attempt to understand the global mechanisms that support the maturation of visual behavior. Seven infant macaque monkeys (Macaca mulatta) were repeatedly scanned, while anesthetized, over an age range of 102 to 1431 days. Large rotating checkerboard stimuli induced BOLD activation in visual cortices at early ages. Additionally we used static and dynamic Glass pattern stimuli to probe BOLD responses in primary visual cortex and two extrastriate areas: V4 and MT-V5. The resulting activations were analyzed with standard GLM and multivoxel pattern analysis (MVPA) approaches. We analyzed three contrasts: Glass pattern present/absent, static/dynamic Glass pattern presentation, and structured/random Glass pattern form. For both GLM and MVPA approaches, robust coherent BOLD activation appeared relatively late in comparison to the maturation of known neuronal properties and the development of behavioral sensitivity to Glass patterns. Robust differential activity to Glass pattern present/absent and dynamic/static stimulus presentation appeared first in V1, followed by V4 and MT-V5 at older ages; there was no reliable distinction between the two extrastriate areas. A similar pattern of results was obtained with the two analysis methods, although MVPA analysis showed reliable differential responses emerging at later ages than GLM. Although BOLD responses to large visual stimuli are detectable, our results with more refined stimuli indicate that global BOLD activity changes as behavioral performance matures. 
This reflects a hierarchical development of the visual pathways. Since fMRI BOLD reflects neural activity at a population level, our results indicate that, although individual neurons might be adult-like, a longer maturation process takes place at the population level. PMID:29145469
Dynamic 68Ga-DOTATOC PET/CT and static image in NET patients. Correlation of parameters during PRRT.
Van Binnebeek, Sofie; Koole, Michel; Terwinghe, Christelle; Baete, Kristof; Vanbilloen, Bert; Haustermans, Karine; Clement, Paul M; Bogaerts, Kris; Verbruggen, Alfons; Nackaerts, Kris; Van Cutsem, Eric; Verslype, Chris; Mottaghy, Felix M; Deroose, Christophe M
2016-06-28
To investigate the relationship between the dynamic parameter (Ki) and static image-derived parameters of 68Ga-DOTATOC-PET, and to determine which static parameter best reflects underlying somatostatin-receptor-expression (SSR) levels in neuroendocrine tumours (NETs). 20 patients with metastasized NETs underwent a dynamic and static 68Ga-DOTATOC-PET before PRRT and at 7 and 40 weeks after the first administration of 90Y-DOTATOC (in total 4 cycles were planned); 175 lesions were defined and analyzed on the dynamic as well as static scans. Quantitative analysis was performed using the software PMOD. One to five target lesions per patient were chosen and delineated manually on the baseline dynamic scan, and then on the corresponding static 68Ga-DOTATOC-PET and on the dynamic and static 68Ga-DOTATOC-PET at the other time points; SUVmax and SUVmean of the lesions were assessed on the other six scans. The input function was retrieved from the abdominal aorta on the images. Ki was then calculated using the Patlak plot. Finally, 5 reference regions for normalization of SUVtumour were delineated on the static scans, resulting in 5 ratios (SUVratio). SUVmax and SUVmean of the tumoural lesions on the dynamic 68Ga-DOTATOC-PET had a very strong correlation with the corresponding parameters in the static scan (R²: 0.94 and 0.95 respectively). SUVmax, SUVmean and Ki of the lesions showed a good linear correlation; the SUVratios correlated poorly with Ki. A significantly better correlation was noticed between Ki and SUVtumour (max and mean) (p < 0.0001). As the dynamic parameter Ki correlates best with the absolute SUVtumour, SUVtumour best reflects underlying SSR levels in NETs.
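The Patlak graphical analysis used above estimates Ki as the slope of C_tissue(t)/C_plasma(t) against the normalized time ∫C_plasma dτ / C_plasma(t), fitted over the late, linear portion of the curve. A sketch, with the linearity cutoff t_star as an assumed parameter:

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=10.0):
    """Influx constant Ki from the slope of the Patlak plot (late linear phase)."""
    # cumulative integral of the plasma input function (trapezoid rule)
    integral = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * (c_plasma[1:] + c_plasma[:-1]) / 2.0)))
    x = integral / c_plasma                    # "normalized time" axis
    y = c_tissue / c_plasma
    late = t >= t_star                         # assume linearity after t_star
    ki, _ = np.polyfit(x[late], y[late], 1)
    return ki
```

The intercept of the same fit (discarded here) estimates the initial distribution volume; only the slope is the Ki compared with the static SUV parameters above.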
Carlson, Victor R; Sheehan, Frances T; Shen, Aricia; Yao, Lawrence; Jackson, Jennifer N; Boden, Barry P
2017-07-01
The tibial tubercle to trochlear groove (TT-TG) distance is used for screening patients with a variety of patellofemoral joint disorders to determine who may benefit from patellar medialization using a tibial tubercle osteotomy. Clinically, the TT-TG distance is predominantly based on static imaging with the knee in full extension; however, the predictive ability of this measure for dynamic patellar tracking patterns is unknown. To determine whether the static TT-TG distance can predict dynamic lateral displacement of the patella. Cohort study (Diagnosis); Level of evidence, 2. The static TT-TG distance was measured at full extension for 70 skeletally mature subjects with (n = 32) and without (n = 38) patellofemoral pain. The dynamic patellar tracking patterns were assessed from approximately 45° to 0° of knee flexion by use of dynamic cine-phase contrast magnetic resonance imaging. For each subject, the value of dynamic lateral tracking corresponding to the exact knee angle measured in the static images for that subject was identified. Linear regression analysis determined the predictive ability of static TT-TG distance for dynamic patellar lateral displacement for each cohort. The static TT-TG distance measured with the knee in full extension cannot accurately predict dynamic lateral displacement of the patella. There was weak predictive ability among subjects with patellofemoral pain (r² = 0.18, P = .02) and no predictive capability among controls. Among subjects with patellofemoral pain and static TT-TG distances of 15 mm or more, 8 of 13 subjects (62%) demonstrated neutral or medial patellar tracking patterns. The static TT-TG distance cannot accurately predict dynamic lateral displacement of the patella. A large percentage of patients with patellofemoral pain and pathologically large TT-TG distances may have neutral to medial maltracking patterns.
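The predictive ability reported above (e.g., r² = 0.18) is the coefficient of determination of a simple linear regression; a minimal sketch with hypothetical paired measurements:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)                 # unexplained variation
    ss_tot = np.sum((y - np.mean(y)) ** 2)          # total variation
    return 1.0 - ss_res / ss_tot
```

An r² of 0.18 means the static TT-TG distance explains only about 18% of the variance in dynamic lateral displacement, which is why the abstract concludes it cannot accurately predict tracking.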
NASA Astrophysics Data System (ADS)
Chao, Zhang; Shijie, Su; Yilin, Yang; Guofu, Wang; Chao, Wang
2017-11-01
To address the static balancing of the controllable pitch propeller (CPP), a high-efficiency static balance method based on a double-layer measuring-table structure and a gantry robot is adopted, integrating torque measurement and corrective polishing for CPP blades. The control system was developed with Microsoft Visual Studio 2015, and a composite platform prototype was built. Using this prototype, the complete process of torque measurement and corrective polishing was tested on a 300 kg class controllable pitch propeller blade. The results show that the composite platform can correct the static balance of the blade accurately, efficiently, and with little manual labor, and can replace the traditional method of static blade balancing.
Contour junctions defined by dynamic image deformations enhance perceptual transparency.
Kawabe, Takahiro; Nishida, Shin'ya
2017-11-01
The majority of work on the perception of transparency has focused on static images with luminance-defined contour junctions, but recent work has shown that dynamic image sequences with dynamic image deformations also provide information about transparency. The present study demonstrates that when part of a static image is dynamically deformed, contour junctions at which deforming and nondeforming contours are connected facilitate the deformation-based perception of a transparent layer. We found that the impression of a transparent layer was stronger when a dynamically deforming area was adjacent to static nondeforming areas than when presented alone. When contour junctions were not formed at the dynamic-static boundaries, however, the impression of a transparent layer was not facilitated by the presence of static surrounding areas. The effect of the deformation-defined junctions was attenuated when the spatial pattern of luminance contrast at the junctions was inconsistent with the perceived transparency related to luminance contrast, while the effect did not change when the spatial luminance pattern was consistent with it. In addition, the results showed that contour completions across the junctions were required for the perception of a transparent layer. These results indicate that deformation-defined junctions that involve contour completion between deforming and nondeforming regions enhance the perception of a transparent layer, and that deformation-based perceptual transparency can be promoted by the simultaneous presence of appropriately configured luminance and contrast, features that can also by themselves produce the perception of transparency.
ERIC Educational Resources Information Center
Wu, Chih-Fu; Chiang, Ming-Chin
2013-01-01
This study provides experimental results as an educational reference to help instructors give students a better way to learn orthographic views in graphics courses. A visual experiment was held to explore the comprehension differences between 2D static and 3D animated object features; the goal was to reduce the possible misunderstanding…
Technology-Based Content through Virtual and Physical Modeling: A National Research Study
ERIC Educational Resources Information Center
Ernst, Jeremy V.; Clark, Aaron C.
2009-01-01
Visualization is becoming more prevalent as an application in science, engineering, and technology related professions. The analysis of static and dynamic graphical visualization provides data solutions and understandings that go beyond traditional forms of communication. The study of technology-based content and the application of conceptual…
The Dynamic Discourse of Visual Literacy in Experience Design
ERIC Educational Resources Information Center
Search, Patricia
2009-01-01
Traditionally, people have used perspectives of space and time to define a sense of place and personal identity. Western cultures interpret place and time as static entities. In interactive multimedia computing, visual literacy defines new dimensions in communication that are reshaping traditional Western concepts of place and time. Experience…
An experimental study of the nonlinear dynamic phenomenon known as wing rock
NASA Technical Reports Server (NTRS)
Arena, A. S., Jr.; Nelson, R. C.; Schiff, L. B.
1990-01-01
An experimental investigation into the physical phenomena associated with limit cycle wing rock on slender delta wings has been conducted. The model used was a slender flat plate delta wing with 80-deg leading edge sweep. The investigation concentrated on three main areas: motion characteristics obtained from time history plots, static and dynamic flow visualization of vortex position, and static and dynamic flow visualization of vortex breakdown. The flow visualization studies are correlated with model motion to determine the relationship of vortex position and vortex breakdown with the dynamic rolling moments. Dynamic roll moment coefficient curves reveal rate-dependent hysteresis, which drives the motion. Vortex position correlated with time and model motion shows a time lag in the normal position of the upward-moving wing vortex. This time lag may be the mechanism responsible for the hysteresis. Vortex breakdown is shown to have a damping effect on the motion.
Photovoltaic restoration of sight in rodents with retinal degeneration (Conference Presentation)
NASA Astrophysics Data System (ADS)
Palanker, Daniel V.
2017-02-01
To restore vision in patients who lost their photoreceptors due to retinal degeneration, we developed a photovoltaic subretinal prosthesis which converts light into pulsed electric current, stimulating the nearby inner retinal neurons. Visual information is projected onto the retina by video goggles using pulsed near-infrared (~900 nm) light. This design avoids the use of bulky electronics and wiring, thereby greatly reducing the surgical complexity. Optical activation of the photovoltaic pixels allows scaling the implants to thousands of electrodes, and multiple modules can be tiled under the retina to expand the visual field. We found that similarly to normal vision, retinal response to prosthetic stimulation exhibits flicker fusion at high frequencies (>20 Hz), adaptation to static images, and non-linear summation of subunits in the receptive fields. Photovoltaic arrays with 70 µm pixels restored visual acuity up to a single pixel pitch, which is only two times lower than natural acuity in rats. If these results translate to human retina, such implants could restore visual acuity up to 20/250. With eye scanning and perceptual learning, human patients might even cross the 20/200 threshold of legal blindness. In collaboration with Pixium Vision, we are preparing this system (PRIMA) for a clinical trial. To further improve visual acuity, we are developing smaller pixels, down to 40 µm, and a 3-D interface to improve proximity to the target neurons. Scalability, ease of implantation, and tiling of these wireless modules to cover a large visual field, combined with high resolution, open the door to highly functional restoration of sight.
Peigneux, P; Salmon, E; van der Linden, M; Garraux, G; Aerts, J; Delfiore, G; Degueldre, C; Luxen, A; Orban, G; Franck, G
2000-06-01
Humans, like numerous other species, strongly rely on the observation of gestures of other individuals in their everyday life. It is hypothesized that the visual processing of human gestures is sustained by a specific functional architecture, even at an early prelexical cognitive stage, different from that required for the processing of other visual entities. In the present PET study, the neural basis of visual gesture analysis was investigated with functional neuroimaging of brain activity during naming and orientation tasks performed on pictures of either static gestures (upper-limb postures) or tridimensional objects. To prevent automatic object-related cerebral activation during the visual processing of postures, only intransitive postures were selected, i. e., symbolic or meaningless postures which do not imply the handling of objects. Conversely, only intransitive objects which cannot be handled were selected to prevent gesture-related activation during their visual processing. Results clearly demonstrate a significant functional segregation between the processing of static intransitive postures and the processing of intransitive tridimensional objects. Visual processing of objects elicited mainly occipital and fusiform gyrus activity, while visual processing of postures strongly activated the lateral occipitotemporal junction, encroaching upon area MT/V5, involved in motion analysis. These findings suggest that the lateral occipitotemporal junction, working in association with area MT/V5, plays a prominent role in the high-level perceptual analysis of gesture, namely the construction of its visual representation, available for subsequent recognition or imitation. Copyright 2000 Academic Press.
Visual and proprioceptive interaction in patients with bilateral vestibular loss
Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.
2014-01-01
Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564
Subramanian, Ramanathan; Shankar, Divya; Sebe, Nicu; Melcher, David
2014-03-26
A basic question in vision research regards where people look in complex scenes and how this influences their performance in various tasks. Previous studies with static images have demonstrated a close link between where people look and what they remember. Here, we examined the pattern of eye movements when participants watched neutral and emotional clips from Hollywood-style movies. Participants answered multiple-choice memory questions concerning visual and auditory scene details immediately upon viewing 1-min-long neutral or emotional movie clips. Fixations were more narrowly focused for emotional clips, and immediate memory for object details was worse compared to matched neutral scenes, implying preferential attention to emotional events. Although we found the expected correlation between where people looked and what they remembered for neutral clips, this relationship broke down for emotional clips. When participants were subsequently presented with key frames (static images) extracted from the movie clips such that presentation duration of the target objects (TOs) corresponding to the multiple-choice questions was matched and the earlier questions were repeated, more fixations were observed on the TOs, and memory performance also improved significantly, confirming that emotion modulates the relationship between gaze position and memory performance. Finally, in a long-term memory test, old/new recognition performance was significantly better for emotional scenes as compared to neutral scenes. Overall, these results are consistent with the hypothesis that emotional content draws eye fixations and strengthens memory for the scene gist while weakening encoding of peripheral scene details.
Klemm, Matthias; Schweitzer, Dietrich; Peters, Sven; Sauer, Lydia; Hammer, Martin; Haueisen, Jens
2015-01-01
Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique for measuring the in vivo autofluorescence intensity decays generated by endogenous fluorophores in the ocular fundus. Here, we present a software package called FLIM eXplorer (FLIMX) for analyzing FLIO data. Specifically, we introduce a new adaptive binning approach as an optimal tradeoff between the spatial resolution and the number of photons required per pixel. We also expand existing decay models (multi-exponential, stretched exponential, spectral global analysis, incomplete decay) to account for the layered structure of the eye and present a method to correct for the influence of the crystalline lens fluorescence on the retina fluorescence. Subsequently, the Holm-Bonferroni method is applied to FLIO measurements to allow for group comparisons between patients and controls on the basis of fluorescence lifetime parameters. The performance of the new approaches was evaluated in five experiments. Specifically, we evaluated static and adaptive binning in a diabetes mellitus patient, we compared the different decay models in a healthy volunteer and performed a group comparison between diabetes patients and controls. An overview of the visualization capabilities and a comparison of static and adaptive binning is shown for a patient with macular hole. FLIMX’s applicability to fluorescence lifetime imaging microscopy is shown in the ganglion cell layer of a porcine retina sample, obtained by a laser scanning microscope using two-photon excitation. PMID:26192624
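The adaptive-binning tradeoff described above (spatial resolution versus photons per pixel) can be sketched as follows. This is our own illustration of the general idea, not the FLIMX implementation: for each pixel, the neighborhood is grown until it contains a minimum photon count, so binning is coarse only where the signal is weak.

```python
# Sketch of adaptive binning (an assumed form of the approach, not FLIMX's
# algorithm): grow a square window per pixel until it holds enough photons.

def adaptive_bin(counts, min_photons, max_radius=3):
    """Return per-pixel photon sums over adaptively sized square windows."""
    rows, cols = len(counts), len(counts[0])
    binned = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for radius in range(max_radius + 1):
                total = sum(
                    counts[rr][cc]
                    for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                    for cc in range(max(0, c - radius), min(cols, c + radius + 1))
                )
                if total >= min_photons or radius == max_radius:
                    binned[r][c] = total
                    break
    return binned

# A bright pixel keeps full resolution; dim pixels borrow neighbors' photons.
print(adaptive_bin([[10, 10], [10, 10]], min_photons=30))  # [[40, 40], [40, 40]]
```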
NASA Technical Reports Server (NTRS)
Lackner, J. R.; Dizio, P.
1998-01-01
We evaluated the combined effects on reaching movements of the transient, movement-dependent Coriolis forces and the static centrifugal forces generated in a rotating environment. Specifically, we assessed the effects of comparable Coriolis force perturbations in different static force backgrounds. Two groups of subjects made reaching movements toward a just-extinguished visual target before rotation began, during 10 rpm counterclockwise rotation, and after rotation ceased. One group was seated on the axis of rotation, the other 2.23 m away. The resultant of gravity and centrifugal force on the hand was 1.0 g for the on-center group during 10 rpm rotation, and 1.031 g for the off-center group because of the 0.25 g centrifugal force present. For both groups, rightward Coriolis forces, approximately 0.2 g peak, were generated during voluntary arm movements. The endpoints and paths of the initial per-rotation movements were deviated rightward for both groups by comparable amounts. Within 10 subsequent reaches, the on-center group regained baseline accuracy and straight-line paths; however, even after 40 movements the off-center group had not resumed baseline endpoint accuracy. Mirror-image aftereffects occurred when rotation stopped. These findings demonstrate that manual control is disrupted by transient Coriolis force perturbations and that adaptation can occur even in the absence of visual feedback. An increase, even a small one, in background force level above normal gravity does not affect the size of the reaching errors induced by Coriolis forces nor does it affect the rate of reacquiring straight reaching paths; however, it does hinder restoration of reaching accuracy.
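The force levels quoted above can be verified with a short calculation (the rpm, radius, and g values are from the abstract; the arithmetic is ours):

```python
import math

# Check of the quoted force levels: 10 rpm at r = 2.23 m should give roughly
# 0.25 g of centrifugal force, and a resultant with gravity of about 1.031 g.

G = 9.81                                 # m/s^2, standard gravity
omega = 10 * 2 * math.pi / 60            # 10 rpm converted to rad/s
r = 2.23                                 # m, off-center seat distance

centrifugal_g = omega**2 * r / G                  # centrifugal accel. in g
resultant_g = math.sqrt(1.0 + centrifugal_g**2)   # combined with 1 g gravity

print(round(centrifugal_g, 2))  # 0.25
print(round(resultant_g, 3))    # 1.031
```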
2015-01-01
class within Microsoft Visual Studio. It has been tested on and is compatible with Microsoft Vista, 7, and 8 and Visual Studio Express 2008… the ScreenRecorder utility assumes a basic understanding of compiling and running C++ code within Microsoft Visual Studio. This report does not… of Microsoft Visual Studio, the ScreenRecorder utility was developed as a C++ class that can be compiled as a library (static or dynamic) to be
Uono, Shota; Sato, Wataru; Toichi, Motomi
2010-03-01
Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of expressed emotion in 13 individuals with PDD and 13 typically developing controls. We presented dynamic and static emotional (fearful and happy) expressions. Participants were asked to match a changeable emotional face display with the last presented image. The results showed that both groups perceived the last image of dynamic facial expression to be more emotionally exaggerated than the static facial expression. This finding suggests that individuals with PDD have an intact perceptual mechanism for processing dynamic information in another individual's face.
Early esophageal cancer detection using RF classifiers
NASA Astrophysics Data System (ADS)
Janse, Markus H. A.; van der Sommen, Fons; Zinger, Svitlana; Schoon, Erik J.; de With, Peter H. N.
2016-03-01
Esophageal cancer is one of the fastest rising forms of cancer in the Western world. Using High-Definition (HD) endoscopy, gastroenterology experts can identify esophageal cancer at an early stage. Recent research shows that early cancer can be found using a state-of-the-art computer-aided detection (CADe) system based on analyzing static HD endoscopic images. Our research aims at extending this system by applying Random Forest (RF) classification, which introduces a confidence measure for detected cancer regions. To visualize this data, we propose a novel automated annotation system, employing the unique characteristics of the previous confidence measure. This approach allows reliable modeling of multi-expert knowledge and provides essential data for real-time video processing, to enable future use of the system in a clinical setting. The performance of the CADe system is evaluated on a 39-patient dataset, containing 100 images annotated by 5 expert gastroenterologists. The proposed system reaches a precision of 75% and recall of 90%, thereby improving the state-of-the-art results by 11 and 6 percentage points, respectively.
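The confidence measure that Random Forest classification introduces can be illustrated in miniature. The following is a sketch of the concept only, not the authors' CADe pipeline: each tree in the ensemble votes, and the fraction of trees voting "lesion" serves as a per-region confidence (the feature, thresholds, and data below are hypothetical).

```python
import random

# Minimal bagged-stump ensemble illustrating a Random Forest confidence
# score: the fraction of bootstrap-trained trees voting for the positive
# class. A sketch of the idea, not the published CADe system.

def train_stump(samples):
    """Pick the threshold on a 1-D feature that best separates the labels."""
    best = None
    for x, _ in samples:
        correct = sum((xi > x) == yi for xi, yi in samples)
        if best is None or correct > best[1]:
            best = (x, correct)
    return best[0]

def forest_confidence(train, x, n_trees=25, seed=0):
    """Fraction of bootstrap-trained stumps classifying x as positive."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_trees):
        boot = [rng.choice(train) for _ in train]
        votes += x > train_stump(boot)
    return votes / n_trees

# Hypothetical data: a texture score above ~0.5 indicates a lesion region.
data = [(0.1, False), (0.2, False), (0.3, False),
        (0.7, True), (0.8, True), (0.9, True)]
print(forest_confidence(data, 0.85))  # high confidence, near 1.0
print(forest_confidence(data, 0.05))  # low confidence, near 0.0
```

Thresholding such a confidence map is what allows the annotation system described above to mark only reliably detected regions.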
Facial expression, size, and clutter: Inferences from movie structure to emotion judgments and back.
Cutting, James E; Armstrong, Kacie L
2016-04-01
The perception of facial expressions and objects at a distance are entrenched psychological research venues, but their intersection is not. We were motivated to study them together because of their joint importance in the physical composition of popular movies: shots that show a larger image of a face typically have shorter durations than those in which the face is smaller. For static images, we explore the time it takes viewers to categorize the valence of different facial expressions as a function of their visual size. In two studies, we find that smaller faces take longer to categorize than those that are larger, and this pattern interacts with local background clutter. More clutter creates crowding and impedes the interpretation of expressions for more distant faces but not proximal ones. Filmmakers at least tacitly know this. In two other studies, we show that contemporary movies lengthen shots that show smaller faces, and even more so with increased clutter.
NASA Astrophysics Data System (ADS)
Yang, Eunice
2016-02-01
This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to provide students assistance with spatial visualization of forces and more practice in solving/visualizing statics problems compared to the traditional pencil-and-paper method. ForceEffect analyzes static rigid-body systems using free-body diagrams (FBDs) and provides solutions in real time. It is a cost-free software that is available for download on the Internet. The software is supported on the iOS™, Android™, and Google Chrome™ platforms. It is easy to use and the learning curve is approximately two hours using the tutorial provided within the app. The use of ForceEffect has the ability to provide students different problem modalities (textbook, real-world, and design) to help them acquire and improve on skills that are needed to solve force equilibrium problems. Although this paper focuses on the engineering mechanics statics course, the technology discussed is also relevant to the introductory physics course.
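The kind of force-equilibrium problem such an app solves can be worked by hand. The numbers below are a hypothetical textbook example, not taken from ForceEffect: a weight hangs from two cables, and summing forces to zero gives two equations in the two unknown tensions.

```python
import math

# Worked 2-D particle statics example (hypothetical values): a 100 N weight
# hangs from cables at 30 and 45 degrees above the horizontal.
# Horizontal equilibrium: -T1*cos(a1) + T2*cos(a2) = 0
# Vertical equilibrium:    T1*sin(a1) + T2*sin(a2) = W

W = 100.0
a1, a2 = math.radians(30), math.radians(45)

# Substitute T2 = T1*cos(a1)/cos(a2) into the vertical equation and solve.
T1 = W / (math.sin(a1) + math.cos(a1) * math.tan(a2))
T2 = T1 * math.cos(a1) / math.cos(a2)

# Verify both equilibrium equations are satisfied by the solution.
assert abs(-T1 * math.cos(a1) + T2 * math.cos(a2)) < 1e-9
assert abs(T1 * math.sin(a1) + T2 * math.sin(a2) - W) < 1e-9
print(round(T1, 1), round(T2, 1))  # 73.2 89.7
```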
Antonova, A A; Absatova, K A; Korneev, A A; Kurgansky, A V
2015-01-01
The production of drawing movements was studied in 29 right-handed children of 9 to 11 years old. The movements were sequences of horizontal and vertical linear strokes joined at right angles (open polygonal chains), referred to throughout the paper as trajectories. The length of a trajectory varied from 4 to 6 segments. The trajectories were presented visually to a subject in static (line drawing) and dynamic (moving cursor that leaves no trace) modes. The subjects were asked to draw (copy) a trajectory in response to a delayed go-signal (short click) as fast as possible without lifting the pen. The production latency time, the average movement duration along a trajectory segment, and the overall number of errors committed by a subject during trajectory production were analyzed. A comparison of the children's data with similar data in adults (16 subjects) shows the following. First, a substantial reduction in error rate is observed in the age range between 9 and 11 years old for both static and dynamic modes of trajectory presentation, with children of 11 still committing more errors than adults. Second, the average movement duration shortens with age while the latency time tends to increase. Third, unlike the adults, the children of 9-11 do not show any difference in latency time between static and dynamic modes of visual presentation of trajectories. The difference in trajectory production between adults and children is attributed to the predominant involvement of on-line programming in children and pre-programming in adults.
Accurate estimation of object location in an image sequence using helicopter flight data
NASA Technical Reports Server (NTRS)
Tang, Yuan-Liang; Kasturi, Rangachar
1994-01-01
In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.
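The radiating-flow geometry mentioned above can be made concrete. The following is our illustration of how a focus of expansion (FOE) can be recovered from flow vectors, not the authors' epipolar-constraint system: each flow vector lies on a line through the FOE, so the FOE is the least-squares intersection of those lines.

```python
# Sketch of focus-of-expansion recovery from a radiating optical-flow field
# (our illustration of the geometry, not the published system). Each flow
# vector at point p lies on a line through the FOE; with normal n to the
# flow direction, n . FOE = n . p. Solve the 2x2 normal equations.

def estimate_foe(points, flows):
    """Least-squares intersection of lines through each point along its flow."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (fx, fy) in zip(points, flows):
        nx, ny = -fy, fx                  # normal to the flow direction
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        d = nx * px + ny * py
        b1 += nx * d; b2 += ny * d
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Synthetic check: in a static scene, flow radiates from a known FOE.
foe = (0.3, -0.2)
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0), (0.5, 0.8)]
flo = [((x - foe[0]) * 0.1, (y - foe[1]) * 0.1) for x, y in pts]
print(estimate_foe(pts, flo))  # recovers approximately (0.3, -0.2)
```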
ERIC Educational Resources Information Center
Howard, William; Williams, Richard; Yao, Jason
2010-01-01
Solid modeling is widely used as a teaching tool in summer activities with high school students. The addition of motion analysis allows concepts from statics and dynamics to be introduced to students in both qualitative and quantitative ways. Two sets of solid modeling projects--carnival rides and Rube Goldberg machines--are shown to allow the…
ERIC Educational Resources Information Center
Korat, Ofra; Levin, Iris; Ben-Shabt, Anat; Shneor, Dafna; Bokovza, Limor
2014-01-01
We investigated the extent to which a dictionary embedded in an e-book with static or dynamic visuals with and without printed focal words affects word learning. A pretest-posttest design was used to measure gains of expressive words' meaning and their spelling. The participants included 250 Hebrew-speaking second graders from…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopatiuk-Tirpak, O.; Langen, K. M.; Meeks, S. L.
2008-09-15
The performance of a next-generation optical computed tomography scanner (OCTOPUS-5X) is characterized in the context of three-dimensional gel dosimetry. Large-volume (2.2 L), muscle-equivalent, radiation-sensitive polymer gel dosimeters (BANG-3) were used. Improvements in scanner design leading to shorter acquisition times are discussed. The spatial resolution, detectable absorbance range, and reproducibility are assessed. An efficient method for calibrating gel dosimeters using the depth-dose relationship is applied, with photon- and electron-based deliveries yielding equivalent results. A procedure involving a preirradiation scan was used to reduce the edge artifacts in reconstructed images, thereby increasing the useful cross-sectional area of the dosimeter by nearly a factor of 2. Dose distributions derived from optical density measurements using the calibration coefficient show good agreement with the treatment planning system simulations and radiographic film measurements. The feasibility of use for motion (four-dimensional) dosimetry is demonstrated on an example comparing dose distributions from static and dynamic delivery of a single-field photon plan. The capability to visualize three-dimensional dose distributions is also illustrated.
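A depth-dose calibration of the kind mentioned above can be sketched as a one-parameter fit. This is our illustration under the assumption that optical density is proportional to dose, not the published procedure, and the numbers are synthetic:

```python
# Sketch of a depth-dose calibration (our illustration, synthetic data):
# optical density (OD) is assumed proportional to dose, so the calibration
# coefficient is the least-squares slope through the origin of OD vs. the
# known depth-dose values.

def calibration_coefficient(doses, ods):
    """Least-squares slope c minimizing sum((od - c*dose)^2)."""
    return sum(d * o for d, o in zip(doses, ods)) / sum(d * d for d in doses)

doses = [0.5, 1.0, 1.5, 2.0]          # Gy, from a known depth-dose curve
ods = [0.101, 0.198, 0.302, 0.399]    # measured optical densities
c = calibration_coefficient(doses, ods)
print(round(c, 3))  # ~0.2 OD per Gy for this synthetic data
```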
The effect of viewing a virtual environment through a head-mounted display on balance.
Robert, Maxime T; Ballaz, Laurent; Lemay, Martin
2016-07-01
In the next few years, several head-mounted displays (HMD) will be publicly released, making virtual reality more accessible. HMD are expected to be widely popular at home for gaming but also in clinical settings, notably for training and rehabilitation. HMD can be used in both seated and standing positions; however, presently, the impact of HMD on balance remains largely unknown. It is therefore crucial to examine the impact of viewing a virtual environment through an HMD on standing balance. The objective was to compare static and dynamic balance in a virtual environment perceived through an HMD with balance in the physical environment. The visual representation of the virtual environment was based on filmed images of the physical environment and was therefore highly similar. This is an observational study in healthy adults. No significant difference was observed between the two environments for static balance. However, dynamic balance was more perturbed in the virtual environment than in the physical environment. HMD should be used with caution because of their detrimental impact on dynamic balance. Sensorimotor conflict possibly explains the impact of HMD on balance. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Wing, David J.
1998-01-01
The static internal performance of a multiaxis-thrust-vectoring, spherical convergent flap (SCF) nozzle with a non-rectangular divergent duct was obtained in the model preparation area of the Langley 16-Foot Transonic Tunnel. Duct cross sections of hexagonal and bowtie shapes were tested. Additional geometric parameters included throat area (power setting), pitch flap deflection angle, and yaw gimbal angle. Nozzle pressure ratio was varied from 2 to 12 for dry power configurations and from 2 to 6 for afterburning power configurations. Approximately a 1-percent loss in thrust efficiency from SCF nozzles with a rectangular divergent duct was incurred as a result of internal oblique shocks in the flow field. The internal oblique shocks were the result of cross flow generated by the vee-shaped geometric throat. The hexagonal and bowtie nozzles had mirror-imaged flow fields and therefore similar thrust performance. Thrust vectoring was not hampered by the three-dimensional internal geometry of the nozzles. Flow visualization indicates pitch thrust-vector angles larger than 10 deg may be achievable with minimal adverse effect on, or a possible gain in, resultant thrust efficiency as compared with the performance at a pitch thrust-vector angle of 10 deg.
Pupillary Responses to Static Images of Men and Women: A Possible Measure of Sexual Interest?
Snowden, Robert J; McKinnon, Aimee; Fitoussi, Julie; Gray, Nicola S
2017-12-08
The pupil dilates to images that are arousing. In Experiment 1, we examined if the pupil's response to brief presentations (2,000 ms) of static images could be used to identify individuals' sexual orientation. Participants were grouped according to their self-reported gender and sexual orientation (male heterosexual, N = 20; male bisexual, N = 13; male homosexual, N = 19; female heterosexual, N = 28; female bisexual, N = 21; female homosexual, N = 17). Pupil size was monitored to images of men in seminude poses, women in seminude poses, or neutral images. Every group showed the same pattern of responses, with the greatest dilation to male images, then female images, and least dilation to the neutral images. Experiment 2 used more tightly controlled stimuli and tested at two different image durations (150 and 3,000 ms). Both heterosexual men (N = 18) and women (N = 20) showed greater pupil dilation to images of nude men than to nude women. However, in Experiment 3, where we reduced the erotic content by using images of clothed models, both heterosexual men and women showed greater pupil dilation to images of women. The results showed that while the pupil does dilate strongly to sexual imagery, its response to these brief static images does not correspond to a person's sexual orientation in a simple manner.
LaZerte, Stefanie E; Reudink, Matthew W; Otter, Ken A; Kusack, Jackson; Bailey, Jacob M; Woolverton, Austin; Paetkau, Mark; de Jong, Adriaan; Hill, David J
2017-10-01
Radio frequency identification (RFID) provides a simple and inexpensive approach for examining the movements of tagged animals, which can provide information on species behavior and ecology, such as habitat/resource use and social interactions. In addition, tracking animal movements is appealing to naturalists, citizen scientists, and the general public and thus represents a tool for public engagement in science and science education. Although a useful tool, the large amount of data collected using RFID may quickly become overwhelming. Here, we present an R package (feedr) we have developed for loading, transforming, and visualizing time-stamped, georeferenced data, such as RFID data collected from static logger stations. Using our package, data can be transformed from raw RFID data to visits, presence (regular detections by a logger over time), movements between loggers, displacements, and activity patterns. In addition, we provide several conversion functions to allow users to format data for use in functions from other complementary R packages. Data can also be visualized through static or interactive maps or as animations over time. To increase accessibility, data can be transformed and visualized either through R directly, or through the companion site: http://animalnexus.ca, an online, user-friendly, R-based Shiny Web application. This system can be used by professional and citizen scientists alike to view and study animal movements. We have designed this package to be flexible and to be able to handle data collected from other stationary sources (e.g., hair traps, static very high frequency (VHF) telemetry loggers, observations of marked individuals in colonies or staging sites), and we hope this framework will become a meeting point for science, education, and community awareness of the movements of animals. We aim to inspire citizen engagement while simultaneously enabling robust scientific analysis.
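The raw-reads-to-visits transformation described above can be sketched in a few lines. This shows the underlying idea only, not feedr's actual algorithm (feedr is an R package; the gap cutoff below is a hypothetical parameter): consecutive detections of the same animal at the same logger are merged into one visit whenever the gap between reads is short.

```python
from datetime import datetime, timedelta

# Sketch of grouping time-stamped RFID reads into visits (the idea behind
# the transformation, not feedr's implementation): reads separated by at
# most `max_gap` belong to the same visit.

def reads_to_visits(timestamps, max_gap=timedelta(minutes=3)):
    """Group sorted detection times into (start, end) visit intervals."""
    visits = []
    for t in sorted(timestamps):
        if visits and t - visits[-1][1] <= max_gap:
            visits[-1] = (visits[-1][0], t)   # extend the current visit
        else:
            visits.append((t, t))             # start a new visit
    return visits

reads = [datetime(2017, 6, 1, 8, 0), datetime(2017, 6, 1, 8, 1),
         datetime(2017, 6, 1, 8, 2), datetime(2017, 6, 1, 9, 30)]
print(len(reads_to_visits(reads)))  # 2 visits: 8:00-8:02 and 9:30
```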
Programmable Remapper with Single Flow Architecture
NASA Technical Reports Server (NTRS)
Fisher, Timothy E. (Inventor)
1993-01-01
An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.
A Survey of Educational Uses of Molecular Visualization Freeware
ERIC Educational Resources Information Center
Craig, Paul A.; Michel, Lea Vacca; Bateman, Robert C.
2013-01-01
As biochemists, one of our most captivating teaching tools is the use of molecular visualization. It is a compelling medium that can be used to communicate structural information much more effectively with interactive animations than with static figures. We have conducted a survey to begin a systematic evaluation of the current classroom usage of…
Enhancing Interactive Tutorial Effectiveness through Visual Cueing
ERIC Educational Resources Information Center
Jamet, Eric; Fernandez, Jonathan
2016-01-01
The present study investigated whether learning how to use a web service with an interactive tutorial can be enhanced by cueing. We expected the attentional guidance provided by visual cues to facilitate the selection of information in static screen displays that corresponded to spoken explanations. Unlike most previous studies in this area, we…
Changing Perspective: Zooming in and out during Visual Search
ERIC Educational Resources Information Center
Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel
2013-01-01
Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…
ERIC Educational Resources Information Center
Lin, Huifen; Chen, Tsuiping
2007-01-01
The purpose of this experimental study was to compare the effects of different types of computer-generated visuals (static versus animated) and advance organizers (descriptive versus question) in enhancing comprehension and retention of a content-based lesson for learning English as a Foreign Language (EFL). Additionally, the study investigated…
Why Is Visual Search Superior in Autism Spectrum Disorder?
ERIC Educational Resources Information Center
Joseph, Robert M.; Keehn, Brandon; Connolly, Christine; Wolfe, Jeremy M.; Horowitz, Todd S.
2009-01-01
This study investigated the possibility that enhanced memory for rejected distractor locations underlies the superior visual search skills exhibited by individuals with autism spectrum disorder (ASD). We compared the performance of 21 children with ASD and 21 age- and IQ-matched typically developing (TD) children in a standard static search task…
Visual and Ocular Control Anomalies in Relation to Reading Difficulty.
ERIC Educational Resources Information Center
Bedwell, C. H.; And Others
1980-01-01
The visual behavior under both static and dynamic viewing conditions was examined in a group of 13-year-old successful readers, compared with a group of the same age retarded in reading. Research supports the notion that problems of dynamic binocular vision and control while reading are important. (Author/KC)
ERIC Educational Resources Information Center
Lin, Huifen
2011-01-01
The purpose of this study was to investigate the relative effectiveness of different types of visuals (static and animated) and instructional strategies (no strategy, questions, and questions plus feedback) used to complement visualized materials on students' learning of different educational objectives in a computer-based instructional (CBI)…
Active visual search in non-stationary scenes: coping with temporal variability and uncertainty
NASA Astrophysics Data System (ADS)
Ušćumlić, Marija; Blankertz, Benjamin
2016-02-01
Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase the temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamics of visual content can increase the temporal uncertainty of cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim of keeping the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings yielded promising performance. Significance.
Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamics of ocular behavior (i.e., dwell time and fixation duration), in an active search task. In addition, our method for improving single-trial detection performance in this adverse scenario is an important step toward making brain-computer interfacing technology available for human-computer interaction applications.
Piponnier, Jean-Claude; Hanssens, Jean-Marie; Faubert, Jocelyn
2009-01-14
To examine the respective roles of central and peripheral vision in the control of posture, body sway amplitude (BSA) and postural perturbations (given by velocity root mean square or vRMS) were calculated in a group of 19 healthy young adults. The stimulus was a 3D tunnel, either static or moving sinusoidally in the anterior-posterior direction. There were nine visual field conditions: four central conditions (4, 7, 15, and 30 degrees); four peripheral conditions (central occlusions of 4, 7, 15, and 30 degrees); and a full visual field condition (FF). The virtual tunnel respected all the aspects of a real physical tunnel (i.e., stereoscopy and size increase with proximity). The results show that, under static conditions, central and peripheral visual fields appear to have equal importance for the control of stance. In the presence of an optic flow, peripheral vision plays a crucial role in the control of stance, since it is responsible for a compensatory sway, whereas central vision has an accessory role that seems to be related to spatial orientation.
ERIC Educational Resources Information Center
Goff, Eric E.; Reindl, Katie M.; Johnson, Christina; McClean, Phillip; Offerdahl, Erika G.; Schroeder, Noah L.; White, Alan R.
2017-01-01
The use of external representations (ERs) to introduce concepts in undergraduate biology has become increasingly common. Two of the most prevalent are static images and dynamic animations. While previous studies comparing static images and dynamic animations have resulted in somewhat conflicting findings in regards to learning outcomes, the…
Mariappan, Leo; Hu, Gang; He, Bin
2014-01-01
Purpose: Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. Methods: In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object's conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. Results: The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution, corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. Conclusions: The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction. PMID:24506649
An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information, including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
The role of movement in the recognition of famous faces.
Lander, K; Christie, F; Bruce, V
1999-11-01
The effects of movement on the recognition of famous faces shown in difficult conditions were investigated. Images were presented as negatives, upside down (inverted), and thresholded. Results indicate that, under all these conditions, moving faces were recognized significantly better than static ones. One possible explanation of this effect could be that a moving sequence contains more static information about the different views and expressions of the face than does a single static image. However, even when the amount of static information was equated (Experiments 3 and 4), there was still an advantage for moving sequences that contained their original dynamic properties. The results suggest that the dynamics of the motion provide additional information, helping to access an established familiar face representation. Both the theoretical and the practical implications for these findings are discussed.
Application of GFP technique for cytoskeleton visualization onboard the International Space Station.
Kordyum, E L; Shevchenko, G V; Yemets, A I; Nyporko, A I; Blume, Ya B
2005-03-01
The cytoskeleton has recently attracted wide attention from cell and molecular biologists due to its crucial role in gravity sensing and transduction. Most cytoskeletal research is conducted by means of immunohistochemical reactions, different modifications of which are well suited to ground-based experiments. For performance onboard space vehicles, however, they represent a rather complicated technique that requires time and special skills from astronauts. In addition, immunocytochemistry provides only static images of the cytoskeleton arrangement in fixed cells, while its localization in living cells is needed for a better understanding of cytoskeletal function. In this connection, we propose a new approach for cytoskeleton visualization onboard the ISS: application of green fluorescent protein (GFP) from Aequorea victoria, which has unique properties as a marker for protein localization in vivo. The creation of chimeric protein-GFP gene constructs and the generation of transformed plant cells incorporating protein-GFP in their cytoskeleton will provide a simple and efficient model for screening the functional status of the cytoskeleton in microgravity. © 2004 Elsevier Ltd. All rights reserved.
Ultra-high resolution spectral domain optical coherence tomography using supercontinuum light source
NASA Astrophysics Data System (ADS)
Lim, Yiheng; Yatagai, Toyohiko; Otani, Yukitoshi
2016-04-01
An ultra-high-resolution spectral domain optical coherence tomography (SD-OCT) system was developed using a cost-effective supercontinuum laser. A spectral filter consisting of a dispersive prism, a cylindrical lens, and a right-angle prism was built to transmit the wavelengths in the 680-940 nm range to the OCT system. The SD-OCT achieved 1.9 μm axial resolution, and the sensitivity was estimated to be 91.5 dB. A zero-crossing fringe-matching method, which maps the wavelengths to the pixel indices of the spectrometer, was proposed for the OCT spectral calibration. A double-sided foam tape as a static sample and the tip of a middle finger as a biological sample were measured by the OCT. The adhesive and the internal structure of the foam tape were successfully visualized in three dimensions. Sweat ducts were clearly observed in the OCT images at very high resolution. To the best of our knowledge, this is the first demonstration of ultra-high-resolution visualization of sweat ducts by OCT.
Universal brain systems for recognizing word shapes and handwriting gestures during reading
Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J. L.; Dehaene, Stanislas
2012-01-01
Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner’s area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies. PMID:23184998
Static sign language recognition using 1D descriptors and neural networks
NASA Astrophysics Data System (ADS)
Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César
2012-10-01
A framework for static sign language recognition using descriptors that represent 2D images as 1D data, together with artificial neural networks, is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers report using a special color in gloves or the background for hand-shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture the images. Static signs for the digits 1 to 9 of American Sign Language were used, and a multilayer perceptron reached 100% recognition with cross-validation.
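The paper's exact descriptors are not reproduced here, but the contour-based family it mentions can be illustrated with a centroid-distance signature, a standard way to reduce a 2D hand contour to a fixed-length 1D curve suitable as neural-network input (the function and names below are our own, not the authors'):

```python
# Sketch of a 1D contour descriptor: the centroid-distance signature.
# It is translation-invariant by construction and scale-normalized here.
import numpy as np

def centroid_distance_signature(contour, n_samples=64):
    """contour: (N, 2) array of ordered (x, y) boundary points."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    dists = np.linalg.norm(pts - centroid, axis=1)
    # resample to a fixed length so every sign feeds the same network input
    idx = np.linspace(0, len(dists) - 1, n_samples)
    sig = np.interp(idx, np.arange(len(dists)), dists)
    return sig / sig.max()  # scale-normalize

# sanity check with a circular contour: every boundary point is
# equidistant from the centroid, so the signature is flat at 1.0
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(centroid_distance_signature(circle)[:4])
```

A real hand contour would produce a signature that oscillates with the fingers, which is what gives the descriptor its discriminative power for static signs.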
[Research on Spectral Polarization Imaging System Based on Static Modulation].
Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng
2015-04-01
The main disadvantages of traditional spectral polarization imaging systems are a complex structure, moving parts, and low throughput. A novel spectral polarization imaging system is discussed, based on static polarization intensity modulation combined with Savart polariscope interference imaging. The imaging system can obtain real-time spectral information and the four Stokes polarization parameters. Compared with conventional methods, the advantages of the imaging system are compactness, low mass, no moving parts, no electrical control, no slit, and high throughput. The system structure and the basic theory are introduced. The experimental system, established in the laboratory, consists of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collection and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a toy plane were imaged using the experimental system, verifying its ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up; the statistical error of polarization-degree detection is less than 5%. The validity and feasibility of the basic principle are proved by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification, and remote sensing detection.
A systematic review of visual image theory, assessment, and use in skin cancer and tanning research.
McWhirter, Jennifer E; Hoffman-Goetz, Laurie
2014-01-01
Visual images increase attention, comprehension, and recall of health information and influence health behaviors. Health communication campaigns on skin cancer and tanning often use visual images, but little is known about how such images are selected or evaluated. A systematic review of peer-reviewed, published literature on skin cancer and tanning was conducted to determine (a) what visual communication theories were used, (b) how visual images were evaluated, and (c) how visual images were used in the research studies. Seven databases were searched (PubMed/MEDLINE, EMBASE, PsycINFO, Sociological Abstracts, Social Sciences Full Text, ERIC, and ABI/INFORM) resulting in 5,330 citations. Of those, 47 met the inclusion criteria. Only one study specifically identified a visual communication theory guiding the research. No standard instruments for assessing visual images were reported. Most studies lacked, to varying degrees, comprehensive image description, image pretesting, full reporting of image source details, adequate explanation of image selection or development, and example images. The results highlight the need for greater theoretical and methodological attention to visual images in health communication research in the future. To this end, the authors propose a working definition of visual health communication.
Smithline, Howard A; Caglar, Selin; Blank, Fidela S J
2010-01-01
This study assessed the convergent validity of 2 dyspnea measures, the transition measure and the change measure, by comparing them with each other in patients admitted to the hospital with acute decompensated heart failure. Static measures of dyspnea were obtained at baseline (pre-static measure) and at 1 and 4 hours (post-static measures). The change measure was calculated as the difference between the pre-static and post-static measures. Transition measures were obtained at 1 and 4 hours. Visual analog scales and Likert scales were used. Both physicians and patients rated dyspnea independently. A total of 112 patients had complete data sets at times 0 and 1 hour, and 86 patients had complete data sets at all 3 time points. Correlations were calculated between the transition measures and the static measures (pre-static, post-static, and change measure). Bland-Altman plots were generated, and the mean difference and limits of agreement between the transition measures and the change measures were calculated. In general, short-term dyspnea assessments using transition measures and serial static measures cannot be used to validate each other in this population of patients admitted with acute decompensated heart failure. © 2010 Wiley Periodicals, Inc.
High-speed atomic force microscopy for observing protein molecules in dynamic action
NASA Astrophysics Data System (ADS)
Ando, T.
2017-02-01
Directly observing protein molecules in dynamic action at high spatiotemporal resolution has long been a holy grail of biological science. To realize this long-sought dream, I have been developing high-speed atomic force microscopy (HS-AFM) since 1993. Tremendous strides have recently been made in its imaging speed and low invasiveness. Consequently, various dynamic molecular actions, including the bipedal walking of myosin V and the rotary propagation of structural changes in F1-ATPase, were successfully captured on video. The visualized dynamic images not only provided irrefutable evidence for speculated actions of the protein molecules but also brought new discoveries inaccessible with other approaches, thus giving great mechanistic insight into how the molecules function. HS-AFM is now transforming "static" structural biology into dynamic structural bioscience.
Visual Image Sensor Organ Replacement: Implementation
NASA Technical Reports Server (NTRS)
Maluf, A. David (Inventor)
2011-01-01
Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
Vaz, Pedro G; Humeau-Heurtier, Anne; Figueiras, Edite; Correia, Carlos; Cardoso, João
2017-12-29
Laser speckle contrast imaging (LSCI) is a non-invasive microvascular blood flow assessment technique with good temporal and spatial resolution. Most LSCI systems, including commercial devices, can perform only qualitative blood flow evaluation, which is a major limitation of this technique. There are several factors that prevent the utilization of LSCI as a quantitative technique. Among these factors, we can highlight the effect of static scatterers. The goal of this work was to study the influence of differences in static and dynamic scatterer concentration on laser speckle correlation and contrast. In order to achieve this, a laser speckle prototype was developed and tested using an optical phantom with various concentrations of static and dynamic scatterers. It was found that the laser speckle correlation could be used to estimate the relative concentration of static/dynamic scatterers within a sample. Moreover, the speckle correlation proved to be independent of the dynamic scatterer velocity, which is a fundamental characteristic to be used in contrast correction.
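The speckle contrast the authors study is conventionally defined as K = σ/⟨I⟩, the ratio of standard deviation to mean intensity over small windows of the raw speckle image. A minimal sketch of that computation (our own code, not the authors' prototype):

```python
# Minimal sketch: local speckle contrast K = std / mean over sliding
# windows of a raw speckle intensity image. Static scatterers keep K
# elevated in time-averaged frames, biasing contrast-based flow estimates.
import numpy as np

def local_speckle_contrast(img, win=7):
    """img: 2D intensity array; returns K for each full win x win window."""
    h, w = img.shape
    K = np.empty((h - win + 1, w - win + 1))
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            patch = img[i:i + win, j:j + win]
            K[i, j] = patch.std() / patch.mean()
    return K

rng = np.random.default_rng(0)
# fully developed dynamic speckle has a negative-exponential intensity
# distribution, for which K is close to 1
speckle = rng.exponential(scale=1.0, size=(64, 64))
print(local_speckle_contrast(speckle).mean())
```

In practice K is converted to a correlation time and hence a flow index; the abstract's point is that the static-scatterer fraction perturbs this conversion, which is why a correlation-based estimate of that fraction is useful for contrast correction.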
Clinical applications at ultrahigh field (7 T). Where does it make the difference?
Trattnig, Siegfried; Bogner, Wolfgang; Gruber, Stephan; Szomolanyi, Pavol; Juras, Vladimir; Robinson, Simon; Zbýň, Štefan; Haneder, Stefan
2016-09-01
Presently, three major MR vendors provide commercial 7-T units for clinical research under ethical permission, with the number of operating 7-T systems having increased to over 50. This rapid increase indicates the growing interest in ultrahigh-field MRI because of improved clinical results with regard to morphological as well as functional and metabolic capabilities. As the signal-to-noise ratio scales linearly with the field strength (B0 ) of the scanner, the most obvious application at 7 T is to obtain higher spatial resolution in the brain, musculoskeletal system and breast. Of specific clinical interest for neuro-applications is the cerebral cortex at 7 T, for the detection of changes in cortical structure as a sign of early dementia, as well as for the visualization of cortical microinfarcts and cortical plaques in multiple sclerosis. In the imaging of the hippocampus, even subfields of the internal hippocampal anatomy and pathology can be visualized with excellent resolution. The dynamic and static blood oxygenation level-dependent contrast increases linearly with the field strength, which significantly improves the pre-surgical evaluation of eloquent areas before tumor removal. Using susceptibility-weighted imaging, the plaque-vessel relationship and iron accumulation in multiple sclerosis can be visualized for the first time. Multi-nuclear clinical applications, such as sodium imaging for the evaluation of repair tissue quality after cartilage transplantation and (31) P spectroscopy for the differentiation between non-alcoholic benign liver disease and potentially progressive steatohepatitis, are only possible at ultrahigh fields. Although neuro- and musculoskeletal imaging have already demonstrated the clinical superiority of ultrahigh fields, whole-body clinical applications at 7 T are still limited, mainly because of the lack of suitable coils. 
The purpose of this article was therefore to review the clinical studies that have been performed thus far at 7 T, compared with 3 T, as well as those studies performed at 7 T that cannot be routinely performed at 3 T. Copyright © 2015 John Wiley & Sons, Ltd.
Ketelsen, D; Werner, M K; Thomas, C; Tsiflikas, I; Koitschev, A; Reimann, A; Claussen, C D; Heuschmid, M
2009-01-01
Important oropharyngeal structures can be obscured by metallic artifacts from dental implants. The aim of this study was to compare the image quality of multiplanar reconstructions (MPRs) and an angulated spiral in dual-source computed tomography (DSCT) of the neck. Sixty-two patients were included for neck imaging with DSCT. MPRs from an axial dataset and an additional short spiral parallel to the floor of the mouth were acquired. Key anatomical structures were then rated for the extent to which they were affected by dental artifacts on a visual scale ranging from 1 (fewest artifacts) to 4 (most artifacts). In MPRs, 87.1% of anatomical structures had significant artifacts (3.12 +/- 0.86), while in angulated slices the key anatomical structures of the oropharynx showed negligible artifacts (1.28 +/- 0.46). The reduction in artifact severity with primarily angulated slices was significant (p < 0.01). MPRs are not capable of reducing dental artifacts sufficiently. In patients with dental artifacts overlying the anatomical structures of the oropharynx, an additional short angulated spiral parallel to the floor of the mouth is recommended and should be applied in daily routine. Because of the static gantry design of DSCT, the use of a flexible head holder is essential.
Web-Based Geospatial Visualization of GPM Data with CesiumJS
NASA Technical Reports Server (NTRS)
Lammers, Matt
2018-01-01
Advancements in the capabilities of JavaScript frameworks and web browsing technology have made online visualization of large geospatial datasets, such as those coming from precipitation satellites, viable. These data benefit from being visualized on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS (http://cesiumjs.org), developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. This presentation will describe how CesiumJS has been used in three-dimensional visualization products developed as part of the NASA Precipitation Processing System (PPS) STORM data-order website. Existing methods of interacting with Global Precipitation Measurement (GPM) Mission data primarily focus on two-dimensional static images, whether displaying vertical slices or horizontal surface/height-level maps. These methods limit interactivity with the robust three-dimensional data coming from the GPM core satellite. Integrating the data with CesiumJS in a web-based user interface has allowed us to create the following products. We have linked an on-the-fly visualization tool for any GPM/partner satellite orbit to the data-order interface. A version of this tool also focuses on high-impact weather events. It enables viewing of combined radar and microwave-derived precipitation data on mobile devices and in a way that can be embedded into other websites. We have also used CesiumJS to visualize a method of integrating gridded precipitation data with modeled wind speeds that animates over time. Emphasis in the presentation will be placed on how a variety of technical methods were used to create these tools, and how the flexibility of the CesiumJS framework facilitates creative approaches to interacting with the data.
Visual flow scene effects on the somatogravic illusion in non-pilots.
Eriksson, Lars; von Hofsten, Claes; Tribukait, Arne; Eiken, Ola; Andersson, Peter; Hedström, Johan
2008-09-01
The somatogravic illusion (SGI) is easily broken when the pilot looks out the aircraft window during daylight flight, but it has proven difficult to break or even reduce the SGI in non-pilots in simulators using synthetic visual scenes. Could visual-flow scenes that accommodate compensatory head movement reduce the SGI in naive subjects? We investigated the effects of visual cues on the SGI induced by a human centrifuge. The subject was equipped with a head-tracked, head-mounted display (HMD) and was seated in a fixed gondola facing the center of rotation. The angular velocity of the centrifuge increased from near zero until a 0.57-G centripetal acceleration was attained, resulting in a tilt of the gravitoinertial force vector, corresponding to a pitch-up of 30 degrees. The subject indicated perceived horizontal continuously by means of a manual adjustable-plate system. We performed two experiments with within-subjects designs. In Experiment 1, the subjects (N = 13) viewed a darkened HMD and a presentation of simple visual flow beneath a horizon. In Experiment 2, the subjects (N = 12) viewed a darkened HMD, a scene including symbology superimposed on simple visual flow and horizon, and this scene without visual flow (static). In Experiment 1, visual flow reduced the SGI from 12.4 +/- 1.4 degrees (mean +/- SE) to 8.7 +/- 1.5 degrees. In Experiment 2, the SGI was smaller in the visual flow condition (9.3 +/- 1.8 degrees) than with the static scene (13.3 +/- 1.7 degrees) and without HMD presentation (14.5 +/- 2.3 degrees), respectively. It is possible to reduce the SGI in non-pilots by means of a synthetic horizon and simple visual flow conveyed by a head-tracked HMD. This may reflect the power of a more intuitive display for reducing the SGI.
Putting Motion in Emotion: Do Dynamic Presentations Increase Preschoolers' Recognition of Emotion?
ERIC Educational Resources Information Center
Nelson, Nicole L.; Russell, James A.
2011-01-01
In prior research, preschoolers were surprisingly poor at naming the emotion purportedly signaled by prototypical facial expressions--when shown as static images. To determine whether this poor performance is due to the use of static stimuli, rather than dynamic, we presented preschoolers (3-5 years) with facial expressions as either static images…
Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady
2015-01-01
The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.
Numerosity underestimation with item similarity in dynamic visual display.
Au, Ricky K C; Watanabe, Katsumi
2013-01-01
The estimation of numerosity of a large number of objects in a static visual display is possible even at short durations. Such coarse approximations of numerosity are distinct from subitizing, in which the number of objects can be reported with high precision when a small number of objects are presented simultaneously. The present study examined numerosity estimation of visual objects in dynamic displays and the effect of object similarity on numerosity estimation. In the basic paradigm (Experiment 1), two streams of dots were presented and observers were asked to indicate which of the two streams contained more dots. Streams consisting of dots that were identical in color were judged as containing fewer dots than streams where the dots were different colors. This underestimation effect for identical visual items disappeared when the presentation rate was slower (Experiment 1) or the visual display was static (Experiment 2). In Experiments 3 and 4, in addition to the numerosity judgment task, observers performed an attention-demanding task at fixation. Task difficulty influenced observers' precision in the numerosity judgment task, but the underestimation effect remained evident irrespective of task difficulty. These results suggest that identical or similar visual objects presented in succession might induce substitution among themselves, leading to an illusion that there are fewer items overall; taxing attentional resources does not eliminate this underestimation effect.
Static telescope aberration measurement using lucky imaging techniques
NASA Astrophysics Data System (ADS)
López-Marrero, Marcos; Rodríguez-Ramos, Luis Fernando; Marichal-Hernández, José Gil; Rodríguez-Ramos, José Manuel
2012-07-01
A procedure has been developed to compute static aberrations once the telescope PSF has been measured with the lucky imaging technique, using a nearby star close to the object of interest as the point source to probe the optical system. This PSF is iteratively turned into a phase map at the pupil using the Gerchberg-Saxton algorithm and then converted into the appropriate actuation information for a deformable mirror with a low actuator count but large stroke capability. The main advantage of this procedure is the capability of correcting static aberrations in the specific pointing direction, without the need for a wavefront sensor.
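The Gerchberg-Saxton phase-retrieval step can be illustrated in pure Python. This is a minimal 1D sketch, not the authors' implementation: it assumes a known pupil amplitude and a measured focal-plane amplitude, uses a naive DFT for clarity, and the names `dft` and `gerchberg_saxton` are illustrative.

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(N^2), for illustration only)."""
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def gerchberg_saxton(pupil_amp, focal_amp, iters=50):
    """Recover a pupil phase map consistent with the known pupil amplitude
    and the measured focal-plane (PSF) amplitude."""
    field = [a + 0j for a in pupil_amp]          # start with zero phase
    for _ in range(iters):
        focal = dft(field)
        # enforce the measured focal-plane magnitude, keep the phase
        focal = [m * cmath.exp(1j * cmath.phase(f))
                 for m, f in zip(focal_amp, focal)]
        field = dft(focal, inverse=True)
        # enforce the known pupil magnitude, keep the phase
        field = [a * cmath.exp(1j * cmath.phase(f))
                 for a, f in zip(pupil_amp, field)]
    return [cmath.phase(f) for f in field]       # estimated pupil phase map
```

In the procedure described above, the recovered phase map would then be mapped onto deformable-mirror actuator commands; that conversion is hardware-specific and is not sketched here.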
Khazendar, S; Sayasneh, A; Al-Assam, H; Du, H; Kaijser, J; Ferrara, L; Timmerman, D; Jassim, S; Bourne, T
2015-01-01
Preoperative characterisation of ovarian masses as benign or malignant is of paramount importance to optimise patient management. In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Transvaginal 2D B-mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross-validation with randomised sampling. The process was repeated 15 times, and in each round 100 images were randomly selected. The SVM classified the original untreated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance improved significantly, to an average accuracy of 0.77 (95% CI: 0.75-0.79), when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19; p < 0.0001, two-tailed t test). We have shown that an SVM can classify static 2D B-mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture-related LBP features extracted from the images are considered.
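The texture features at the core of this model can be illustrated with a minimal pure-Python Local Binary Pattern operator. The sketch below covers only the basic 3 × 3 LBP code and its histogram, not the paper's pre-processing, 2 × 2 block split, or SVM training; the function names are hypothetical.

```python
def lbp_code(img, r, c):
    """8-bit LBP code for interior pixel (r, c): each of the 8 neighbors
    that is >= the center intensity sets one bit, clockwise from top-left."""
    center = img[r][c]
    neighbors = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                 img[r][c+1], img[r+1][c+1], img[r+1][c],
                 img[r+1][c-1], img[r][c-1]]
    code = 0
    for i, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << i
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels; such
    histograms (per block) are what an SVM would be trained on."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

A flat region produces only the all-ones code (every neighbor equals the center), while an isolated bright pixel produces code 0, which is why LBP histograms capture local texture.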
In Vivo Multiphoton Microscopy for Investigating Biomechanical Properties of Human Skin.
Liang, Xing; Graf, Benedikt W; Boppart, Stephen A
2011-06-01
The biomechanical properties of living cells depend on their molecular building blocks, and are important for maintaining structure and function in cells, the extracellular matrix, and tissues. These biomechanical properties and forces also shape and modify the cellular and extracellular structures under stress. While many studies have investigated the biomechanics of single cells or small populations of cells in culture, or the properties of organs and tissues, few studies have investigated the biomechanics of complex cell populations in vivo. With the use of advanced multiphoton microscopy to visualize in vivo cell populations in human skin, the biomechanical properties are investigated in a depth-dependent manner in the stratum corneum and epidermis using quasi-static mechanical deformations. A 2D elastic registration algorithm was used to analyze the images before and after deformation to determine displacements in different skin layers. In this feasibility study, the images and results from one human subject demonstrate the potential of the technique for revealing differences in elastic properties between the stratum corneum and the rest of the epidermis. This interrogational imaging methodology has the potential to enable a wide range of investigations for understanding how the biomechanical properties of in vivo cell populations influence function in health and disease.
Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.
Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael
2016-11-01
To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.
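The key idea of treating each sensor row as a high-frequency 1D measurement can be sketched with a simple timing model. This is an assumption-laden illustration (uniform row readout over the frame period, hypothetical function names), not the authors' tracker.

```python
def row_timestamps(frame_start, frame_rate_hz, num_rows):
    """Capture time of each image row, assuming rows are read out evenly
    across the frame period (a simplifying assumption)."""
    row_dt = (1.0 / frame_rate_hz) / num_rows
    return [frame_start + r * row_dt for r in range(num_rows)]

def effective_tracking_rate(frame_rate_hz, num_rows, rows_per_estimate=1):
    """Pose-update rate when every block of rows yields one 6-DOF estimate:
    the rolling shutter turns one frame into num_rows measurements."""
    return frame_rate_hz * num_rows / rows_per_estimate
```

For example, a 120 Hz camera with 480 rows already provides 57,600 row measurements per second per camera, which is how row-wise tracking can reach kilohertz-scale update rates far above the nominal frame rate.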
NASA Astrophysics Data System (ADS)
Yarden, Hagit; Yarden, Anat
2010-05-01
The importance of biotechnology education at the high-school level has been recognized in a number of international curriculum frameworks around the world. One of the most problematic issues in learning biotechnology has been found to be the biotechnological methods involved. Here, we examine the unique contribution of an animation of the polymerase chain reaction (PCR) in promoting conceptual learning of the biotechnological method among 12th-grade biology majors. All of the students learned about the PCR using still images ( n = 83) or the animation ( n = 90). A significant advantage to the animation treatment was identified following learning. Students’ prior content knowledge was found to be an important factor for students who learned PCR using still images, serving as an obstacle to learning the PCR method in the case of low prior knowledge. Through analysing students’ discourse, using the framework of the conceptual status analysis, we found that students who learned about PCR using still images faced difficulties in understanding some mechanistic aspects of the method. On the other hand, using the animation gave the students an advantage in understanding those aspects.
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual Words (BoWs) representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
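The notion of visual phrases as frequently co-occurring visual word pairs reduces to a counting problem. The sketch below counts pair co-occurrence per image and ignores the spatial constraints and descriptiveness scoring used in the paper; `mine_visual_phrases` is a hypothetical name.

```python
from collections import Counter
from itertools import combinations

def mine_visual_phrases(images, min_count=2):
    """Return visual-word pairs that co-occur in at least `min_count`
    images. `images` is a list of visual-word ID lists, one per image;
    a pair is counted once per image it appears in (a simplification of
    the paper's spatially constrained co-occurrence)."""
    pair_counts = Counter()
    for words in images:
        for a, b in combinations(sorted(set(words)), 2):
            pair_counts[(a, b)] += 1
    return {pair for pair, n in pair_counts.items() if n >= min_count}
```

In a real pipeline the word IDs would come from quantized local descriptors, and surviving pairs would additionally be scored by how well they discriminate object or scene categories.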
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion B.
2011-01-01
Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions as to how some of the lost visual information might be presented in displays are offered. Many of the cues considered involve visual motion, though some important static cues are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Hun Bok; Kabilan, Senthil; Carson, James P.
2014-08-07
Composite Portland cement-basalt caprock cores with fractures, as well as neat Portland cement columns, were prepared to understand the geochemical and geomechanical effects on the integrity of wellbores with defects during geologic carbon sequestration. The samples were reacted with CO2-saturated groundwater at 50 °C and 10 MPa for 3 months under static conditions, while one cement-basalt core was subjected to mechanical stress at 2.7 MPa before the CO2 reaction. Micro-XRD and SEM-EDS data collected along the cement-basalt interface after the 3-month reaction with CO2-saturated groundwater indicate that carbonation of the cement matrix was extensive, with precipitation of calcite, aragonite, and vaterite, whereas the alteration of the basalt caprock was minor. X-ray microtomography (XMT) provided three-dimensional (3-D) visualization of the opening and interconnection of cement fractures due to mechanical stress. Computational fluid dynamics (CFD) modeling further revealed that this stress led to an increase in fluid flow and hence permeability. After the CO2 reaction, XMT images showed that calcium carbonate precipitation occurred extensively within the fractures in the cement matrix, but only partially along the fracture located at the cement-basalt interface. The 3-D visualization and CFD modeling also showed that the precipitation of calcium carbonate within the cement fractures after the CO2 reaction resulted in the disconnection of cement fractures and a permeability decrease. The permeability calculated from CFD modeling was in agreement with the experimentally determined permeability. This study demonstrates that XMT imaging coupled with CFD modeling is a powerful tool to visualize and quantify fracture evolution and permeability change in geologic materials and to predict their behavior during geologic carbon sequestration or hydraulic fracturing for shale gas production and enhanced geothermal systems.
Short-Term Memory Trace in Rapidly Adapting Synapses of Inferior Temporal Cortex
Sugase-Miyamoto, Yasuko; Liu, Zheng; Wiener, Matthew C.; Optican, Lance M.; Richmond, Barry J.
2008-01-01
Visual short-term memory tasks depend upon both the inferior temporal cortex (ITC) and the prefrontal cortex (PFC). Activity in some neurons persists after the first (sample) stimulus is shown. This delay-period activity has been proposed as an important mechanism for working memory. In ITC neurons, intervening (nonmatching) stimuli wipe out the delay-period activity; hence, the role of ITC in memory must depend upon a different mechanism. Here, we look for a possible mechanism by contrasting memory effects in two architectonically different parts of ITC: area TE and the perirhinal cortex. We found that a large proportion (80%) of stimulus-selective neurons in area TE of macaque ITCs exhibit a memory effect during the stimulus interval. During a sequential delayed matching-to-sample task (DMS), the noise in the neuronal response to the test image was correlated with the noise in the neuronal response to the sample image. Neurons in perirhinal cortex did not show this correlation. These results led us to hypothesize that area TE contributes to short-term memory by acting as a matched filter. When the sample image appears, each TE neuron captures a static copy of its inputs by rapidly adjusting its synaptic weights to match the strength of their individual inputs. Input signals from subsequent images are multiplied by those synaptic weights, thereby computing a measure of the correlation between the past and present inputs. The total activity in area TE is sufficient to quantify the similarity between the two images. This matched filter theory provides an explanation of what is remembered, where the trace is stored, and how comparison is done across time, all without requiring delay period activity. Simulations of a matched filter model match the experimental results, suggesting that area TE neurons store a synaptic memory trace during short-term visual memory. PMID:18464917
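The proposed matched-filter mechanism reduces to a simple computation: at sample onset the synaptic weights copy the input pattern, and the response to any later stimulus is the dot product of its inputs with those stored weights, i.e. a correlation between past and present inputs. A minimal sketch with hypothetical function names:

```python
def store_sample(sample_inputs):
    """Sample onset: each synapse copies its input strength (the rapid
    synaptic adjustment hypothesized for area TE neurons)."""
    return list(sample_inputs)  # weights := sample input pattern

def matched_filter_response(weights, test_inputs):
    """Response to a test stimulus: inputs multiplied by the stored
    weights and summed, measuring similarity to the stored sample."""
    return sum(w * t for w, t in zip(weights, test_inputs))
```

Under this scheme the total population activity is largest when the test matches the sample, so a match/nonmatch decision needs no delay-period activity, consistent with the theory described above.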
Acute Zonal Cone Photoreceptor Outer Segment Loss.
Aleman, Tomas S; Sandhu, Harpal S; Serrano, Leona W; Traband, Anastasia; Lau, Marisa K; Adamus, Grazyna; Avery, Robert A
2017-05-01
The diagnostic path presented narrows down the cause of acute vision loss to the cone photoreceptor outer segment and will refocus the search for the cause of similar currently idiopathic conditions. To describe the structural and functional associations found in a patient with acute zonal occult photoreceptor loss. A case report of an adolescent boy with acute visual field loss despite a normal fundus examination performed at a university teaching hospital. Results of a complete ophthalmic examination, full-field flash electroretinography (ERG) and multifocal ERG, light-adapted achromatic and 2-color dark-adapted perimetry, and microperimetry. Imaging was performed with spectral-domain optical coherence tomography (SD-OCT), near-infrared (NIR) and short-wavelength (SW) fundus autofluorescence (FAF), and NIR reflectance (REF). The patient was evaluated within a week of the onset of a scotoma in the nasal field of his left eye. Visual acuity was 20/20 OU, and color vision was normal in both eyes. Results of the fundus examination and of SW-FAF and NIR-FAF imaging were normal in both eyes, whereas NIR-REF imaging showed a region of hyporeflectance temporal to the fovea that corresponded with a dense relative scotoma noted on light-adapted static perimetry in the left eye. Loss in the photoreceptor outer segment detected by SD-OCT co-localized with an area of dense cone dysfunction detected on light-adapted perimetry and multifocal ERG but with near-normal rod-mediated vision according to results of 2-color dark-adapted perimetry. Full-field flash ERG findings were normal in both eyes. The outer nuclear layer and inner retinal thicknesses were normal. Localized, isolated cone dysfunction may represent the earliest photoreceptor abnormality or a distinct entity within the acute zonal occult outer retinopathy complex. 
Acute zonal occult outer retinopathy should be considered in patients with acute vision loss and abnormalities on NIR-REF imaging, especially if multimodal imaging supports an intact retinal pigment epithelium and inner retina but an abnormal photoreceptor outer segment.
Learning rational temporal eye movement strategies.
Hoppe, David; Rothkopf, Constantin A
2016-07-19
During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narashimha S.
2013-01-01
Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to time-consuming data ordering and coding for applications that require both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles, with subsequent client-side data manipulation and map color rendering. The approach relies on storing data using the lossless-compression Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modification, as well as spatial subsetting with physical-parameter range filtering. The method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. One additional benefit is that multiple levels of averaging are provided, owing to the need to generate map tiles at varying resolutions for the various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
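The core trick of storing numerical data in map tiles amounts to quantizing each value into the 24 bits of an RGB pixel so the client can decode it back after the PNG is delivered. A minimal sketch of such a scheme, assuming linear quantization over a fixed value range (the actual encoding used by the service may differ):

```python
def value_to_rgb(value, vmin, vmax):
    """Quantize a physical value (e.g. elevation in meters) into a 24-bit
    integer split across the R, G, B bytes of a PNG pixel."""
    scale = (1 << 24) - 1
    q = round((value - vmin) / (vmax - vmin) * scale)
    return (q >> 16) & 0xFF, (q >> 8) & 0xFF, q & 0xFF

def rgb_to_value(r, g, b, vmin, vmax):
    """Client-side decode: rebuild the physical value from the pixel
    bytes, enabling on-the-fly recoloring and range filtering."""
    q = (r << 16) | (g << 8) | b
    return vmin + q / ((1 << 24) - 1) * (vmax - vmin)
```

Because PNG compression is lossless, the bytes survive transport intact, and 24 bits over a 9.5 km elevation range give a quantization step well under a millimeter.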
Gravity in the Brain as a Reference for Space and Time Perception.
Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka
2015-01-01
Moving and interacting with the environment require a reference for orientation and a scale for calibration in space and time. There is a wide variety of environmental clues and calibrated frames at different locales, but the reference of gravity is ubiquitous on Earth. The pull of gravity on static objects provides a plummet which, together with the horizontal plane, defines a three-dimensional Cartesian frame for visual images. On the other hand, the gravitational acceleration of falling objects can provide a time-stamp on events, because the motion duration of an object accelerated by gravity over a given path is fixed. Indeed, since ancient times, man has been using plumb bobs for spatial surveying, and water clocks or pendulum clocks for time keeping. Here we review behavioral evidence in favor of the hypothesis that the brain is endowed with mechanisms that exploit the presence of gravity to estimate the spatial orientation and the passage of time. Several visual and non-visual (vestibular, haptic, visceral) cues are merged to estimate the orientation of the visual vertical. However, the relative weight of each cue is not fixed, but depends on the specific task. Next, we show that an internal model of the effects of gravity is combined with multisensory signals to time the interception of falling objects, to time the passage through spatial landmarks during virtual navigation, to assess the duration of a gravitational motion, and to judge the naturalness of periodic motion under gravity.
Precipitation-Static-Reduction Research
1943-03-31
Study of the effects of flame length, flame spacing, and burner spacing on B shows that there... The following terms are used: Flame length: the visual length of the flame from the burner tip to the flame tip when examined in a darkened room against a black background... Positive and Negative Flames: the use of the second flame-conduction coefficient, B, considerably facilitates the study of the effect of flame length and spacing
Beaked Whale Habitat Characterization and Prediction
2005-09-30
trying to develop a better understanding of beaked whale distribution. For long-range planning, the static habitat prediction maps provide a broad... whale presence ranged from 79.3% to 100.0% for the static models and 85.7% to 94.5% for the dynamic models. Beaked whale habitat prediction has been... submerged for such long periods of time that there is a high probability that they will never surface within the visual range of observers aboard a
Patla, Aftab E; Greig, Michael
In the two experiments discussed in this paper, we quantified obstacle avoidance performance carried out open loop (without vision) under different initial visual sampling conditions and compared it to the full vision condition. The initial visual sampling conditions were: static vision (SV), vision during forward walking for three steps and stopping (FW), vision during forward walking for three steps and not stopping (FW-NS), and vision during backward walking for three steps and stopping (BW). In Experiment 1, we compared performance during SV, FW and BW with the full vision condition, while in the second experiment we compared performance during the FW and FW-NS conditions. The questions we wanted to address were: Is ecologically valid dynamic visual sampling of the environment superior to static visual sampling for an open-loop obstacle avoidance task? What are the reasons for failure in performing an open-loop obstacle avoidance task? The results showed that, irrespective of the initial visual sampling condition, when open-loop control was initiated from a standing posture the success rate was only approximately 50%. The main reason for the high failure rates was not inappropriate limb elevation, but incorrect foot placement before the obstacle. The second experiment showed that it is not the nature of visual sampling per se that influences success rate, but the fact that the open-loop obstacle avoidance task is initiated from a standing posture. The results of these two experiments clearly demonstrate the importance of on-line visual information for adaptive human locomotion.
Results from the balance rehabilitation unit in benign paroxysmal positional vertigo.
Kasse, Cristiane Akemi; Santana, Graziela Gaspar; Scharlach, Renata Coelho; Gazzola, Juliana Maria; Branco, Fátima Cristina Barreiro; Doná, Flávia
2010-01-01
Posturography is a useful new tool to study the influence of vestibular diseases on balance. To compare the results from the Balance Rehabilitation Unit (BRU) static posturography in elderly patients with Benign Paroxysmal Positional Vertigo (BPPV), before and after Epley's maneuver. A prospective study of 20 elderly patients with a diagnosis of BPPV. The patients underwent static posturography, and the limit of stability (LE) and ellipse area were measured. We also applied the Dizziness Handicap Inventory (DHI) questionnaire to study treatment effectiveness. 80% were females, with a mean age of 68.15 years. After the maneuver, the LE increased significantly (p=0.001). The elliptical area of somatosensory, visual and vestibular conflicts (situations 2, 7, 8 and 9) in the BRU and the DHI scores decreased significantly (p<0.05) after treatment. The study suggests that elderly patients with BPPV may present static postural control impairment and that the maneuver is effective for the remission of symptoms, an increase in stability and an improvement in postural control in situations of visual, somatosensory and vestibular conflicts.
Optic nerve dysfunction during gravity inversion. Visual field abnormalities.
Sanborn, G E; Friberg, T R; Allen, R
1987-06-01
Inversion in a head-down position (gravity inversion) results in an intraocular pressure of 35 to 40 mm Hg in normal subjects. We used computerized static perimetry to measure the visual fields of normal subjects during gravity inversion. There were no visual field changes in the central 6 degrees of the visual field compared with the baseline (preinversion) values. However, when the central 30 degrees of the visual field was tested, reversible visual field defects were found in 11 of 19 eyes. We believe that the substantial elevation of intraocular pressure during gravity inversion may pose potential risks to the eyes, and we recommend that inversion for extended periods of time be avoided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza, Paul Michael
2016-08-31
The project goals are to develop applications to automate MCNP criticality benchmark execution; create a dataset containing static benchmark information; combine MCNP output with benchmark information; and fit and visually represent the data.
The effect of animation on learning action symbols by individuals with intellectual disabilities.
Fujisawa, Kazuko; Inoue, Tomoyoshi; Yamana, Yuko; Hayashi, Humirhiro
2011-03-01
The purpose of the present study was to investigate whether participants with intellectual impairments could benefit from the movement associated with animated pictures while they were learning symbol names. Sixteen school students, whose linguistic-developmental age ranged from 38 to 91 months, participated in the experiment. They were taught 16 static visual symbols and the corresponding action words (naming task) in two sessions conducted one week apart. In the experimental condition, animation was employed to facilitate comprehension, whereas no animation was used in the control condition. Enhancement of learning was shown in the experimental condition, suggesting that the participants benefited from animated symbols. Furthermore, it was found that the lower the linguistic developmental age, the more effective the animated cue was in learning static visual symbols.
Red to Green or Fast to Slow? Infants' Visual Working Memory for "Just Salient Differences"
ERIC Educational Resources Information Center
Kaldy, Zsuzsa; Blaser, Erik
2013-01-01
In this study, 6-month-old infants' visual working memory for a static feature (color) and a dynamic feature (rotational motion) was compared. Comparing infants' use of different features can only be done properly if experimental manipulations to those features are equally salient (Kaldy & Blaser, 2009; Kaldy, Blaser, & Leslie,…
Flow Visualization of Dynamic Stall on an Oscillating Airfoil
1989-09-01
Keywords: dynamic stall; dynamic lift; unsteady lift; helicopter retreating blade stall; oscillating airfoil; flow visualization; Schlieren method. Thesis submitted in partial fulfillment of the degree of Master of Science in Aeronautical Engineering, Naval Postgraduate School, September 1989. ...lift and moment behavior is quite different from the static stall associated with fixed-wing airfoils. Helicopter retreating blade stall is a dynamic
Williams, Isla M; Schofield, Peter; Khade, Neha; Abel, Larry A
2016-12-01
Multiple sclerosis (MS) frequently causes impairment of cognitive function. We compared patients with MS with controls on divided visual attention tasks. The MS patients' and controls' stare optokinetic nystagmus (OKN) was recorded in response to a 24°/s full-field stimulus. Suppression of the OKN response, judged by the gain, was measured during tasks dividing visual attention between the fixation target and a second stimulus, central or peripheral, static or dynamic. All participants completed the Audio Recorded Cognitive Screen. MS patients had lower gain on the baseline stare OKN. OKN suppression in divided attention tasks was the same in MS patients as in controls, but in both groups it was better maintained in static than in dynamic tasks. In dynamic tasks only, older age was associated with less effective OKN suppression. MS patients had lower scores on a timed attention task and on memory. There was no significant correlation between attention or memory and eye movement parameters. Attention, a complex multifaceted construct, has different neural combinations for each task. Despite impairments on some measures of attention, MS patients completed the divided visual attention tasks normally. Copyright © 2016 Elsevier Ltd. All rights reserved.
Real-Time Visualization of an HPF-based CFD Simulation
NASA Technical Reports Server (NTRS)
Kremenetsky, Mark; Vaziri, Arsi; Haimes, Robert; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Current time-dependent CFD simulations produce very large multi-dimensional data sets at each time step. The visual analysis of computational results is traditionally performed by post-processing the static data on graphics workstations. We present results from an alternate approach in which we analyze the simulation data in situ on each processing node at the time of simulation. The locally analyzed results, usually more economical and in a reduced form, are then combined and sent back for visualization on a graphics workstation.
Medial tibial pain: a dynamic contrast-enhanced MRI study.
Mattila, K T; Komu, M E; Dahlström, S; Koskinen, S K; Heikkilä, J
1999-09-01
The purpose of this study was to compare the sensitivity of different magnetic resonance imaging (MRI) sequences in depicting periosteal edema in patients with medial tibial pain. Additionally, we evaluated the ability of dynamic contrast-enhanced imaging (DCES) to depict possible temporal alterations in muscular perfusion within compartments of the leg. Fifteen patients with medial tibial pain were examined with MRI. T1-weighted, T2-weighted, and proton density axial images and dynamic and static-phase post-contrast images were compared for their ability to depict periosteal edema. STIR was used in seven cases to depict bone marrow edema. Images were analyzed to detect signs of compartment edema. Region-of-interest measurements in compartments were performed during DCES and compared with controls. In detecting periosteal edema, post-contrast T1-weighted images were better than spin echo T2-weighted and proton density images or STIR images, but STIR depicted the bone marrow edema best. DCES best demonstrated the gradually enhancing periostitis. Four subjects with severe periosteal edema had visually detectable pathologic enhancement during DCES in the deep posterior compartment of the leg. Percentage enhancement in the deep posterior compartment of the leg was greater in patients than in controls. The fast enhancement phase in the deep posterior compartment began slightly later in patients than in controls, but it continued longer. We believe that periosteal edema in bone stress reaction can cause impairment of venous flow in the deep posterior compartment. MRI can depict both these conditions. In patients with medial tibial pain, the MR imaging protocol should include axial STIR images (to depict bone pathology), T1-weighted axial pre- and post-contrast images, and dynamic contrast-enhanced imaging to show periosteal edema and abnormal contrast enhancement within a compartment.
Visual Pattern Analysis in Histopathology Images Using Bag of Features
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Caicedo, Juan C.; González, Fabio A.
This paper presents a framework to analyse visual patterns in a collection of medical images in a two-stage procedure. First, a set of representative visual patterns from the image collection is obtained by constructing a visual-word dictionary under a bag-of-features approach. Second, an analysis of the relationships between visual patterns and semantic concepts in the image collection is performed. The most important visual patterns for each semantic concept are identified using correlation analysis. A matrix visualization of the structure and organization of the image collection is generated using a cluster analysis. The experimental evaluation was conducted on a histopathology image collection, and results showed clear relationships between visual patterns and semantic concepts that, in addition, are easy to interpret and understand.
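The first stage of the bag-of-features pipeline described above (cluster local descriptors into a visual-word dictionary, then describe each image as a histogram of word assignments) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names and the use of plain k-means for dictionary construction are assumptions:

```python
import numpy as np

def kmeans(descriptors, k, iters=20, seed=0):
    """Tiny k-means to build a visual-word dictionary (codebook)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = descriptors[labels == j].mean(0)
    return centers

def bof_histogram(descriptors, centers):
    """Represent one image as a normalized histogram of visual words."""
    d = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

Each image's histogram can then be correlated against per-concept labels to identify the most important visual words, as the second stage describes.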
Images of the laser entrance hole from the static x-ray imager at NIF.
Schneider, M B; Jones, O S; Meezan, N B; Milovich, J L; Town, R P; Alvarez, S S; Beeler, R G; Bradley, D K; Celeste, J R; Dixit, S N; Edwards, M J; Haugh, M J; Kalantar, D H; Kline, J L; Kyrala, G A; Landen, O L; MacGowan, B J; Michel, P; Moody, J D; Oberhelman, S K; Piston, K W; Pivovaroff, M J; Suter, L J; Teruya, A T; Thomas, C A; Vernon, S P; Warrick, A L; Widmann, K; Wood, R D; Young, B K
2010-10-01
The static x-ray imager at the National Ignition Facility is a pinhole camera using a CCD detector to obtain images of Hohlraum wall x-ray drive illumination patterns seen through the laser entrance hole (LEH). Carefully chosen filters, combined with the CCD response, allow recording images in the x-ray range of 3-5 keV with 60 μm spatial resolution. The routines used to obtain the apparent size of the backlit LEH and the location and intensity of beam spots are discussed and compared to predictions. A new soft x-ray channel centered at 870 eV (near the x-ray peak of a 300 eV temperature ignition Hohlraum) is discussed.
Kinect-based sign language recognition of static and dynamic hand movements
NASA Astrophysics Data System (ADS)
Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.
2017-02-01
A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and a Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers also discuss factors they encountered that caused misclassification of some signs.
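Normalized correlation template matching of the kind used above can be sketched in a few lines. This is a hedged illustration, not the study's MATLAB code; `ncc` and `best_match` are hypothetical names:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized correlation between two equal-sized arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(image, template):
    """Slide the template over the image; return (row, col, score) of the best match."""
    th, tw = template.shape
    ih, iw = image.shape
    best = (-1, -1, -2.0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best[2]:
                best = (r, c, s)
    return best
```

A captured hand image would be compared against every stored template, and the sign with the highest correlation score taken as the recognized letter.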
Igase, Keiji; Kumon, Yoshiaki; Matsubara, Ichiro; Arai, Masamori; Goishi, Junji; Watanabe, Hideaki; Ohnishi, Takanori; Sadamoto, Kazuhiko
2015-01-01
We evaluated the utility of 3-dimensional (3-D) ultrasound imaging for assessment of carotid artery stenosis, as compared with similar assessment via magnetic resonance angiography (MRA). Subjects comprised 58 patients with carotid stenosis who underwent both 3-D ultrasound imaging and MRA. We studied whether abnormal findings detected by ultrasound imaging could be diagnosed using MRA. Ultrasound images were generated using Voluson 730 Expert and Voluson E8. The degree of stenosis was mild in 17, moderate in 16, and severe in 25 patients, according to ultrasound imaging. Stenosis could not be recognized using MRA in 4 of 17 patients diagnosed with mild stenosis using ultrasound imaging. Ultrasound imaging showed ulceration in 13 patients and mobile plaque in 6 patients. When assessing these patients, MRA showed ulceration in only 2 of 13 patients and did not detect mobile plaque in any of these 6 patients. Static 3-D B mode images demonstrated distributions of plaque, ulceration, and mobile plaque, and static 3-D flow images showed flow configuration as a total structure. Real-time 3-D B mode images demonstrated plaque and vessel movement. Carotid artery stenting was not selected for patients diagnosed with ulceration or mobile plaque. Ultrasound imaging was necessary to detect mild stenosis, ulcerated plaque, or mobile plaque in comparison with MRA, and 3-D ultrasound imaging was useful to recognize carotid stenosis and flow pattern as a total structure by static and real-time 3-D demonstration. This information may contribute to surgical planning. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Experimental data of the static behavior of reinforced concrete beams at room and low temperature.
Mirzazadeh, M Mehdi; Noël, Martin; Green, Mark F
2016-06-01
This article provides data on the static behavior of reinforced concrete at room and low temperature including, strength, ductility, and crack widths of the reinforced concrete. The experimental data on the application of digital image correlation (DIC) or particle image velocimetry (PIV) in measuring crack widths and the accuracy and precision of DIC/PIV method with temperature variations when is used for measuring strains is provided as well.
Visual Equivalence and Amodal Completion in Cuttlefish
Lin, I-Rong; Chiao, Chuan-Chin
2017-01-01
Modern cephalopods are notably the most intelligent invertebrates, and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images were used, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat reduced-size images and sketches of the training images as visually equivalent. Cuttlefish were also capable of recognizing partially occluded versions of the training image. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects when visual information is partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods. PMID:28220075
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne
A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64 bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets.
Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
RVA: A Plugin for ParaView 3.14
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-04
RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets.
Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
Khazendar, S.; Sayasneh, A.; Al-Assam, H.; Du, H.; Kaijser, J.; Ferrara, L.; Timmerman, D.; Jassim, S.; Bourne, T.
2015-01-01
Introduction: Preoperative characterisation of ovarian masses as benign or malignant is of paramount importance to optimise patient management. Objectives: In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Materials and methods: Transvaginal 2D B mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern Histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross validation with randomised sampling. The process was repeated 15 times, and in each round 100 images were randomly selected. Results: The SVM classified the original non-treated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19; p < 0.0001, two-tailed t-test). Conclusion: We have shown that an SVM can classify static 2D B mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture-related LBP features extracted from the images are considered. PMID:25897367
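The Local Binary Pattern histogram features extracted from 2 × 2 blocks of each image can be approximated as below. This is a minimal sketch assuming the basic 8-neighbour LBP operator; the pre-processing, enhancement, and SVM stages are omitted, and the function names are illustrative rather than the authors':

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern codes for the interior pixels."""
    c = gray[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dr, dc) in enumerate(offsets):
        n = gray[1 + dr:gray.shape[0] - 1 + dr, 1 + dc:gray.shape[1] - 1 + dc]
        codes |= (n >= c).astype(np.int32) << bit
    return codes

def lbp_block_histograms(gray, blocks=2):
    """Concatenate normalized per-block LBP histograms (e.g. a 2 x 2 grid)."""
    codes = lbp_image(gray)
    h, w = codes.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            blk = codes[i * h // blocks:(i + 1) * h // blocks,
                        j * w // blocks:(j + 1) * w // blocks]
            hist = np.bincount(blk.ravel(), minlength=256).astype(float)
            feats.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(feats)
```

The concatenated 4 × 256 feature vector would then be fed to the SVM classifier.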
Residual perception of biological motion in cortical blindness.
Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto
2016-12-01
From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage to high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two-alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion when top-down information is provided is important for at least two reasons. First, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Second, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision.
Copyright © 2016 Elsevier Ltd. All rights reserved.
Tee, James J L; Yang, Yesa; Kalitzeos, Angelos; Webster, Andrew; Bainbridge, James; Weleber, Richard G; Michaelides, Michel
2018-05-01
To characterize bilateral visual function, interocular variability and progression by using static perimetry-derived volumetric and pointwise metrics in subjects with retinitis pigmentosa associated with mutations in the retinitis pigmentosa GTPase regulator (RPGR) gene. This was a prospective longitudinal observational study of 47 genetically confirmed subjects. Visual function was assessed with ETDRS and Pelli-Robson charts, and with Octopus 900 static perimetry using a customized, radially oriented 185-point grid. Three-dimensional hill-of-vision topographic models were produced and interrogated with the Visual Field Modeling and Analysis software to obtain three volumetric metrics: VTotal, V30, and V5. These were analyzed together with Octopus mean sensitivity values. Interocular differences were assessed with the Bland-Altman method. Metric-specific exponential decline rates were calculated. Baseline symmetry was demonstrated by relative interocular difference values of 1% for VTotal and 8% for V30. Degree of symmetry varied between subjects and was quantified with the subject percentage interocular difference (SPID). SPID was 16% for VTotal and 17% for V30. Interocular symmetry in progression was greatest when quantified by VTotal and V30, with 73% and 64% of subjects possessing interocular rate differences smaller in magnitude than the respective annual progression rates. Functional decline was evident with increasing age. An overall annual exponential decline of 6% was evident with both VTotal and V30. In general, good interocular symmetry exists; however, there was variation both between subjects and with the use of various metrics. Our findings will guide patient selection and design of RPGR treatment trials, and provide clinicians with specific prognostic information to offer patients affected by this condition.
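An annual exponential decline rate of the kind reported (6% per year for VTotal and V30) is commonly obtained from a log-linear fit of the metric against age. The following is a hedged sketch, not the study's analysis code; `annual_decline_rate` is a hypothetical helper:

```python
import numpy as np

def annual_decline_rate(ages, metric):
    """Fit metric = A * exp(b * age) by log-linear least squares;
    return the annual fractional decline 1 - e^b."""
    b, log_a = np.polyfit(ages, np.log(metric), 1)
    return 1.0 - np.exp(b)

# Synthetic example: a metric losing exactly 6% per year
ages = np.arange(20, 40, dtype=float)
metric = 100.0 * (0.94 ** ages)
rate = annual_decline_rate(ages, metric)  # recovers ~0.06
```

With noisy longitudinal data, the same fit would typically be done per subject (or with a mixed-effects model) rather than on pooled cross-sectional values.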
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
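The indirect route described above (Patlak analysis applied pixel-by-pixel to reconstructed time-activity curves) reduces, for each TAC, to a linear fit of normalized tissue activity against normalized integrated plasma activity. Below is a minimal NumPy sketch under the standard Patlak model; the function name and the trapezoidal integration are assumptions, not details from the paper:

```python
import numpy as np

def patlak_slope(tissue_tac, plasma_tac, times, t_star_idx):
    """Indirect Patlak analysis for one time-activity curve:
    y = C_t(t) / C_p(t),  x = (integral of C_p from 0 to t) / C_p(t).
    After time t* the plot is linear; the slope estimates the influx rate Ki
    and the intercept the distribution volume V."""
    # cumulative trapezoidal integral of the plasma input function
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (plasma_tac[1:] + plasma_tac[:-1]) * np.diff(times))))
    x = integral[t_star_idx:] / plasma_tac[t_star_idx:]
    y = tissue_tac[t_star_idx:] / plasma_tac[t_star_idx:]
    ki, intercept = np.polyfit(x, y, 1)
    return ki, intercept
```

In the direct method, by contrast, the Patlak model is folded into the reconstruction itself, so Ki images are estimated straight from the sinogram data rather than from fitted TACs.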
NASA Astrophysics Data System (ADS)
García Juan, David; Delattre, Bénédicte M. A.; Trombella, Sara; Lynch, Sean; Becker, Matthias; Choi, Hon Fai; Ratib, Osman
2014-03-01
Musculoskeletal disorders (MSD) are becoming a major economic burden on healthcare in developed countries with aging populations. Classical methods used in clinical practice for muscle assessment, such as biopsy or EMG, are invasive and not sufficiently accurate for measuring impairments of muscular performance. Non-invasive imaging techniques can nowadays provide effective alternatives for static and dynamic assessment of muscle function. In this paper we present work aimed toward the development of a generic data structure for handling n-dimensional metabolic and anatomical data acquired from hybrid PET/MR scanners. Special static and dynamic protocols were developed for assessment of physical and functional images of individual muscles of the lower limb. In an initial stage of the project, a manual segmentation of selected muscles was performed on high-resolution 3D static images and subsequently interpolated to a full dynamic set of contours from selected 2D dynamic images across different levels of the leg. This results in a full set of 4D data of lower-limb muscles at rest and during exercise. These data can further be extended to 5D by adding metabolic data obtained from PET images. Our data structure and the corresponding image-processing extension allow for better evaluation of the large volumes of multidimensional imaging data that are acquired and processed to generate dynamic models of the moving lower limb and its muscular function.
Pavan, Andrea; Ghin, Filippo; Donato, Rita; Campana, Gianluca; Mather, George
2017-08-15
A long-held view of the visual system is that form and motion are independently analysed. However, there is physiological and psychophysical evidence of early interaction in the processing of form and motion. In this study, we used a combination of Glass patterns (GPs) and repetitive Transcranial Magnetic Stimulation (rTMS) to investigate in human observers the neural mechanisms underlying form-motion integration. GPs consist of randomly distributed dot pairs (dipoles) that induce the percept of an oriented stimulus. GPs can be either static or dynamic. Dynamic GPs have both a form component (i.e., orientation) and a non-directional motion component along the orientation axis. GPs were presented in two temporal intervals and observers were asked to discriminate the temporal interval containing the most coherent GP. rTMS was delivered over early visual areas (V1/V2) and over area V5/MT shortly after the presentation of the GP in each interval. The results showed that rTMS applied over early visual areas affected the perception of static GPs, whereas stimulation of area V5/MT did not affect observers' performance. In contrast, rTMS delivered over either V1/V2 or V5/MT strongly impaired the perception of dynamic GPs. These results suggest that early visual areas are involved in the processing of the spatial structure of GPs, and interfering with the extraction of the global spatial structure also affects the extraction of the motion component, possibly interfering with early form-motion integration. Visual area V5/MT, however, is likely to be involved only in the processing of the motion component of dynamic GPs. These results suggest that motion and form cues may interact as early as V1/V2. Copyright © 2017 Elsevier Inc. All rights reserved.
Bensoussan, Sandy; Cornil, Maude; Meunier-Salaün, Marie-Christine; Tallet, Céline
2016-01-01
Although animals rarely use only one sense to communicate, few studies have investigated the use of combinations of different signals between animals and humans. This study assessed for the first time the spontaneous reactions of piglets to human pointing gestures and voice in an object-choice task with a reward. Piglets (Sus scrofa domestica) mainly use auditory signals–individually or in combination with other signals—to communicate with their conspecifics. Their wide hearing range (42 Hz to 40.5 kHz) fits the range of human vocalisations (40 Hz to 1.5 kHz), which may induce sensitivity to the human voice. However, only their ability to use visual signals from humans, especially pointing gestures, has been assessed to date. The current study investigated the effects of signal type (visual, auditory and combined visual and auditory) and piglet experience on the piglets’ ability to locate a hidden food reward over successive tests. Piglets did not find the hidden reward at first presentation, regardless of the signal type given. However, they subsequently learned to use a combination of auditory and visual signals (human voice and static or dynamic pointing gestures) to successfully locate the reward in later tests. This learning process may result either from repeated presentations of the combination of static gestures and auditory signals over successive tests, or from transitioning from static to dynamic pointing gestures, again over successive tests. Furthermore, piglets increased their chance of locating the reward either if they did not go straight to a bowl after entering the test area or if they stared at the experimenter before visiting it. Piglets were not able to use the voice direction alone, indicating that a combination of signals (pointing and voice direction) is necessary. Improving our communication with animals requires adapting to their individual sensitivity to human-given signals. PMID:27792731
Modeling of the laser device for the stress therapy
NASA Astrophysics Data System (ADS)
Matveev, Nikolai V.; Shcheglov, Sergey A.; Romanova, Galina E.; Koneva, Tatiana A.
2017-05-01
There has been growing interest recently in drug-free methods of treating various conditions. Audiovisual therapy, for example, is used for stress relief, with health care and well-being as its main goals. In this approach, visual content is formed when laser radiation passes through optical media and elements, and the therapeutic effect is achieved through the varying colors and complicated structure of the picture produced by refraction, dispersion, diffraction and interference. As the light source we use three lasers with wavelengths of 445 nm, 520 nm and 640 nm and optical power up to 1 W. The beam is guided to the optical element that forms the final image on the dome surface. A dynamic image can be achieved either by rotating the optical element while the laser beam remains static or by scanning the beam across the surface of the element. Previous research has shown that the complexity of the image is connected to the therapeutic effect; the image was chosen experimentally in practice, and its evaluation was performed by calculating the fractal dimension of the produced image. In this work we model the optical image formed on the surface by the laser sources together with the optical elements. Modeling is performed in two stages: in the first stage we build a simple model taking into account basic geometrical effects and specify the optical models of the sources.
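The fractal-dimension evaluation mentioned in this abstract is commonly done by box counting; the following is a minimal illustrative sketch (the function, thresholds, and test image are assumptions, not the authors' code):

```python
import numpy as np

def box_counting_dimension(img, subdivisions=(2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary image.

    img: 2D boolean array (True = lit pixel).  For each box size, count
    boxes containing at least one lit pixel, then fit the slope of
    log(count) against log(1/size).
    """
    n = min(img.shape)
    sizes, counts = [], []
    for k in subdivisions:
        size = n // k
        if size < 1:
            continue
        count = 0
        for i in range(0, img.shape[0], size):
            for j in range(0, img.shape[1], size):
                if img[i:i + size, j:j + size].any():
                    count += 1
        sizes.append(size)
        counts.append(count)
    # slope of the log-log fit is the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# sanity check: a filled square should have dimension close to 2
square = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(square)
```

A space-filling pattern yields a dimension near 2, while a sparse line pattern yields a value near 1; richer (higher-dimension) images would correspond to more "complex" projected pictures.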
Evaluation of the deformation and corresponding dosimetric implications in prostate cancer treatment
NASA Astrophysics Data System (ADS)
Wen, Ning; Glide-Hurst, Carri; Nurushev, Teamour; Xing, Lei; Kim, Jinkoo; Zhong, Hualiang; Liu, Dezhi; Liu, Manju; Burmeister, Jay; Movsas, Benjamin; Chetty, Indrin J.
2012-09-01
The cone-beam computed tomography (CBCT) imaging modality is an integral component of image-guided adaptive radiation therapy (IGART), which uses patient-specific dynamic/temporal information for potential treatment plan modification. In this study, an offline process for the integral component IGART framework has been implemented that consists of deformable image registration (DIR) and its validation, dose reconstruction, dose accumulation and dose verification. This study compares the differences between planned and estimated delivered doses under an IGART framework of five patients undergoing prostate cancer radiation therapy. The dose calculation accuracy on CBCT was verified by measurements made in a Rando pelvic phantom. The accuracy of DIR on patient image sets was evaluated in three ways: landmark matching with fiducial markers, visual image evaluation and unbalanced energy (UE); UE has been previously demonstrated to be a feasible method for the validation of DIR accuracy at a voxel level. The dose calculated on each CBCT image set was reconstructed and accumulated over all fractions to reflect the ‘actual dose’ delivered to the patient. The deformably accumulated (delivered) plans were then compared to the original (static) plans to evaluate tumor and normal tissue dose discrepancies. The results support the utility of adaptive planning, which can be used to fully elucidate the dosimetric impact based on the simulated delivered dose to achieve the desired tumor control and normal tissue sparing, which may be of particular importance in the context of hypofractionated radiotherapy regimens.
Transpiration cooling in the locality of a transverse fuel jet for supersonic combustors
NASA Technical Reports Server (NTRS)
Northam, G. Burton; Capriotti, Diego P.; Byington, Carl S.
1990-01-01
The objective of the current work was to determine the feasibility of transpiration cooling for relief of the local heating rates in the region of a sonic, perpendicular, gaseous-hydrogen fuel jet. Experiments were conducted to determine the interaction between the cooling required and the flameholding limits of a transverse jet in a high-enthalpy, Mach 3 flow in both open-jet and direct-connect test modes. Pulsed shadowgraphs were used to illustrate the flow field, infrared thermal images indicated the surface temperatures, and the OH(-) emission of the flame was used to visualize the limits of combustion. Wall static pressures indicated the location of the combustion within the duct and were used to calculate the combustion efficiency. The results from both series of tests, at facility total temperatures of 1700 K and 2000 K, are presented.
Umile, Eric M; Sandel, M Elizabeth; Alavi, Abass; Terry, Charles M; Plotkin, Rosette C
2002-11-01
To determine whether patients with mild traumatic brain injury (TBI) and persistent postconcussive symptoms have evidence of temporal lobe injury on dynamic imaging. Case series. An academic medical center. Twenty patients with a clinical diagnosis of mild TBI and persistent postconcussive symptoms were referred for neuropsychologic evaluation and dynamic imaging. Fifteen (75%) had normal magnetic resonance imaging (MRI) and/or computed tomography (CT) scans at the time of injury. Neuropsychologic testing, positron-emission tomography (PET), and single-photon emission-computed tomography (SPECT). Temporal lobe findings on static imaging (MRI, CT) and dynamic imaging (PET, SPECT); neuropsychologic test findings on measures of verbal and visual memory. Testing documented neurobehavioral deficits in 19 patients (95%). Dynamic imaging documented abnormal findings in 18 patients (90%). Fifteen patients (75%) had temporal lobe abnormalities on PET and SPECT (primarily in medial temporal regions); abnormal findings were bilateral in 10 patients (50%) and unilateral in 5 (25%). Six patients (30%) had frontal abnormalities, and 8 (40%) had nonfrontotemporal abnormalities. Correlations between neuropsychologic testing and dynamic imaging could be established but not consistently across the whole group. Patients with mild TBI and persistent postconcussive symptoms have a high incidence of temporal lobe injury (presumably involving the hippocampus and related structures), which may explain the frequent finding of memory disorders in this population. The abnormal temporal lobe findings on PET and SPECT in humans may be analogous to the neuropathologic evidence of medial temporal injury provided by animal studies after mild TBI. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation
The fast and accurate 3D-face scanning technology based on laser triangle sensors
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin
2013-08-01
A laser triangle-scanning method and the structure of a 3D face measurement system are introduced. In the presented system, a line laser source was selected as the optical probe so that one line is scanned at a time, and a CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine-vision imaging method, and the triangle structure parameters were calibrated with finely spaced parallel wires. The CCD imaging part and the line laser indicator were mounted on a linear motor carriage that scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, a single CCD image sensor cannot capture the complete image of the laser line; in this system, two CCD image sensors were therefore placed symmetrically on either side of the laser indicator, so the structure effectively contains two laser triangulation units. Another novel design choice is that three laser indicators were arranged to reduce the scanning time, since it is difficult for a person to remain still for long. The 3D data were calculated after scanning, with further processing including 3D coordinate refinement, mesh calculation and surface display. Experiments show that the system has a simple structure and is fast and accurate: the scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
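The depth recovery behind laser triangulation can be sketched with the simplest configuration, in which the laser beam is parallel to the camera's optical axis at a fixed lateral baseline; the real system's calibrated geometry is more involved, so this is an illustrative assumption:

```python
def triangulate_depth(offset_px, focal_px, baseline_mm):
    """Depth from a laser-triangulation image offset.

    With the laser parallel to the optical axis at baseline b, a surface
    point at depth z projects to image offset x = f * b / z, so
    z = f * b / x.  All parameter values below are illustrative.
    """
    if offset_px <= 0:
        raise ValueError("image offset must be positive")
    return focal_px * baseline_mm / offset_px

# a spot imaged at 100 px offset, with f = 1000 px and b = 50 mm
z = triangulate_depth(100.0, 1000.0, 50.0)  # 500 mm
```

The inverse relation between offset and depth is why ledges (nose) and recesses (eyes) shift the laser line strongly, and why occlusion of the line motivates the paper's symmetric two-camera arrangement.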
A fully actuated robotic assistant for MRI-guided prostate biopsy and brachytherapy
NASA Astrophysics Data System (ADS)
Li, Gang; Su, Hao; Shang, Weijian; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fischer, Gregory S.
2013-03-01
Intra-operative medical imaging enables incorporation of human experience and intelligence in a controlled, closed-loop fashion. Magnetic resonance imaging (MRI) is an ideal modality for surgical guidance of diagnostic and therapeutic procedures, with its ability to perform high-resolution, real-time imaging with high soft tissue contrast and without ionizing radiation. However, most current image-guided approaches have access only to static pre-operative images, which cannot provide updated information during a surgical procedure. The high magnetic field, electrical interference, and limited access of closed-bore MRI pose great challenges to developing robotic systems that can operate inside a diagnostic high-field MRI while obtaining interactively updated MR images. To overcome these limitations, we are developing a piezoelectrically actuated robotic assistant for actuated percutaneous prostate interventions under real-time MRI guidance. Utilizing a modular design, the system enables a coherent and straightforward workflow for various percutaneous interventions, including prostate biopsy sampling and brachytherapy seed placement, using various needle driver configurations. The unified workflow comprises: 1) system hardware and software initialization, 2) fiducial frame registration, 3) target selection and motion planning, 4) moving to the target and performing the intervention (e.g. taking a biopsy sample) under live imaging, and 5) visualization and verification. Phantom experiments of prostate biopsy and brachytherapy were executed under MRI guidance to evaluate the feasibility of the workflow. The robot successfully performed fully actuated biopsy sampling and delivery of simulated brachytherapy seeds under live MR imaging, as well as precise delivery of a prostate brachytherapy seed distribution with an RMS accuracy of 0.98 mm.
NASA Technical Reports Server (NTRS)
Grantham, William D.; Person, Lee H., Jr.; Brown, Philip W.; Becker, Lawrence E.; Hunt, George E.; Rising, J. J.; Davis, W. J.; Willey, C. S.; Weaver, W. A.; Cokeley, R.
1985-01-01
Piloted simulation studies have been conducted to evaluate the effectiveness of two pitch active control systems (PACS) on the flying qualities of a wide-body transport airplane when operating at negative static margins. These two pitch active control systems consisted of a simple 'near-term' PACS and a more complex 'advanced' PACS. Eight different flight conditions, representing the entire flight envelope, were evaluated with emphasis on the cruise flight conditions. These studies were made utilizing the Langley Visual/Motion Simulator (VMS), which has six degrees of freedom. The simulation tests indicated that (1) the flying qualities of the baseline aircraft (PACS off) for the cruise and other high-speed flight conditions were unacceptable at center-of-gravity positions aft of the neutral static stability point; (2) within the linear static stability flight envelope, the near-term PACS provided acceptable flying qualities for static stability margins to -3 percent; and (3) with the advanced PACS operative, the flying qualities were demonstrated to be good (satisfactory to very acceptable) for static stability margins to -20 percent.
Borst, Gregoire; Niven, Elaine; Logie, Robert H
2012-04-01
Visual mental imagery and working memory are often assumed to play similar roles in high-order functions, but little is known of their functional relationship. In this study, we investigated whether similar cognitive processes are involved in the generation of visual mental images, in short-term retention of those mental images, and in short-term retention of visual information. Participants encoded and recalled visually or aurally presented sequences of letters under two interference conditions: spatial tapping or irrelevant visual input (IVI). In Experiment 1, spatial tapping selectively interfered with the retention of sequences of letters when participants generated visual mental images from aural presentation of the letter names and when the letters were presented visually. In Experiment 2, encoding of the sequences was disrupted by both interference tasks. However, in Experiment 3, IVI interfered with the generation of the mental images, but not with their retention, whereas spatial tapping was more disruptive during retention than during encoding. Results suggest that the temporary retention of visual mental images and of visual information may be supported by the same visual short-term memory store but that this store is not involved in image generation.
Ribeiro, Ana Paula; Trombini-Souza, Francis; Tessutti, Vitor; Lima, Fernanda Rodrigues; de Camargo Neves Sacco, Isabel; João, Sílvia Maria Amado
2011-01-01
OBJECTIVE: To evaluate and compare rearfoot alignment and the medial longitudinal arch index during static postures in runners with and without symptoms and histories of plantar fasciitis (PF). INTRODUCTION: PF is the third most common injury in runners, but its etiology remains unclear. In the literature, rearfoot misalignment and conformations of the longitudinal plantar arch have been described as risk factors for the development of PF. However, in most of the investigated literature the results are still controversial, mainly regarding athletic individuals and the effects of pain associated with these injuries. METHODS: Forty-five runners with plantar fasciitis (30 symptomatic and 15 with previous histories of injuries) and 60 controls were evaluated. Pain was assessed by a visual analogue scale. The assessment of rearfoot alignment and the calculation of the arch index were performed on digital photographic images. RESULTS: Similarities were observed among the three groups regarding rearfoot valgus misalignment. The medial longitudinal arch was more elevated in the groups with symptoms and histories of PF than in the control runners. CONCLUSIONS: Runners with symptoms or histories of PF did not differ in rearfoot valgus misalignment, but showed increases in the longitudinal plantar arch during bipedal static stance, regardless of the presence of pain symptoms. PMID:21808870
Pastor, Géraldine; Jiménez-González, María; Plaza-García, Sandra; Beraza, Marta; Padro, Daniel; Ramos-Cabrer, Pedro; Reese, Torsten
2017-09-01
Differences in the cerebral vasculature among strains, as well as among individual animals, might explain variability in animal models; thus, a non-invasive method tailored to image cerebral vessels of interest with a high signal-to-noise ratio is required. Experimentally, we describe a new general protocol of three-dimensional time-of-flight magnetic resonance angiography to visualize non-invasively the cerebral vasculature in 6 different rat strains. Flow-compensated angiograms of Sprague Dawley, Wistar Kyoto, Lister Hooded, Long Evans, Fisher 344 and Spontaneously Hypertensive Rat strains were obtained without the use of contrast agents. At 11.7 T, using a repetition time of 60 ms, an isotropic resolution of up to 62 μm was achieved; total imaging time was 98 min for a 3D data set. The visualization of the cerebral arteries was improved by removing extra-cranial vessels prior to the calculation of the maximum intensity projection used to obtain the angiograms. Ultimately, we demonstrate that the newly implemented method is also suitable for obtaining angiograms following middle cerebral artery occlusion, despite the presence of intense vasogenic edema 24 h after reperfusion. The careful selection of the excitation profile and repetition time at a higher static magnetic field allowed an increase in spatial resolution to reliably detect the hypothalamic artery, the anterior choroidal artery, and arterial branches of the peri-amygdaloid complex and the optic nerve in six different rat strains. MR angiography without contrast agent can be utilized to study cerebro-vascular abnormalities in various animal models. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
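The maximum intensity projection step used to render the angiograms is, at its core, a max-reduction along the projection axis; a generic sketch (not the authors' processing pipeline, and with a toy volume):

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Maximum intensity projection (MIP) of a 3D image volume.

    In TOF angiography, flow-enhanced (bright) voxels dominate along the
    projection axis, so taking the per-ray maximum renders the vessels.
    Masking out extra-cranial voxels beforehand, as the study does,
    keeps bright scalp vessels from obscuring the cerebral arteries.
    """
    return np.asarray(volume).max(axis=axis)

vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 7.0  # a single bright 'vessel' voxel
mip = maximum_intensity_projection(vol)  # 3x3 image, 7.0 at (1, 1)
```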
New insight in spiral drawing analysis methods - Application to action tremor quantification.
Legrand, André Pierre; Rivals, Isabelle; Richard, Aliénor; Apartis, Emmanuelle; Roze, Emmanuel; Vidailhet, Marie; Meunier, Sabine; Hainque, Elodie
2017-10-01
Spiral drawing is one of the standard tests used to assess tremor severity in the clinical evaluation of medical treatments. Tremor severity is estimated through visual rating of the drawings by movement disorders experts. Different approaches based on mathematical signal analysis of the recorded spiral drawings have been proposed to replace this rater-dependent estimate. The objective of the present study is to propose new numerical methods and to evaluate them in terms of agreement with visual rating and reproducibility. Series of spiral drawings of patients with essential tremor were visually rated by a board of experts. In addition to the usual velocity analysis, three new numerical methods were tested and compared, namely static unraveling, dynamic unraveling, and empirical mode decomposition. The reproducibility of both visual and numerical ratings was estimated, and their agreement was evaluated. The statistical analysis demonstrated excellent agreement between visual and numerical ratings, and more reproducible results with numerical methods than with visual ratings. The velocity method and the new numerical methods are in good agreement; among the latter, static and dynamic unraveling both display a smaller dispersion and are easier to automate. The reliable scores obtained through the proposed numerical methods suggest that their implementation on a digitized tablet, whether connected to a computer or standalone, provides an efficient automatic tool for tremor severity assessment. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
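The "unraveling" idea can be illustrated by converting pen samples to an unwrapped polar representation, in which an ideal Archimedean spiral becomes a straight line of radius against cumulative angle, so tremor shows up as residual oscillation. This is an illustrative reconstruction under that assumption, not the authors' exact algorithm:

```python
import math

def unravel(points, cx=0.0, cy=0.0):
    """Unravel a drawn spiral into (theta, r) lists about center (cx, cy).

    theta is the cumulative (unwrapped) angle and r the radius.  For an
    ideal Archimedean spiral, r grows linearly with theta, so the
    residual of a linear fit of r on theta is a simple tremor index.
    """
    thetas, radii = [], []
    prev, offset = None, 0.0
    for x, y in points:
        a = math.atan2(y - cy, x - cx)  # wrapped angle in (-pi, pi]
        if prev is not None:
            # unwrap: keep the angle monotonically increasing
            while a + offset < prev - math.pi:
                offset += 2 * math.pi
        prev = a + offset
        thetas.append(prev)
        radii.append(math.hypot(x - cx, y - cy))
    return thetas, radii

# ideal Archimedean spiral r = 0.5 * theta, sampled every 0.1 rad
pts = [(0.5 * t * math.cos(t), 0.5 * t * math.sin(t))
       for t in [i * 0.1 for i in range(1, 200)]]
thetas, radii = unravel(pts)
```

For a tremulous drawing, the residuals of `radii` about the fitted line would carry the oscillation whose amplitude the numerical rating quantifies.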
Transient cardio-respiratory responses to visually induced tilt illusions
NASA Technical Reports Server (NTRS)
Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.
2000-01-01
Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.
Daniel, Lorias Espinoza; Tapia, Fernando Montes; Arturo, Minor Martínez; Ricardo, Ordorica Flores
2014-12-01
The ability to handle and adapt to the visual perspectives generated by angled laparoscopes is crucial for skilled laparoscopic surgery. However, control of the visual work space depends on the ability of the camera operator, who is often not the most experienced member of the surgical team. Here, we present a simple, low-cost option for surgical training that challenges the learner with static and dynamic visual perspectives at 30 degrees using a system that emulates the angled laparoscope. The system was developed using a low-cost camera and readily available materials. Nine participants undertook 3 tasks to test spatial adaptation to the static and dynamic visual perspectives at 30 degrees; completing each task to a predefined satisfactory level ensured precision of execution. Associated metrics (time and error rate) were recorded, and the performance of participants was determined. A total of 450 repetitions were performed by the 9 residents, who were at various stages of training, all with a visual perspective of 30 degrees using the system. Junior residents were more proficient than senior residents. This system is a viable, low-cost alternative for developing in junior residents the basic psychomotor skills necessary for handling and adapting to visual perspectives of 30 degrees, without depending on a laparoscopic tower. More advanced skills may then be acquired by other means, such as in the operating theater or through clinical experience.
NASA Astrophysics Data System (ADS)
Hu, Ran; Wan, Jiamin; Kim, Yongman; Tokunaga, Tetsu K.
2017-08-01
How the wettability of pore surfaces affects supercritical (sc) CO2 capillary trapping in geologic carbon sequestration (GCS) is not well understood, and available evidence appears inconsistent. Using a high-pressure micromodel-microscopy system with image analysis, we studied the impact of wettability on scCO2 capillary trapping during short-term brine flooding (80 s, 8-667 pore volumes). Experiments on brine displacing scCO2 were conducted at 8.5 MPa and 45°C in water-wet (static contact angle θ = 20° ± 8°) and intermediate-wet (θ = 94° ± 13°) homogeneous micromodels under four different flow rates (capillary number Ca ranging from 9 × 10^-6 to 8 × 10^-4) with a total of eight conditions (four replicates for each). Brine invasion processes were recorded and statistical analysis was performed for over 2000 images of scCO2 saturations and scCO2 cluster characteristics. The trapped scCO2 saturation under intermediate-wet conditions is 15% higher than under water-wet conditions at the slowest flow rate (Ca ≈ 9 × 10^-6). Based on the visualization and scCO2 cluster analysis, we show that the scCO2 trapping process in our micromodels is governed by bypass trapping that is enhanced by the larger contact angle. Smaller contact angles enhance cooperative pore filling and widen brine fingers (or channels), leading to smaller volumes of scCO2 being bypassed. Increased flow rates suppress this wettability effect.
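For reference, the capillary number used above is the standard ratio Ca = μv/σ of viscous to capillary forces. A minimal sketch, with assumed illustrative fluid properties rather than values reported in the study:

```python
def capillary_number(viscosity_pa_s, velocity_m_s, ift_n_m):
    """Capillary number Ca = mu * v / sigma (dimensionless).

    mu: displacing-fluid dynamic viscosity [Pa.s]
    v:  characteristic Darcy velocity [m/s]
    sigma: interfacial tension between the fluids [N/m]
    """
    return viscosity_pa_s * velocity_m_s / ift_n_m

# assumed order-of-magnitude inputs: brine viscosity ~5e-4 Pa.s,
# velocity ~5e-4 m/s, scCO2-brine IFT ~0.03 N/m
ca = capillary_number(5e-4, 5e-4, 0.03)  # ~8.3e-6, near the study's slowest rate
```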
NASA Astrophysics Data System (ADS)
Kokorian, Jaap; Merlijn van Spengen, W.
2017-11-01
In this paper we demonstrate a new method for analyzing and visualizing friction force measurements of meso-scale stick-slip motion, and introduce a method for extracting two separate dissipative energy components. Using a microelectromechanical system tribometer, we execute 2 million reciprocating sliding cycles, during which we measure the static friction force.
Glaucoma management in patients with osteo-odonto-keratoprosthesis (OOKP): the Singapore OOKP Study.
Kumar, Rajesh S; Tan, Donald T H; Por, Yong-Ming; Oen, Francis T; Hoh, Sek-Tien; Parthasarathy, Anand; Aung, Tin
2009-01-01
To report diagnostic modalities and treatment options for glaucoma in eyes with osteo-odonto keratoprosthesis (OOKP). Eyes that underwent OOKP were evaluated for glaucoma at the time of the first postoperative visit, then at 1 and 3 months after the procedure, and thereafter every 6 months. All eyes underwent stereo-biomicroscopic optic nerve head (ONH) assessment, kinetic (Goldmann perimetry) and automated static visual field testing, ONH photography, Heidelberg retina tomograph, scanning laser polarimetry (GDx), and optical coherence tomography. Treatment of glaucoma was also reviewed. Average follow-up period was 19.1 (range: 5 to 31) months. Of the 15 eyes that underwent OOKP, 5 eyes had preexisting glaucoma. None of the other 10 eyes developed glaucoma after OOKP. ONH photography and visual field testing were the most reliable methods to assess status of the disease, whereas Heidelberg retina tomograph and optical coherence tomography could be performed with reasonable reproducibility and quality; GDx imaging was poor. All patients with glaucoma were treated with oral acetazolamide 500 mg twice a day. Transscleral cyclophotocoagulation was performed in 3 eyes at stage 2 of OOKP surgery. Progression of glaucoma was noted in 2 eyes on the basis of optic disc photographs and automated perimetry. Visual field testing and optic disc assessment with optic disc photographs seem to be effective methods to monitor eyes with OOKP for glaucoma. Treatment strategies include oral medications to lower intraocular pressure and cyclophotocoagulation.
MPEG-4 AVC saliency map computation
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.
2014-02-01
A saliency map provides information about the regions inside some visual content (image, video, ...) at which a human observer will spontaneously look. For saliency map computation, current research studies consider the uncompressed (pixel) representation of the visual content and extract various types of information (intensity, color, orientation, motion energy) which are then fused. This paper goes one step further and computes the saliency map directly from the MPEG-4 AVC stream syntax elements, with minimal decoding operations. In this respect, an a priori in-depth study of the MPEG-4 AVC syntax elements is first carried out so as to identify the entities appealing to visual attention. Secondly, the MPEG-4 AVC reference software is extended with software tools allowing the parsing of these elements and their subsequent usage in objective benchmarking experiments. This way, it is demonstrated that an MPEG-4 saliency map can be given by a combination of static saliency and motion maps. This saliency map is experimentally validated under a robust watermarking framework. When included in an m-QIM (multiple-symbol Quantization Index Modulation) insertion method, PSNR average gains of 2.43 dB, 2.15 dB, and 2.37 dB are obtained for data payloads of 10, 20 and 30 watermarked blocks per I frame, i.e. about 30, 60, and 90 bits/second, respectively. These quantitative results are obtained from processing 2 hours of heterogeneous video content.
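The combination of static saliency and motion maps can be sketched as a convex combination after per-map normalisation; the weighting and min-max normalisation here are illustrative assumptions, not the paper's fusion rule derived from AVC syntax elements:

```python
import numpy as np

def fuse_saliency(static_map, motion_map, w_static=0.5):
    """Fuse a static saliency map and a motion map into one map.

    Both inputs are 2D arrays of arbitrary scale; each is min-max
    normalised to [0, 1] before a convex combination, so neither
    modality dominates purely by dynamic range.
    """
    def norm(m):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w_static * norm(static_map) + (1.0 - w_static) * norm(motion_map)

s = np.array([[0, 2], [4, 8]])   # toy static saliency (e.g. texture cues)
m = np.array([[8, 0], [0, 0]])   # toy motion energy
f = fuse_saliency(s, m)
```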
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
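The LS-SVM formulation this framework builds on reduces training to solving a single linear system. A minimal, non-incremental binary sketch with an RBF kernel and toy data follows (the paper's incremental, multi-class, networked extension is not shown, and all parameter values are illustrative):

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Train a binary LS-SVM (labels +-1) by solving its KKT system:

        [ 0   1^T         ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma ] [ alpha ] = [ y ]

    where K is the RBF kernel matrix.  Returns a predict function.
    """
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.asarray(y, dtype=float)))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        dq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.sign(np.exp(-dq / (2.0 * sigma ** 2)) @ alpha + b)

    return predict

# two well-separated toy clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
predict = lssvm_fit(X, y)
```

Because training is one linear solve, rank-one updates of this system are what make incremental variants (adding new sensor observations without retraining from scratch) attractive for the networked setting the paper describes.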
NASA Technical Reports Server (NTRS)
Whiffen, Gregory J.
2006-01-01
Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories, and Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously; it is a general nonlinear optimal control algorithm based on Bellman's principle.
Monte-Carlo Geant4 numerical simulation of experiments at 247-MeV proton microscope
NASA Astrophysics Data System (ADS)
Kantsyrev, A. V.; Skoblyakov, A. V.; Bogdanov, A. V.; Golubev, A. A.; Shilkin, N. S.; Yuriev, D. S.; Mintsev, V. B.
2018-01-01
A radiographic facility for the investigation of fast dynamic processes with target areal densities up to 5 g/cm2 is under development on the basis of the high-current proton linear accelerator at the Institute for Nuclear Research (Troitsk, Russia). A virtual model of the proton microscope, developed in the Geant4 software toolkit, is presented in the article. Full-scale Monte-Carlo numerical simulation of static radiographic experiments at a proton beam energy of 247 MeV was performed. The results of simulating proton radiography experiments with a static model of shock-compressed xenon are presented. Results of the visualization of static copper and polymethyl methacrylate step-wedge targets are also described.
Experimental data of the static behavior of reinforced concrete beams at room and low temperature
Mirzazadeh, M. Mehdi; Noël, Martin; Green, Mark F.
2016-01-01
This article provides data on the static behavior of reinforced concrete beams at room and low temperature, including strength, ductility, and crack widths. Experimental data on the application of digital image correlation (DIC), or particle image velocimetry (PIV), for measuring crack widths, and on the accuracy and precision of the DIC/PIV method with temperature variations when used for measuring strains, are provided as well. PMID:27158650
Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H
2015-06-01
To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical multi-detector CT using nominal protocols (80 kVp/108 mAs for CBCT; 120 kVp/300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the kappa statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured the MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.
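The observer-agreement analysis above relies on the kappa statistic. A minimal, self-contained sketch of Cohen's kappa for two raters scoring on an ordinal scale follows; the function name and toy inputs are illustrative assumptions, not the study's data or code:

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories=5):
    """Cohen's kappa for two raters' scores on a 1..n_categories scale:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is agreement expected by chance from each rater's marginals."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Observed agreement: fraction of cases where the raters match
    po = np.mean(r1 == r2)
    # Chance agreement: product of per-category marginal frequencies
    pe = sum(np.mean(r1 == k) * np.mean(r2 == k)
             for k in range(1, n_categories + 1))
    return (po - pe) / (1 - pe)
```

On this definition, kappa is 1 for perfect agreement and 0 when agreement is no better than chance, matching the fair-to-almost-perfect ranges (κ ~ 0.26-0.92) reported above.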
Architectural Visualization of C/C++ Source Code for Program Comprehension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panas, T; Epperly, T W; Quinlan, D
2006-09-01
Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.
Vessel segmentation in 4D arterial spin labeling magnetic resonance angiography images of the brain
NASA Astrophysics Data System (ADS)
Phellan, Renzo; Lindner, Thomas; Falcão, Alexandre X.; Forkert, Nils D.
2017-03-01
4D arterial spin labeling magnetic resonance angiography (4D ASL MRA) is a non-invasive and safe modality for cerebrovascular imaging procedures. It uses the patient's magnetically labeled blood as intrinsic contrast agent, so that no external contrast media is required. It provides important 3D structure and blood flow information, but an accurate cerebrovascular segmentation is nonetheless important, since it can help clinicians analyze and diagnose vascular diseases faster, and with higher confidence, than simple visual rating of raw ASL MRA images. This work presents a new method for automatic cerebrovascular segmentation in 4D ASL MRA images of the brain. In this process, images are denoised, corresponding label/control image pairs of the 4D ASL MRA sequences are subtracted, and temporal intensity averaging is used to generate a static representation of the vascular system. After that, sets of vessel and background seeds are extracted and provided as input for the image foresting transform algorithm to segment the vascular system. Four 4D ASL MRA datasets of the brain arteries of healthy subjects and corresponding time-of-flight (TOF) MRA images were available for this preliminary study. For evaluation of the segmentation results of the proposed method, the cerebrovascular system was automatically segmented in the high-resolution TOF MRA images using a validated algorithm and the segmentation results were registered to the 4D ASL datasets. Corresponding segmentation pairs were compared using the Dice similarity coefficient (DSC). On average, a DSC of 0.9025 was achieved, indicating that vessels can be extracted successfully from 4D ASL MRA datasets by the proposed segmentation method.
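The label/control subtraction with temporal averaging, and the Dice similarity coefficient used for evaluation, can be sketched as follows. This is a simplified illustration; the function names and array layouts are assumptions, not the authors' code:

```python
import numpy as np

def static_angiogram(label_frames, control_frames):
    """Subtract each label/control image pair, then average over time to
    obtain a static representation of the vascular signal."""
    diffs = np.asarray(control_frames, float) - np.asarray(label_frames, float)
    return diffs.mean(axis=0)

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A DSC of 1.0 means identical masks; the reported average of 0.9025 against registered TOF segmentations indicates strong overlap.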
NASA Technical Reports Server (NTRS)
Arena, A. S., Jr.; Nelson, R. C.
1989-01-01
An experimental investigation into the fluid mechanisms responsible for wing rock on a slender delta wing with 80 deg leading edge sweep has been conducted. Time history and flow visualization data are presented for a wide angle-of-attack range. The use of an air bearing spindle has allowed the motion of the wing to be free from bearing friction or mechanical hysteresis. A bistable static condition has been found in vortex breakdown at an angle of attack of 40 deg which causes an overshoot of the steady state rocking amplitude. Flow visualization experiments also reveal a difference in static and dynamic breakdown locations on the wing. A hysteresis loop in dynamic breakdown location similar to that seen on pitching delta wings was observed as the wing was undergoing the limit cycle oscillation.
Rowe, Fiona J.; Noonan, Carmel; Manuel, Melanie
2013-01-01
Aim. To compare semikinetic perimetry (SKP) on Octopus 900 perimetry to a peripheral static programme with Humphrey automated perimetry. Methods. Prospective cross-section study comparing the Humphrey full field (FF) 120 two zone programme to a screening protocol for SKP on Octopus perimetry. Results were independently graded for presence/absence of field defect plus type and location of defect. Results. 64 patients (113 eyes) underwent dual perimetry assessment. Mean duration of assessment was 4.54 ± 0.18 minutes for SKP and 6.17 ± 0.12 minutes for FF120 (P = 0.0001). 80% of results were correctly matched for normal or abnormal visual fields using the I4e target versus FF120, and 73.5% were correctly matched using the I2e target versus FF120. When comparing Octopus results with combined I4e and I2e isopters to the FF120 result, a match for normal or abnormal fields was recorded in 87%. Conclusions. Humphrey perimetry test duration was generally longer than Octopus SKP. In the absence of kinetic perimetry, peripheral static suprathreshold programme options such as FF120 may be useful for detection of visual field defects. However, statokinetic dissociation may occur. Octopus SKP utilising both I4e and I2e targets provides detailed information on both the defect depth and size and may provide a more representative view of the actual visual field defect. PMID:24558605
NASA Technical Reports Server (NTRS)
Kohlman, Lee W.; Ruggeri, Charles R.; Roberts, Gary D.; Handschuh, Robert Frederick
2013-01-01
Composite materials have the potential to reduce the weight of rotating drive system components. However, these components are more complex to design and evaluate than static structural components in part because of limited ability to acquire deformation and failure initiation data during dynamic tests. Digital image correlation (DIC) methods have been developed to provide precise measurements of deformation and failure initiation for material test coupons and for structures under quasi-static loading. Attempts to use the same methods for rotating components (presented at the AHS International 68th Annual Forum in 2012) are limited by high speed camera resolution, image blur, and heating of the structure by high intensity lighting. Several improvements have been made to the system resulting in higher spatial resolution, decreased image noise, and elimination of heating effects. These improvements include the use of a high intensity synchronous microsecond pulsed LED lighting system, different lenses, and changes in camera configuration. With these improvements, deformation measurements can be made during rotating component tests with resolution comparable to that which can be achieved in static tests.
Effect of image resolution manipulation in rearfoot angle measurements obtained with photogrammetry
Sacco, I.C.N.; Picon, A.P.; Ribeiro, A.P.; Sartor, C.D.; Camargo-Junior, F.; Macedo, D.O.; Mori, E.T.T.; Monte, F.; Yamate, G.Y.; Neves, J.G.; Kondo, V.E.; Aliberti, S.
2012-01-01
The aim of this study was to investigate the influence of image resolution manipulation on the photogrammetric measurement of the rearfoot static angle. The study design was that of a reliability study. We evaluated 19 healthy young adults (11 females and 8 males). The photographs were taken at 1536 pixels in the greatest dimension, resized into four different resolutions (1200, 768, 600, 384 pixels) and analyzed by three equally trained examiners on a 96-pixels per inch (ppi) screen. An experienced physiotherapist marked the anatomic landmarks of rearfoot static angles on two occasions within a 1-week interval. Three different examiners had marked angles on digital pictures. The systematic error and the smallest detectable difference were calculated from the angle values between the image resolutions and times of evaluation. Different resolutions were compared by analysis of variance. Inter- and intra-examiner reliability was calculated by intra-class correlation coefficients (ICC). The rearfoot static angles obtained by the examiners in each resolution were not different (P > 0.05); however, the higher the image resolution the better the inter-examiner reliability. The intra-examiner reliability (within a 1-week interval) was considered to be unacceptable for all image resolutions (ICC range: 0.08-0.52). The whole body image of an adult with a minimum size of 768 pixels analyzed on a 96-ppi screen can provide very good inter-examiner reliability for photogrammetric measurements of rearfoot static angles (ICC range: 0.85-0.92), although the intra-examiner reliability within each resolution was not acceptable. Therefore, this method is not a proper tool for follow-up evaluations of patients within a therapeutic protocol. PMID:22911379
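The rearfoot static angle is measured between lines defined by digitized anatomical landmarks. A minimal sketch of such an angle computation follows (the landmark layout and function name are hypothetical, not the study's protocol). Note that an angle is invariant to uniform rescaling of coordinates, so in principle image resolution affects reliability only through the precision of landmark placement:

```python
import numpy as np

def angle_between(p1, p2, p3, p4):
    """Angle in degrees between the line p1->p2 and the line p3->p4,
    as when measuring a rearfoot angle from digitized landmarks."""
    v1 = np.asarray(p2, float) - np.asarray(p1, float)
    v2 = np.asarray(p4, float) - np.asarray(p3, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point values slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because downsampling quantizes landmark coordinates to coarser pixel grids, lower resolutions can degrade inter-examiner agreement even though the angle itself is scale-invariant, consistent with the reliability pattern reported above.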
Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona
2016-01-01
Facial mimicry is the spontaneous response to others' facial expressions by mirroring or matching the interaction partner. Recent evidence suggested that mimicry may not be only an automatic reaction but could be dependent on many factors, including social context, type of task in which the participant is engaged, or stimulus properties (dynamic vs. static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyographic recordings were obtained from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing. PMID:27390867
Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni
2006-08-01
Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on the blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
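For intuition, the basic (2,2) visual cryptography scheme underlying the field — not the paper's halftone method — can be sketched as follows (0 = white, 1 = black; names and the 1x2 subpixel expansion are illustrative assumptions):

```python
import numpy as np

def make_shares(secret, rng=None):
    """Basic (2,2) visual cryptography: each secret pixel expands to a
    1x2 subpixel pair on each of two shares. Superimposing the shares
    (logical OR, like stacking transparencies) reveals the image."""
    rng = np.random.default_rng(rng)
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            pattern = rng.permutation([0, 1])       # random subpixel pair
            s1[i, 2*j:2*j+2] = pattern
            # white (0): identical pairs -> stack shows one dark subpixel
            # black (1): complementary pairs -> stack shows two dark subpixels
            s2[i, 2*j:2*j+2] = pattern if secret[i, j] == 0 else 1 - pattern
    return s1, s2

def stack(s1, s2):
    return np.logical_or(s1, s2).astype(np.uint8)
```

Each share alone is a uniformly random pattern (every subpixel pair holds exactly one dark dot), which is precisely the "no visual meaning" property that halftone visual cryptography sets out to improve on.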
Visual Communications and Image Processing
NASA Astrophysics Data System (ADS)
Hsing, T. Russell
1987-07-01
This special issue of Optical Engineering is concerned with visual communications and image processing. The increase in communication of visual information over the past several decades has resulted in many new image processing and visual communication systems being put into service. The growth of this field has been rapid in both commercial and military applications. The objective of this special issue is to bring together advances in visual communications and image processing with ideas generated from industry, universities, and users through both invited and contributed papers. The 15 papers of this issue are organized into four different categories: image compression and transmission, image enhancement, image analysis and pattern recognition, and image processing in medical applications.
NASA Astrophysics Data System (ADS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-05-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images and the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that, in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
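The Multiscale Retinex at the core of the VS process combines log-ratios of the image to Gaussian-blurred "surround" versions at several scales. A minimal sketch under that standard formulation follows; the VS pipeline and Visual Contrast Measure themselves are not specified in the abstract, and the scale choices and function names here are illustrative assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur built from two 1-D convolutions
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """Multiscale Retinex: average over scales of
    log(image) - log(Gaussian surround), compressing illumination
    variation (e.g. haze) while preserving reflectance detail."""
    img = np.asarray(img, float) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s) + 1e-6)
    return out / len(sigmas)
```

Small sigmas emphasize local contrast and large sigmas global tone; averaging the scales is what distinguishes the multiscale variant from single-scale retinex.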
McElrone, Andrew J; Choat, Brendan; Parkinson, Dilworth Y; MacDowell, Alastair A; Brodersen, Craig R
2013-04-05
High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of the plant xylem network in three dimensions (3D) (e.g. Brodersen et al. 2010; 2011; 2012a,b). HRCT imaging is based on the same principles as medical CT systems, but a high intensity synchrotron x-ray source results in higher spatial resolution and decreased image acquisition time. Here, we demonstrate in detail how synchrotron-based HRCT (performed at the Advanced Light Source-LBNL Berkeley, CA, USA) in combination with Avizo software (VSG Inc., Burlington, MA, USA) is being used to explore plant xylem in excised tissue and living plants. This new imaging tool allows users to move beyond traditional static, 2D light or electron micrographs and study samples using virtual serial sections in any plane. An infinite number of slices in any orientation can be made on the same sample, a feature that is physically impossible using traditional microscopy methods. Results demonstrate that HRCT can be applied to both herbaceous and woody plant species, and a range of plant organs (i.e. leaves, petioles, stems, trunks, roots). Figures presented here help demonstrate both a range of representative plant vascular anatomy and the type of detail extracted from HRCT datasets, including scans ranging from coast redwood (Sequoia sempervirens), walnut (Juglans spp.), oak (Quercus spp.), and maple (Acer spp.) tree saplings to sunflowers (Helianthus annuus), grapevines (Vitis spp.), and ferns (Pteridium aquilinum and Woodwardia fimbriata). Excised and dried samples from woody species are easiest to scan and typically yield the best images. However, recent improvements (i.e. more rapid scans and sample stabilization) have made it possible to use this visualization technique on green tissues (e.g. petioles) and in living plants.
On occasion, shrinkage of hydrated green plant tissues will cause images to blur; methods to avoid these issues are described. These recent advances with HRCT provide promising new insights into plant vascular function.
PMID:23609036
A moving observer in a three-dimensional world
2016-01-01
For many tasks such as retrieving a previously viewed object, an observer must form a representation of the world at one location and use it at another. A world-based three-dimensional reconstruction of the scene built up from visual information would fulfil this requirement, something computer vision now achieves with great speed and accuracy. However, I argue that it is neither easy nor necessary for the brain to do this. I discuss biologically plausible alternatives, including the possibility of avoiding three-dimensional coordinate frames such as ego-centric and world-based representations. For example, the distance, slant and local shape of surfaces dictate the propensity of visual features to move in the image with respect to one another as the observer's perspective changes (through movement or binocular viewing). Such propensities can be stored without the need for three-dimensional reference frames. The problem of representing a stable scene in the face of continual head and eye movements is an appropriate starting place for understanding the goal of three-dimensional vision, more so, I argue, than the case of a static binocular observer. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269608
Innovative application of virtual display technique in virtual museum
NASA Astrophysics Data System (ADS)
Zhang, Jiankang
2017-09-01
A virtual museum displays and simulates the functions of a real museum on the Internet in the form of 3D virtual reality through interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum and achieving effective interaction with the offline museum lie in making full use of 3D panorama, virtual reality, and augmented reality techniques, and in innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration, and other key virtual reality techniques in the overall design of the virtual museum. 3D panorama, also known as panoramic photography or virtual reality, is a technique based on static images of reality. Virtual reality is a computer simulation system that can create, and let users experience, an interactive 3D dynamic visual world. Augmented reality, also known as mixed reality, is a technique which simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience in reality. These technologies make the virtual museum possible. It will not only bring better experience and convenience to the public, but also help improve the influence and cultural functions of the real museum.
Splitting a colon geometry with multiplanar clipping
NASA Astrophysics Data System (ADS)
Ahn, David K.; Vining, David J.; Ge, Yaorong; Stelts, David R.
1998-06-01
Virtual colonoscopy, a recent three-dimensional (3D) visualization technique, has provided radiologists with a unique diagnostic tool. Using this technique, a radiologist can examine the internal morphology of a patient's colon by navigating through a surface-rendered model that is constructed from helical computed tomography image data. Virtual colonoscopy can be used to detect early forms of colon cancer in a way that is less invasive and expensive compared to conventional endoscopy. However, the common approach of 'flying' through the colon lumen to visually search for polyps is tedious and time-consuming, especially when a radiologist loses his or her orientation within the colon. Furthermore, a radiologist's field of view is often limited by the 3D camera position located inside the colon lumen. We have developed a new technique, called multi-planar geometry clipping, that addresses these problems. Our algorithm divides a complex colon anatomy into several smaller segments, and then splits each of these segments in half for display on a static medium. Multi-planar geometry clipping eliminates virtual colonoscopy's dependence upon expensive, real-time graphics workstations by enabling radiologists to globally inspect the entire internal surface of the colon from a single viewpoint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Amy, E-mail: aw554@uowmail.edu.au; Metcalfe, Peter; Liney, Gary
2015-04-15
Purpose: Accurate geometry is required for radiotherapy treatment planning (RTP). When considering the use of magnetic resonance imaging (MRI) for RTP, geometric distortions observed in the acquired images should be considered. While scanner technology and vendor supplied correction algorithms provide some correction, large distortions are still present in images, even when considering considerably smaller scan lengths than those typically acquired with CT in conventional RTP. This study investigates MRI acquisition with a moving table compared with static scans for potential geometric benefits for RTP. Methods: A full field of view (FOV) phantom (diameter 500 mm; length 513 mm) was developed for measuring geometric distortions in MR images over volumes pertinent to RTP. The phantom consisted of layers of refined plastic within which vitamin E capsules were inserted. The phantom was scanned on CT to provide the geometric gold standard and on MRI, with differences in capsule location determining the distortion. MRI images were acquired with two techniques. For the first method, standard static table acquisitions were considered. Both 2D and 3D acquisition techniques were investigated. With the second technique, images were acquired with a moving table. The same sequence was acquired with a static table and then with table speeds of 1.1 mm/s and 2 mm/s. All of the MR images acquired were registered to the CT dataset using a deformable B-spline registration with the resulting deformation fields providing the distortion information for each acquisition. Results: MR images acquired with the moving table enabled imaging of the whole phantom length while images acquired with a static table were only able to image 50%–70% of the phantom length of 513 mm. Maximum distortion values were reduced across a larger volume when imaging with a moving table.
Increased table speed resulted in a larger contribution of distortion from gradient nonlinearities in the through-plane direction and increased blurring of capsule images, resulting in an apparent capsule volume increase of up to 170% in extreme axial FOV regions. Blurring increased with table speed, and in the central regions of the phantom, geometric distortion was lower for static-table acquisitions than for a table speed of 2 mm/s over the same volume. Overall, the best geometric accuracy was achieved with a table speed of 1.1 mm/s. Conclusions: The designed phantom enables full-FOV imaging for distortion assessment for the purposes of RTP. MRI acquisition with a moving table extends the imaging volume in the z direction with reduced distortions, which could be particularly useful when considering MR-only planning. If MR images are used to provide additional soft-tissue information to the planning CT, standard acquisition sequences over a smaller volume would avoid introducing additional blurring or distortions from the through-plane table movement.
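The phantom-based distortion measurement above reduces, per capsule, to the displacement between matched CT and MR centroid positions. A minimal sketch of that computation (the capsule coordinates and function name are illustrative assumptions, not the study's data or code):

```python
import numpy as np

def distortion_magnitudes(ct_points, mr_points):
    """Euclidean displacement between matched capsule centroids (mm).

    ct_points, mr_points: (N, 3) arrays of capsule centroid coordinates,
    with CT taken as the geometric gold standard.
    """
    ct = np.asarray(ct_points, dtype=float)
    mr = np.asarray(mr_points, dtype=float)
    return np.linalg.norm(mr - ct, axis=1)

# Illustrative values only: three capsules with sub-millimetre agreement
# near isocentre and larger distortion towards the edge of the FOV.
ct = [[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [250.0, 0.0, 0.0]]
mr = [[0.1, 0.0, 0.0], [101.0, 0.5, 0.0], [253.0, 1.0, 2.0]]
d = distortion_magnitudes(ct, mr)
```

In the study itself the displacement field comes from a deformable B-spline registration rather than from explicit centroid matching; the sketch only illustrates the per-capsule distortion metric.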
Fan, Jianping; Gao, Yuli; Luo, Hangzai
2008-03-01
In this paper, we have developed a new scheme for achieving multilevel annotation of large-scale images automatically. To achieve a more complete representation of the various visual properties of the images, both global and local visual features are extracted for image content representation. To tackle the problem of large intra-concept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of large inter-concept visual similarity, a novel multitask learning algorithm is developed to learn correlated classifiers for sibling image concepts under the same parent concept and to enhance their discrimination and adaptation power significantly. To tackle intra-concept visual diversity for image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. To assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have obtained very positive results.
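The multiple-kernel idea above rests on the fact that a non-negative weighted sum of valid kernels is itself a valid kernel, so several visual-similarity kernels can be merged into one Gram matrix for SVM training. A minimal sketch under that assumption (weights, bandwidths, and data are illustrative; a real MKL algorithm would learn the weights rather than fix them):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Pairwise squared distances via the expansion ||x||^2 + ||y||^2 - 2 x.y
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def combined_kernel(X, Y, weights, gammas):
    """Convex combination of RBF kernels at several bandwidths.

    A non-negative weighted sum of positive-definite kernels is itself
    positive definite, so the result can be passed to any SVM solver
    that accepts a precomputed Gram matrix.
    """
    return sum(w * rbf_kernel(X, Y, g) for w, g in zip(weights, gammas))

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
K = combined_kernel(X, X, weights=[0.5, 0.5], gammas=[0.1, 1.0])
```

With weights summing to one, the combined Gram matrix stays symmetric with unit diagonal and non-negative eigenvalues, i.e. it remains a legal kernel.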
Adaptive optics for peripheral vision
NASA Astrophysics Data System (ADS)
Rosén, R.; Lundström, L.; Unsbo, P.
2012-07-01
Understanding peripheral optical errors and their impact on vision is important for various applications, e.g., research on myopia development and optical correction of patients with central visual field loss. In this study, we investigated whether correction of higher-order aberrations with adaptive optics (AO) improves resolution beyond what is achieved with the best peripheral refractive correction. A laboratory AO system was constructed for correcting peripheral aberrations. Peripheral low-contrast grating resolution acuity in the 20° nasal visual field of the right eye was evaluated for 12 subjects using three types of correction: refractive correction of sphere and cylinder, static closed-loop AO correction, and continuous closed-loop AO correction. Running AO in continuous closed loop improved acuity compared to refractive correction for most subjects (maximum benefit 0.15 logMAR). The visual improvement from aberration correction was highly correlated with the subject's initial amount of higher-order aberrations (p = 0.001, R² = 0.72). There was, however, no acuity improvement from static AO correction. In conclusion, correction of peripheral higher-order aberrations can improve low-contrast resolution, provided refractive errors are corrected and the system runs in continuous closed loop.
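The reported 0.15 logMAR benefit can be read as a multiplicative change in the minimum angle of resolution, since logMAR is the base-10 logarithm of the MAR in arcminutes. A small sketch of that conversion (the function name is our own):

```python
def logmar_to_mar(logmar):
    """Convert a logMAR value to the minimum angle of resolution (arcmin)."""
    return 10 ** logmar

# A 0.15 logMAR gain corresponds to resolving detail roughly 1.4x finer.
factor = logmar_to_mar(0.15)
```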
Dynamic Object Representations in Infants with and without Fragile X Syndrome
Farzin, Faraz; Rivera, Susan M.
2009-01-01
Our visual world is dynamic in nature. The ability to encode, mentally represent, and track an object's identity as it moves across time and space is critical for integrating and maintaining a complete and coherent view of the world. Here we investigated dynamic object processing in typically developing (TD) infants and infants with fragile X syndrome (FXS), a single-gene disorder associated with deficits in dorsal stream functioning. We used the violation of expectation method to assess infants’ visual response to expected versus unexpected outcomes following a brief dynamic (dorsal stream) or static (ventral stream) occlusion event. Consistent with previous reports of deficits in dorsal stream-mediated functioning in individuals with this disorder, these results reveal that, compared to mental age-matched TD infants, infants with FXS could maintain the identity of static, but not dynamic, object information during occlusion. These findings are the first to experimentally evaluate visual object processing skills in infants with FXS, and further support the hypothesis of dorsal stream difficulties in infants with this developmental disorder. PMID:20224809
Methods for multiple-telescope beam imaging and guiding in the near-infrared
NASA Astrophysics Data System (ADS)
Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.
2018-05-01
Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.
Goines, Jennifer B; Ishii, Lisa E; Dey, Jacob K; Phillis, Maria; Byrne, Patrick J; Boahene, Kofi D O; Ishii, Masaru
2016-09-01
The interaction between patient- and observer-perceived quality of life (QOL) and facial paralysis-related disability and the resulting effect of these interactions on social perception are incompletely understood. To measure the associations between observer-perceived disability and QOL and patient-perceived disability and QOL in patients with facial paralysis. This prospective study in an academic tertiary referral center included 84 naive observers who viewed static and dynamic images of faces with unilateral, House-Brackmann grades IV to VI facial paralysis (n = 16) and demographically matched images of nonparalyzed control individuals (n = 4). Data were collected from June 1 to August 1, 2014, and analyzed from August 2 to December 1, 2014. Observers rated the patient and control images in 6 clinically relevant domains. The patients self-reported their disability and QOL using validated tools, such as the Facial Clinimetric Evaluation Scale. Quality of life, severity of paralysis, and disability were measured on a 100-point visual analog scale. The 84 observers (59 women [70%] and 25 men [30%]) ranged in age from 20 to 68 years (mean [SD] age, 35.2 [11.9]). Structural equation modeling showed that for each 1-point decrease in a patient's Facial Clinimetric Evaluation Scale score, the patient's visual analog scale QOL improved by 0.36 (SE, 0.03; 95% CI, 0.31-0.42) points. Similarly, from an observer perspective, as the perceived disability (-0.29 [SE, 0.04; 95% CI, -0.36 to -0.22]) and severity (-0.21 [SE, 0.03; 95% CI, -0.28 to -0.14]) decreased, the perceived QOL improved. Furthermore, attractive faces were viewed as having better QOL (disability, severity, and attractiveness regression coefficients, -0.29 [SE, 0.04; 95% CI, -0.36 to -0.22], -0.21 [SE, 0.03; 95% CI, -0.28 to -0.14], and 0.32 [SE, 0.03; 95% CI, 0.26 to 0.39], respectively). An inverse association was found between a paralyzed patient's self-reported QOL rating and the observers' perceived QOL. 
This association was complex and was mediated through perceived severity and disability. Observers judged the severity of paralyzed faces to be 3.61 (SE, 1.80; 95% CI, 0.09-7.14) points more severe when viewing dynamic rather than static images. Observers were more likely to rate QOL lower owing to disability than were the patients with paralysis. This finding may be explained by previous literature reporting that disabled people adjust their values to accommodate their disability, thereby limiting the negative effect on their QOL. Given the importance of QOL on social interaction, the dissonance between observers and patients in this area has important implications for the socialization of patients with facial paralysis.
Visual information mining in remote sensing image archives
NASA Astrophysics Data System (ADS)
Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.
2002-01-01
The present article focuses on the development of interactive exploratory tools for visually mining the image content of large remote sensing archives. Two aspects are treated: iconic visualization of the global information in the archive and progressive visualization of image details. The proposed methods are integrated into the Image Information Mining (I2M) system. The images and image structures in the I2M system are indexed based on a probabilistic approach, and the resulting links are managed by a relational database. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the database. Thus, new tools have been designed to visualize, in iconic representation, the relationships created during a query or information-mining operation: visualization of the query results positioned on the geographical map, a quick-look gallery, visualization of a measure of the goodness of the query, and visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization to allow better access for operator inspection. I2M is a three-tier Java architecture optimized for the Internet.
Song, Kyeongtak; Rhodes, Evan; Wikstrom, Erik A
2018-04-01
Visual, vestibular, and somatosensory systems contribute to postural control. Chronic ankle instability (CAI) patients have been observed to have a reduced ability to dynamically shift their reliance among sources of sensory information and to rely more heavily on visual information during single-limb stance relative to uninjured controls. Balance training is proven to improve postural control, but there is a lack of evidence regarding the ability of balance training programs to alter reliance on visual information in CAI patients. Our objective was to determine whether balance training alters reliance on visual information during static stance in CAI patients. The PubMed, CINAHL, and SPORTDiscus databases were searched from their earliest available date to October 2017 using a combination of keywords. Study inclusion criteria consisted of (1) using participants with CAI; (2) use of a balance training intervention; and (3) calculation of an objective measure of static postural control during single-limb stance with eyes open and eyes closed. Sample sizes, means, and standard deviations of single-leg balance measures for eyes-open and eyes-closed testing conditions before and after balance training were extracted from the included studies. Eyes-open to eyes-closed effect sizes [Hedges' g and 95% confidence intervals (CI)] before and after balance training were calculated, and between-study variability for heterogeneity and potential risks of publication bias were examined. Six studies were identified. The overall eyes-open to eyes-closed effect size difference between pre- and post-intervention assessments was not significant (Hedges' g effect size = 0.151, 95% CI = -0.151 to 0.453, p = 0.26). This result indicates that the utilization of visual information in individuals with CAI during single-leg balance is not altered after balance training. Low heterogeneity (Q(5) = 2.96, p = 0.71, I² = 0%) of the included studies and no publication bias were found.
On the basis of our systematic review with meta-analysis, it appears that traditional balance training protocols do not alter the reliance on visual information used by CAI patients during a single-leg stance.
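The eyes-open to eyes-closed effect sizes pooled above follow the standard Hedges' g formula: Cohen's d computed from the pooled standard deviation, multiplied by the small-sample correction J = 1 - 3/(4·df - 1). A sketch of that computation (the sway values below are illustrative, not drawn from the reviewed studies):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: bias-corrected standardized mean difference.

    The pooled SD uses (n - 1) weights; J corrects the small-sample
    upward bias of Cohen's d.
    """
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Illustrative eyes-closed vs eyes-open sway values:
g = hedges_g(m1=2.0, sd1=0.5, n1=15, m2=1.8, sd2=0.5, n2=15)
```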
PedVizApi: a Java API for the interactive, visual analysis of extended pedigrees.
Fuchsberger, Christian; Falchi, Mario; Forer, Lukas; Pramstaller, Peter P
2008-01-15
PedVizApi is a Java API (application program interface) for the visual analysis of large and complex pedigrees. It provides all the necessary functionality for the interactive exploration of extended genealogies. While available packages are mostly focused on static representation or cannot be added to an existing application, PedVizApi is a highly flexible open-source library for the efficient construction of visual-based applications for the analysis of family data. An extensive demo application and an R interface are provided. http://www.pedvizapi.org
NASA Astrophysics Data System (ADS)
Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander
2017-05-01
The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.
Putting the slab back: First steps of creating a synthetic seismic section of subducted lithosphere
NASA Astrophysics Data System (ADS)
Zertani, S.; John, T.; Tilmann, F. J.; Leiss, B.; Labrousse, L.; Andersen, T. B.
2016-12-01
Imaging subducted lithosphere is a difficult task that is usually tackled with geophysical methods. To date, the most promising method is receiver function (RF) imaging, which concentrates on first-order conversions from P- to S-waves at boundaries (e.g., lithological and structural) with contrasting seismic velocities. The resolution is high for the upper parts of the subducting material. However, at greater depths (40-80 km) the visualization of the subducted slab becomes increasingly blurry, until the slab can no longer be distinguished from the Earth's mantle, rendering visualization impossible. This blurry zone is thought to arise from advancing eclogitization of the subducting slab. However, it is not well understood how micro- to macro-scale structures related to progressive eclogitization affect RF signals. The island of Holsnoy in the Bergen Arcs of western Norway represents a partially eclogitized, formerly subducted block of lower crust and serves as an analogue to the aforementioned blurry zone in RF images. This eclogitization can be observed in static, fluid-induced eclogitization patches or fingers, but is mainly present in localized shear zones of variable size (mm to 100s of meters). We mapped the area to gain a better understanding of the geometries of such shear zones, which could possibly function as seismic reflectors. Further, we calculated seismic velocities from thermodynamic modelling on the basis of XRF whole-rock analysis and compared these results to velocities calculated from a combination of thin-section information, EMPA, and physical mineral properties (Voigt-Reuss-Hill averaging). Both methods yield consistent results for P- and S-wave velocities of eclogites and granulites from Holsnoy. In combination with X-ray measurements identifying the microtextures of characteristic samples, to incorporate seismic anisotropy caused by, e.g., foliation or lineation, these seismic velocities are used as input for seismic models to reconstruct the progressive eclogitization of a subducting slab as seen in many RF images (i.e., the blurry zone).
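The Voigt-Reuss-Hill averaging mentioned above brackets the effective modulus of a mineral aggregate between a uniform-strain (Voigt) upper bound and a uniform-stress (Reuss) lower bound, and takes their mean. A minimal sketch (the two-phase fractions and moduli are placeholders, not the Holsnoy compositions):

```python
def voigt_reuss_hill(fractions, moduli):
    """Voigt-Reuss-Hill average of an elastic modulus for a mineral mix.

    fractions: volume fractions summing to 1; moduli: corresponding
    moduli (e.g. bulk modulus in GPa). The Voigt (arithmetic) average is
    an upper bound, the Reuss (harmonic) average a lower bound; VRH
    takes their mean.
    """
    voigt = sum(f * m for f, m in zip(fractions, moduli))
    reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))
    return 0.5 * (voigt + reuss)

# Illustrative two-phase mix: 60% of a 120 GPa phase, 40% of a 60 GPa phase.
k_vrh = voigt_reuss_hill([0.6, 0.4], [120.0, 60.0])
```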
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
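The final fusion step described above amounts to a pixel-wise weighted combination of the extracted multi-scale features. A minimal sketch in which a simple saliency-ratio weight stands in for the paper's fuzzy-measure-based weight (an assumption of this sketch, not the published algorithm):

```python
import numpy as np

def fuse(features_ir, features_vis, weight_ir):
    """Pixel-wise weighted combination of two extracted feature maps.

    weight_ir in [0, 1] plays the role of the fuzzy-measure-based weight;
    here it is a simple saliency ratio, which is our own stand-in.
    """
    return weight_ir * features_ir + (1 - weight_ir) * features_vis

ir = np.array([[4.0, 0.0], [2.0, 2.0]])
vis = np.array([[0.0, 4.0], [2.0, 2.0]])
# Saliency-ratio weight: favour whichever image has the stronger feature.
w = np.abs(ir) / (np.abs(ir) + np.abs(vis) + 1e-12)
fused = fuse(ir, vis, w)
```

Wherever one image carries the stronger feature response, the fused map follows that image; where responses are equal, the result is their average.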
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.
Bai, Xiangzhi
2015-07-15
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.
MUSIC-type imaging of a thin penetrable inclusion from its multi-static response matrix
NASA Astrophysics Data System (ADS)
Park, Won-Kwang; Lesselier, Dominique
2009-07-01
The imaging of a thin inclusion, with dielectric and/or magnetic contrast with respect to the embedding homogeneous medium, is investigated. A MUSIC-type algorithm operating at a single time-harmonic frequency is developed in order to map the inclusion (that is, to retrieve its supporting curve) from scattered-field data collected within the multi-static response matrix. Numerical experiments carried out for several types of inclusions (dielectric and/or magnetic, straight or curved), mostly single inclusions but also, as a straightforward extension, two inclusions close to one another, illustrate the pros and cons of the proposed imaging method.
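The MUSIC-type mapping described above can be sketched as follows: the multi-static response matrix is decomposed by SVD, the noise subspace is taken as the complement of the leading singular vectors, and the imaging functional peaks where a test (steering) vector is nearly orthogonal to that noise subspace. The steering vector below is a simplified phase model, not the paper's exact Green function, and the whole setup is synthetic:

```python
import numpy as np

def steering_vector(receivers, point, k):
    """Simplified unit-norm test vector: phase from receiver-point distance.

    This is an assumption of the sketch, standing in for the actual
    background Green function of the problem.
    """
    r = np.linalg.norm(receivers - point, axis=1)
    return np.exp(1j * k * r) / np.sqrt(len(receivers))

def music_image(K, receivers, grid, k, n_signal):
    """MUSIC pseudo-spectrum from a multi-static response matrix K."""
    U, s, _ = np.linalg.svd(K)
    Un = U[:, n_signal:]                      # noise subspace
    values = []
    for z in grid:
        g = steering_vector(receivers, z, k)
        values.append(1.0 / np.linalg.norm(Un.conj().T @ g))
    return np.array(values)

# Synthetic setup: 16 receivers on a circle, two point scatterers.
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
receivers = 10.0 * np.column_stack([np.cos(theta), np.sin(theta)])
scatterers = np.array([[1.0, 2.0], [-2.0, -1.0]])
k = 2.0
G = np.column_stack([steering_vector(receivers, z, k) for z in scatterers])
K = G @ G.T                                   # Born-type symmetric MSR matrix

grid = [np.array([x, y]) for x in np.linspace(-4, 4, 17)
                         for y in np.linspace(-4, 4, 17)]
img = music_image(K, receivers, grid, k, n_signal=2)
```

In this noiseless example the range of K coincides with the span of the scatterers' steering vectors, so the pseudo-spectrum blows up exactly at the scatterer locations on the grid.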