Current Status Of Ergonomic Standards
NASA Astrophysics Data System (ADS)
Lynch, Gene
1984-05-01
The last five years have seen the development and adoption of new kinds of standards for the display industry. This standardization activity deals with the complex human-computer interface, where the concerns involve health, safety, productivity, and operator well-being. The standards attempt to specify the "proper" use of visual display units, with a wide range of implications for the display industry - as manufacturers of displays, as employers, and as users of visual display units. In this paper we examine the development of these standards, their impact on the display industry, and implications for the future.
Mountain Plains Learning Experience Guide: Marketing. Course: Visual Merchandising.
ERIC Educational Resources Information Center
Preston, T.; Egan, B.
One of thirteen individualized courses included in a marketing curriculum, this course covers the steps to be followed in planning, constructing, and evaluating the effectiveness of merchandise displays. The course comprises one unit, General Merchandise Displays. The unit begins with a Unit Learning Experience Guide that gives directions…
Direction discriminating hearing aid system
NASA Technical Reports Server (NTRS)
Jhabvala, M.; Lin, H. C.; Ward, G.
1991-01-01
A visual display was developed for people with substantial hearing loss in either one or both ears. The system consists of three discrete units: an eyeglass assembly for the visual display of the origin or direction of sounds; a stationary general-purpose noise alarm; and a noise seeker wand.
Improving School Lighting for Video Display Units.
ERIC Educational Resources Information Center
Parker-Jenkins, Marie; Parker-Jenkins, William
1985-01-01
Provides information to identify and implement the key characteristics which contribute to an efficient and comfortable visual display unit (VDU) lighting installation. Areas addressed include VDU lighting requirements, glare, lighting controls, VDU environment, lighting retrofit, optical filters, and lighting recommendations. A checklist to…
Prevention: lessons from video display installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margach, C.B.
1983-04-01
Workers interacting with video display units for periods in excess of two hours per day report significantly increased visual discomfort, fatigue, and inefficiency, as compared with workers performing similar tasks without the video viewing component. Difficulties in focusing and the appearance of myopia are among the problems described. With a view to preventing or minimizing such problems, principles and procedures are presented providing for (a) modification of physical features of the video workstation and (b) improvement in the visual performance of the individual video unit operator.
Large Terrain Continuous Level of Detail 3D Visualization Tool
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan
2012-01-01
This software solves the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The Large Terrain Continuous Level of Detail 3D Visualization Tool can visualize terrain data sets composed of billions of vertices and can display them at greater than 30 frames per second. It uses a continuous level-of-detail technique called clipmapping, and it offloads much of the work involved in breaking the terrain into levels of detail onto the GPU (graphics processing unit) for faster processing.
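The clipmapping scheme the abstract names draws coarser terrain levels as the viewer moves away, with each level covering twice the extent of the one inside it. A minimal sketch of that level selection, assuming a simple distance threshold (function names and parameters are illustrative, not taken from the tool itself):

```python
import math

def clipmap_level(viewer_distance, finest_extent):
    """Pick a clipmap level of detail for a given viewer distance.

    Level 0 is the finest ring; each coarser level covers twice the
    extent of the level inside it, so the required level grows as the
    log2 of distance. A sketch of the idea, not the tool's actual code.
    """
    if viewer_distance <= finest_extent:
        return 0
    return math.ceil(math.log2(viewer_distance / finest_extent))
```

Rendering each ring at its own resolution is what lets billion-vertex terrains stay above 30 frames per second: distant geometry contributes exponentially fewer vertices per unit area.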
Patterned Video Sensors For Low Vision
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1996-01-01
Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns have been proposed to compensate partly for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.
Visualization of information with an established order
Wong, Pak Chung [Richland, WA]; Foote, Harlan P. [Richmond, WA]; Thomas, James J. [Richland, WA]; Wong, Kwong-Kwok [Sugar Land, TX]
2007-02-13
Among the embodiments of the present invention is a system including one or more processors operable to access data representative of a biopolymer sequence of monomer units. The one or more processors are further operable to establish a pattern corresponding to at least one fractal curve and generate one or more output signals corresponding to a number of image elements each representative of one of the monomer units. Also included is a display device responsive to the one or more output signals to visualize the biopolymer sequence by displaying the image elements in accordance with the pattern.
A novel shape-changing haptic table-top display
NASA Astrophysics Data System (ADS)
Wang, Jiabin; Zhao, Lu; Liu, Yue; Wang, Yongtian; Cai, Yi
2018-01-01
A shape-changing table-top display with haptic feedback allows its users to perceive 3D visual and texture displays interactively. Since few existing devices are developed as accurate displays with regulatory haptic feedback, a novel attentive and immersive shape-changing mechanical interface (SCMI), consisting of an image processing unit and a transformation unit, is proposed in this paper. To support a precise 3D table-top display with an offset of less than 2 mm, a custom-made mechanism was developed to form a precise surface and regulate the feedback force. The proposed image processing unit is capable of extracting texture data from a 2D picture for rendering a shape-changing surface and realizing 3D modeling. A preliminary evaluation demonstrated the feasibility of the proposed system.
2009-12-01
[Only fragments of this abstract survived extraction.] Recoverable content: an abbreviation list (FLIR, forward-looking infrared; FOV, field-of-view; HDU, helmet display unit; HMD, helmet-mounted display; IHADSS, Integrated Helmet and Display Sighting System) and subject terms indicating a study of whether the monocular IHADSS helmet-mounted display in the British Army's Apache AH Mk 1 attack helicopter has any effect on visual performance.
NASA Technical Reports Server (NTRS)
Ferris, Alice T.; White, William C.
1988-01-01
The balance dynamic display unit (BDDU) is a compact system that conditions six dynamic analog signals so that they can be monitored simultaneously in real time on a single-trace oscilloscope. A typical BDDU oscilloscope display in scan mode shows each channel occupying one-sixth of the total trace. The system features two display modes usable with a conventional, single-channel oscilloscope: a multiplexed six-channel "bar-graph" format and a single-channel display. A two-stage visual and audible limit alarm is provided for each channel.
Helland, Magne; Horgen, Gunnar; Kvikstad, Tor Martin; Garthus, Tore; Aarås, Arne
2008-01-01
This study investigated the effect of moving from single-occupancy offices to a landscape environment. Thirty-two visual display unit (VDU) operators reported no significant change in visual discomfort. Lighting conditions and glare reported subjectively showed no significant correlation with visual discomfort. Experience of pain was found to reduce subjectively rated work capacity during VDU tasks. The correlation between visual discomfort and reduced work capacity for single-occupancy offices was rs=.88 (p=.000) and for office landscape rs=.82 (p=.000). Eye blink rate during habitual VDU work was recorded for 12 operators randomly selected from the 32 participants in the office landscape. A marked drop in eye blink rate during VDU work was found compared to eye blink rate during easy conversation. There were no significant changes in pain intensity in the neck, shoulder, forearm, wrist/hand, back or headache (.24
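The rs values reported above are Spearman rank correlations. A minimal sketch of how such a coefficient can be computed (Pearson correlation applied to tie-averaged ranks; our own implementation for illustration, not the study's software):

```python
def average_ranks(xs):
    """Ranks of xs (1-based), with tied values sharing their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Values near rs = .88, as reported for visual discomfort versus reduced work capacity, indicate a strong monotone association between the two ratings.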
What are the underlying units of perceived animacy? Chasing detection is intrinsically object-based.
van Buren, Benjamin; Gao, Tao; Scholl, Brian J
2017-10-01
One of the most foundational questions that can be asked about any visual process is the nature of the underlying 'units' over which it operates (e.g., features, objects, or spatial regions). Here we address this question-for the first time, to our knowledge-in the context of the perception of animacy. Even simple geometric shapes appear animate when they move in certain ways. Do such percepts arise whenever any visual feature moves appropriately, or do they require that the relevant features first be individuated as discrete objects? Observers viewed displays in which one disc (the "wolf") chased another (the "sheep") among several moving distractor discs. Critically, two pairs of discs were also connected by visible lines. In the Unconnected condition, both lines connected pairs of distractors; but in the Connected condition, one connected the wolf to a distractor, and the other connected the sheep to a different distractor. Observers in the Connected condition were much less likely to describe such displays using mental state terms. Furthermore, signal detection analyses were used to explore the objective ability to discriminate chasing displays from inanimate control displays in which the wolf moved toward the sheep's mirror-image. Chasing detection was severely impaired on Connected trials: observers could readily detect an object chasing another object, but not a line-end chasing another line-end, a line-end chasing an object, or an object chasing a line-end. We conclude that the underlying units of perceived animacy are discrete visual objects.
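The signal detection analyses mentioned above reduce to a sensitivity index d′ computed from hit and false-alarm rates. A hedged sketch (the log-linear correction shown is one common convention for avoiding infinite z-scores, not necessarily the one the authors used):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count, 1 to each total)
    keeps rates away from 0 and 1, where z would be infinite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

On this measure, the severe impairment on Connected trials would appear as d′ values near zero, against clearly positive d′ for ordinary object-chasing displays.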
Chen, Yue; Gao, Qin; Song, Fei; Li, Zhizhong; Wang, Yufan
2017-08-01
In the main control rooms of nuclear power plants, operators frequently have to switch between procedure displays and system information displays. In this study, we proposed an operation-unit-based integrated design, which combines the two displays to facilitate the synthesis of information. We grouped actions that complete a single goal into operation units and showed these operation units on the displays of system states. In addition, we used different levels of visual salience to highlight the current unit and provided a list of execution history records. A laboratory experiment, with 42 students performing a simulated procedure to deal with unexpected high pressuriser level, was conducted to compare this design against an action-based integrated design and the existing separated-displays design. The results indicate that our operation-unit-based integrated design yields the best performance in terms of time and completion rate and helped more participants to detect unexpected system failures. Practitioner Summary: In current nuclear control rooms, operators frequently have to switch between procedure and system information displays. We developed an integrated design that incorporates procedure information into system displays. A laboratory study showed that the proposed design significantly improved participants' performance and increased the probability of detecting unexpected system failures.
NASA Astrophysics Data System (ADS)
Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar
2002-05-01
Two main concepts of head-mounted displays (HMDs) for augmented reality (AR) visualization exist: the optical and the video see-through type. Several research groups have pursued both approaches for utilizing HMDs in computer-aided surgery. While the hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate are enormous, the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work in displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head-mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR, we used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was made by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual-head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer-aided surgery (CAS) system over a TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems: the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.
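The paper states only that the navigation system and the display control unit communicate over TCP/IP; the wire format is not specified. As an illustration only, tracked poses could be streamed as newline-delimited JSON records over an existing TCP connection:

```python
import json
import socket

def send_pose(sock, pose):
    """Stream one tracked position to the display control unit.

    The format here (newline-delimited JSON over TCP) is an
    illustrative assumption, not the interface defined in the paper.
    """
    x, y, z = pose
    record = json.dumps({"x": x, "y": y, "z": z}) + "\n"
    sock.sendall(record.encode("utf-8"))
```

Newline-delimited framing keeps the receiver simple: the control unit can read line by line and re-render the augmentation as each pose arrives, which matters when the tracker delivers 10 to 30 updates per second.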
Tactical Mission Command (TMC)
2016-03-01
…capabilities to Army commanders and their staffs, consisting primarily of a user-customizable Common Operating Picture (COP) enabled with real-time… COP viewer and data management capability. It is a collaborative visualization and planning application that also provides a common map display… COP: display the COP consisting of the following: friendly forces determined by the commander, including subordinate and supporting units…
Helland, Magne; Horgen, Gunnar; Kvikstad, Tor Martin; Garthus, Tore; Aarås, Arne
2011-11-01
This study investigated the effect of moving from small offices to a landscape environment for 19 visual display unit (VDU) operators at Alcatel Denmark AS. The operators reported significantly improved lighting conditions and glare situation. Further, visual discomfort was significantly reduced on a Visual Analogue Scale (VAS). There was no significant correlation between lighting conditions and visual discomfort, either in the small offices or in the office landscape. However, visual discomfort correlated significantly with glare in the small offices, i.e., more glare was related to more visual discomfort. This correlation disappeared after the lighting system in the office landscape had been improved. There was also a significant correlation between glare and itching of the eyes, as well as blurred vision, in the small offices, i.e., more glare, more visual symptoms. Experience of pain was found to reduce the subjective assessment of work capacity during VDU tasks. There was a significant correlation between visual discomfort and reduced work capacity in the small offices and in the office landscape. When moving from the small offices to the office landscape, there was a significant reduction in headache as well as back pain. No significant changes in pain intensity in the neck, shoulder, forearm, and wrist/hand were observed. The pain levels in different body areas were significantly correlated with the subjective assessment of reduced work capacity in the small offices and in the office landscape. With careful design and construction of an office landscape with regard to lighting and visual conditions, transfer from small offices may be acceptable from a visual-ergonomic point of view. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Virtual Reality Used to Serve the Glenn Engineering Community
NASA Technical Reports Server (NTRS)
Carney, Dorothy V.
2001-01-01
There are a variety of innovative new visualization tools available to scientists and engineers for the display and analysis of their models. At the NASA Glenn Research Center, we have an ImmersaDesk, a large, single-panel, semi-immersive display device. This versatile unit can interactively display three-dimensional images in visual stereo. Our challenge is to make this virtual reality platform accessible and useful to researchers. An example of a successful application of this computer technology is the display of blade out simulations. NASA Glenn structural dynamicists, Dr. Kelly Carney and Dr. Charles Lawrence, funded by the Ultra Safe Propulsion Project under Base R&T, are researching blade outs, when turbine engines lose a fan blade during operation. Key objectives of this research include minimizing danger to the aircraft via effective blade containment, predicting destructive loads due to the imbalance following a blade loss, and identifying safe, cost-effective designs and materials for future engines.
Human factors of intelligent computer aided display design
NASA Technical Reports Server (NTRS)
Hunt, R. M.
1985-01-01
Design concepts for a decision support system being studied at NASA Langley as an aid to visual display unit (VDU) designers are described. Ideally, human factors should be taken into account by VDU designers. In reality, although the human factors database on VDUs is small, such systems must be developed continually, so human factors become a secondary consideration. An expert system would thus serve mainly in an advisory capacity. Its functions can include facilitating the design process by shortening the time needed to generate and alter drawings, enhancing the capability of breaking design requirements down into simpler functions, and providing visual displays equivalent to the final product. The VDU system could also discriminate, and display the difference, between designer decisions and machine inferences. The system could further aid in analyzing the effects of designer choices on future options and in indicating when data are available on a design selection.
Study of a direct visualization display tool for space applications
NASA Astrophysics Data System (ADS)
Pereira do Carmo, J.; Gordo, P. R.; Martins, M.; Rodrigues, F.; Teodoro, P.
2017-11-01
The study of a Direct Visualization Display Tool (DVDT) for space applications is reported. The review of novel technologies for a compact display tool is described. Several applications for this tool have been identified with the support of ESA astronauts and are presented. A baseline design is proposed. It consists mainly of OLEDs as the image source; a specially designed optical prism as relay optics; a Personal Digital Assistant (PDA) with a data acquisition card as the control unit; and voice control and a simplified keyboard as interfaces. Optical analysis and the final estimated performance are reported. The system is able to display information (text, pictures and/or video) at SVGA resolution directly to the astronaut over a field of view (FOV) of 20 x 14.5 degrees. The image delivery system is a monocular head-mounted display (HMD) that weighs less than 100 g. The HMD optical system has an eye pupil of 7 mm and an eye relief distance of 30 mm.
An evaluation of the ELT-8 hematology analyzer.
Raik, E; McPherson, J; Barton, L; Hewitt, B S; Powell, E G; Gordon, S
1982-04-01
The ELT-8 Hematology Analyzer is a fully automated cell counter which utilizes laser light scattering and hydrodynamic focusing to provide an 8-parameter whole blood count. The instrument consists of a sample handler with ticket printer and a data handler with visual display unit. It accepts 100-microliter samples of venous or capillary blood and prints the values for WCC, RCC, Hb, Hct, MCV, MCH, MCHC and platelet count onto a standard result card. All operational and quality control functions, including graphic display of relative cell size distribution, can be obtained from the visual display unit and can also be printed as a permanent record if required. In a limited evaluation of the ELT-8, precision, linearity, accuracy, lack of sample carry-over and user acceptance were excellent. Reproducible values were obtained for all parameters after overnight storage of samples. Reagent usage and running costs were lower than for the Coulter S and the Coulter S Plus. The ease of processing capillary samples was considered a major advantage. The histograms served to alert the operator to a number of abnormalities, some of which were clinically significant.
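The derived red-cell indices the analyzer prints (MCV, MCH, MCHC) are standard arithmetic combinations of the measured RCC, Hb and Hct. A sketch of those textbook formulas (unit conventions and the function name are ours, not the instrument's):

```python
def red_cell_indices(rcc, hb, hct):
    """Derived red-cell indices from measured values.

    rcc: red cell count in 10^12/L, hb: haemoglobin in g/dL,
    hct: haematocrit as a fraction (e.g. 0.45).
    Returns (MCV in fL, MCH in pg, MCHC in g/dL).
    """
    mcv = hct / rcc * 1000   # mean cell volume
    mch = hb / rcc * 10      # mean cell haemoglobin
    mchc = hb / hct          # mean cell haemoglobin concentration
    return mcv, mch, mchc
```

For typical adult values (RCC 5.0, Hb 15.0 g/dL, Hct 0.45) these give roughly 90 fL, 30 pg and 33 g/dL, all within the usual reference intervals.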
NASA Astrophysics Data System (ADS)
Yang, Le; Sang, Xinzhu; Yu, Xunbo; Liu, Boyang; Liu, Li; Yang, Shenwu; Yan, Binbin; Du, Jingyan; Gao, Chao
2018-05-01
A 54-inch horizontal-parallax-only light-field display based on a light-emitting diode (LED) panel and a micro-pinhole unit array (MPUA) is demonstrated. Normally, the perceived 3D effect of a three-dimensional (3D) display, with smooth motion parallax and abundant light-field information, can be enhanced by increasing the density of viewpoints. However, the density of viewpoints is inversely proportional to the spatial display resolution in conventional integral imaging. Here, a special MPUA is designed and fabricated, and the displayed 3D scene constructed by the proposed horizontal light-field display is presented. Compared with conventional integral imaging, both the density of horizontal viewpoints and the spatial display resolution are significantly improved. In the experiment, a 54-inch horizontal light-field display with a 42.8° viewing angle, based on an LED panel with a resolution of 1280 × 720 and the MPUA, is realized, providing a natural, high-quality 3D visual effect to observers.
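The resolution trade-off described above, in which a conventional integral-imaging panel shares its pixels among all viewpoints, can be made concrete with a one-line calculation (the 32-viewpoint figure is illustrative only; the paper does not state its viewpoint count):

```python
def per_view_resolution(panel_px, viewpoints):
    """Pixels left to each view in conventional integral imaging,
    where the panel's pixels are divided among all viewpoints."""
    return panel_px // viewpoints

# e.g. a 1280-pixel-wide panel split among 32 horizontal viewpoints
# leaves each view only 40 pixels of horizontal resolution
```

This steep loss of per-view resolution is the motivation for the MPUA design, which improves viewpoint density and spatial resolution together rather than trading one for the other.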
Nishimura, Akio; Yokosawa, Kazuhiko
2012-01-01
Tlauka and McKenna (2000) reported a reversal of the traditional stimulus-response compatibility (SRC) effect (faster responding to a stimulus presented on the same side than to one on the opposite side) when the stimulus appearing on one side of a display is a member of a superordinate unit that is largely on the opposite side. We investigated the effects of a visual cue that explicitly shows a superordinate unit, and of the assignment of multiple stimuli within each superordinate unit to one response, on the SRC effect based on superordinate unit position. Three experiments revealed that stimulus-response assignment is critical, while the visual cue plays a minor role, in eliciting the SRC effect based on superordinate unit position. The findings suggest bidirectional interaction between perception and action and simultaneous spatial stimulus coding according to multiple frames of reference, with the contribution of each coding to the SRC effect varying flexibly with task situations.
[Review of visual display system in flight simulator].
Xie, Guang-hui; Wei, Shao-ning
2003-06-01
Visual display systems are a key part of flight simulators and flight training devices and play a very important role in them. The developing history of visual display systems is recalled, and the principles and characteristics of some visual display systems, including collimated display systems and back-projected collimated display systems, are described. Future directions of visual display systems are analyzed.
3D display considerations for rugged airborne environments
NASA Astrophysics Data System (ADS)
Barnidge, Tracy J.; Tchon, Joseph L.
2015-05-01
The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.
NASA Astrophysics Data System (ADS)
Gomes, Gary G.
1986-05-01
A cost-effective and supportable color visual system has been developed to provide the necessary visual cues to United States Air Force B-52 bomber pilots training to become proficient at the task of in-flight refueling. This camera-model visual system approach is not suitable for all simulation applications, but it provides a cost-effective alternative to digital image generation systems when high fidelity of a single movable object is required. The system consists of a three-axis gimballed KC-135 tanker model, a range-carriage-mounted color-augmented monochrome television camera, interface electronics, a color light-valve projector, and an infinity optics display system.
Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.
Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene
2016-01-01
To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor with joystick-enabled fly-through, with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.
Secondary visual workload capability with primary visual and kinesthetic-tactual displays
NASA Technical Reports Server (NTRS)
Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.
1978-01-01
Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.
A comparison of visual and kinesthetic-tactual displays for compensatory tracking
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.
1983-01-01
Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under certain conditions it can be an effective alternative or supplement to visual displays. To better understand how KT tracking compares with visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. In the critical tracking task, the visual displays were superior; however, the quickened KT display was approximately equal to the unquickened visual display. In the stationary tracking tasks, subjects adopted lag equalization with the quickened KT and visual displays, and mean-squared error scores were approximately equal. With the unquickened displays, subjects adopted lag-lead equalization, and the visual displays were superior. This superiority was partly due to the servomotor lag in the implementation of the KT display and partly due to modality differences.
Monocular display unit for 3D display with correct depth perception
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Hosomi, Takashi
2009-11-01
The study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems come in two types: those using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images: a conventional display can show only one screen, and its area cannot be enlarged, for example to twice the size. To enlarge the display area, the authors have developed an enlarging method using a mirror. This extension method enables observers to see the virtual image plane and doubles the screen area. In the developed display unit, we made use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area; meanwhile, a 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth.
McIDAS-V: Advanced Visualization for 3D Remote Sensing Data
NASA Astrophysics Data System (ADS)
Rink, T.; Achtor, T. H.
2010-12-01
McIDAS-V is a Java-based, open-source, freely available software package for the analysis and visualization of geophysical data. Its advanced capabilities provide highly interactive 4D displays, including 3D volumetric rendering and fast sub-manifold slicing, linked to an abstract mathematical data model with built-in metadata for units, coordinate system transforms and sampling topology. A Jython interface provides user-defined analysis and computation in terms of the internal data model. These powerful capabilities to integrate data, analysis and visualization are being applied to hyperspectral sounding retrievals, e.g., AIRS and IASI, of moisture and cloud density, both to interrogate and analyze their 3D structure and to validate them against instruments such as CALIPSO, CloudSat and MODIS. The object-oriented framework design allows for specialized extensions for novel displays and new sources of data. Community-defined CF conventions for gridded data are understood by the software, so such data can be imported directly into the application. This presentation will show examples of how McIDAS-V is used in three-dimensional data analysis, display and evaluation.
Auditory, visual, and bimodal data link displays and how they support pilot performance.
Steelman, Kelly S; Talleur, Donald; Carbonari, Ronald; Yamani, Yusuke; Nunes, Ashley; McCarley, Jason S
2013-06-01
The design of data link messaging systems to ensure optimal pilot performance requires empirical guidance. The current study examined the effects of display format (auditory, visual, or bimodal) and visual display position (adjacent to the instrument panel or mounted on the console) on pilot performance. Subjects performed five 20-min simulated single-pilot flights. During each flight, subjects received messages from a simulated air traffic controller. Messages were delivered visually, auditorily, or bimodally. Subjects were asked to read back each message aloud and then perform the instructed maneuver. Visual and bimodal displays engendered lower subjective workload and better altitude tracking than auditory displays. Readback times were shorter with the two unimodal visual formats than with any of the other three formats. Advantages for the unimodal visual format ranged in size from 2.8 s to 3.8 s relative to the bimodal upper-left and auditory formats, respectively. Auditory displays allowed slightly more head-up time (3 to 3.5 seconds per minute) than either visual or bimodal displays. Position of the visual display had only modest effects on any measure. Combined with the results from previous studies by Helleberg and Wickens and by Lancaster and Casali, the current data favor visual and bimodal displays over auditory displays; unimodal auditory displays were favored by only one measure, head-up time, and only very modestly. The data evinced no statistically significant effects of visual display position on performance, suggesting that, contrary to expectations, the placement of a visual data link display may be of relatively little consequence to performance.
Modeling and Simulation: A Rationale for Implementing New Training Technologies.
ERIC Educational Resources Information Center
Mattoon, Joseph S.
1996-01-01
Describes and advocates various technologies which have modernized the Air Force's Specialized Undergraduate Pilot Training (SUPT). After outlining some theoretical background, the article provides details on student proficiency profiles, the portable electronic trainer, the unit training device, specialized visual and acoustic displays, and the…
A comparison of tracking with visual and kinesthetic-tactual displays
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.
1981-01-01
Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under appropriate conditions it may be an effective means of providing visual workload relief. In order to better understand how KT tracking differs from visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. On the critical tracking task, the visual displays were superior; however, the KT quickened display was approximately equal to the visual unquickened display. Mean squared error scores in the stationary tracking tasks for the visual and KT displays were approximately equal in the quickened conditions, and the describing functions were very similar. In the unquickened conditions, the visual display was superior. Subjects using the unquickened KT display exhibited a low frequency lead-lag that may be related to sensory adaptation.
Laser Optometric Assessment Of Visual Display Viewability
NASA Astrophysics Data System (ADS)
Murch, Gerald M.
1983-08-01
Through the technique of laser optometry, measurements of a display user's visual accommodation and binocular convergence were used to assess the visual impact of display color, technology, contrast, and work time. The studies reported here indicate the potential of visual-function measurements as an objective means of improving the design of visual displays.
Technical note: real-time web-based wireless visual guidance system for radiotherapy.
Lee, Danny; Kim, Siyong; Palta, Jatinder R; Kim, Taeho
2017-06-01
The objective was to describe a Web-based wireless visual guidance system that mitigates issues associated with hard-wired, audio-visual-aided patient-interactive motion management systems, which are cumbersome to use in routine clinical practice. The Web-based wireless visual display duplicates the existing visual display of a respiratory-motion management system for visual guidance. The display of the existing system is sent to legacy Web clients over a private wireless network, thereby allowing a wireless setting for real-time visual guidance. In this study, an active breathing coordinator (ABC) trace was used as the input for the visual display, which was captured and transmitted to Web clients. Virtual reality goggles require two images (left- and right-eye views) for visual display. We investigated the performance of Web-based wireless visual guidance by quantifying (1) the network latency of visual displays between an ABC computer display and the Web clients of a laptop, an iPad mini 2, and an iPhone 6, and (2) the frame rate of the visual display on the Web clients in frames per second (fps). The network latency of the visual display between the ABC computer and the Web clients was about 100 ms, and the frame rate was 14.0 fps (laptop), 9.2 fps (iPad mini 2), and 11.2 fps (iPhone 6). In addition, the visual display for virtual reality goggles was successfully shown on the iPhone 6 with 100-ms latency at 11.2 fps. High network security was maintained by utilizing the private network configuration. This study demonstrated that Web-based wireless visual guidance can be a promising technique for clinical motion management systems that require real-time visual display of their outputs. Based on the results of this study, our approach has the potential to reduce clutter associated with wired systems, reduce space requirements, and extend the use of medical devices from static usage to interactive and dynamic usage in a radiotherapy treatment vault.
NASA Technical Reports Server (NTRS)
1990-01-01
SPATE 9000 Dynamic Stress Analyzer takes its name from the acronym for Stress Pattern Analysis by Thermal Emission. It detects stress-induced temperature changes in a structure and indicates the degree of stress. Ometron, Inc.'s SPATE 9000 consists of a scan unit and a data display. The scan unit contains an infrared channel focused on the test structure to collect thermal radiation, and a visual channel used to set up the scan area and interrogate the stress display. Stress data are produced by detecting minute temperature changes, down to one-thousandth of a degree Centigrade, resulting from the application of dynamic loading to the structure. The electronic data processing system correlates the temperature changes with a reference signal to determine the stress level.
3D Visualization of an Invariant Display Strategy for Hyperspectral Imagery
2002-12-01
This thesis demonstrates how Principal Component Analysis (PCA) is used to rotate the data into a coordinate space that can be used to display the data. Because the radiation band is the natural unit of data organization, the BSQ (band-sequential) format is also easy to implement; Figure 2.5 of the thesis shows how a scene is originally sensed.
NASA Astrophysics Data System (ADS)
Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.
2005-03-01
We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.
The cognitive science of visual-spatial displays: implications for design.
Hegarty, Mary
2011-07-01
This paper reviews cognitive science perspectives on the design of visual-spatial displays and introduces the other papers in this topic. It begins by classifying different types of visual-spatial displays, followed by a discussion of ways in which visual-spatial displays augment cognition and an overview of the perceptual and cognitive processes involved in using displays. The paper then argues for the importance of cognitive science methods to the design of visual displays and reviews some of the main principles of display design that have emerged from these approaches to date. Cognitive scientists have had good success in characterizing the performance of well-defined tasks with relatively simple visual displays, but many challenges remain in understanding the use of complex displays for ill-defined tasks. Current research exemplified by the papers in this topic extends empirical approaches to new displays and domains, informs the development of general principles of graphic design, and addresses current challenges in display design raised by the recent explosion in availability of complex data sets and new technologies for visualizing and interacting with these data. Copyright © 2011 Cognitive Science Society, Inc.
Technique for improving solid state mosaic images
NASA Technical Reports Server (NTRS)
Saboe, J. M.
1969-01-01
Method identifies and corrects mosaic image faults in solid state visual displays and opto-electronic presentation systems. Composite video signals containing faults due to defective sensing elements are corrected by a memory unit that contains the stored fault pattern and supplies the appropriate fault word to the blanking circuit.
Automatic speech recognition in air-ground data link
NASA Technical Reports Server (NTRS)
Armstrong, Herbert B.
1989-01-01
In the present air traffic system, information presented to the transport aircraft cockpit crew may originate from a variety of sources and may be presented to the crew in visual or aural form, either through cockpit instrument displays or, most often, through voice communication. Voice radio communications are the most error-prone method of air-ground data link: voice messages can be misstated or misunderstood, and radio frequency congestion can delay or obscure important messages. To prevent a proliferation of separate displays, a multiplexed data link display can be designed to present information from multiple data link sources on a shared cockpit display unit (CDU), a multi-function display (MFD), or some future combination of flight management and data link information. An aural data link which incorporates an automatic speech recognition (ASR) system for crew response offers several advantages over visual displays. The possibility of applying ASR to the air-ground data link was investigated. The first step was to review current efforts in ASR applications in the cockpit and in air traffic control and to evaluate their possible data link application. Next, a series of preliminary research questions is to be developed for possible future collaboration.
Hofer, Jeffrey D; Rauk, Adam P
2017-02-01
The purpose of this work was to develop a straightforward and robust approach to analyze and summarize the ability of content uniformity data to meet different criteria. A robust Bayesian statistical analysis methodology is presented which provides a concise and easily interpretable visual summary of the content uniformity analysis results. The visualization displays individual batch analysis results and shows whether there is high confidence that different content uniformity criteria could be met a high percentage of the time in the future. The 3 tests assessed are as follows: (a) United States Pharmacopeia Uniformity of Dosage Units <905>, (b) a specific ASTM E2810 Sampling Plan 1 criterion to potentially be used for routine release testing, and (c) another specific ASTM E2810 Sampling Plan 2 criterion to potentially be used for process validation. The approach shown here could readily be used to create similar result summaries for other potential criteria. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
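The first criterion assessed above, USP Uniformity of Dosage Units <905>, has a simple deterministic core that can be sketched in a few lines. This is a minimal illustration of the harmonized Stage 1 rule (n = 10 units, k = 2.4, pass if the acceptance value AV does not exceed L1 = 15, assuming a target of 100% of label claim); the function name and the example batch are hypothetical, and this is not the authors' Bayesian methodology:

```python
import statistics

def usp905_acceptance_value(results, k=2.4):
    """USP <905> Stage 1 acceptance value (AV) for content uniformity
    results expressed as percent of label claim, assuming target T = 100%.
    Stage 1 uses n = 10 units and k = 2.4; the batch passes if AV <= 15.0 (L1).
    """
    xbar = statistics.mean(results)
    s = statistics.stdev(results)  # sample standard deviation
    # Reference value M depends on where the sample mean falls:
    if 98.5 <= xbar <= 101.5:
        m = xbar        # AV reduces to k*s
    elif xbar < 98.5:
        m = 98.5
    else:
        m = 101.5
    return abs(m - xbar) + k * s

# Hypothetical batch of 10 assay results (% of label claim):
batch = [99.1, 100.4, 98.7, 101.2, 99.8, 100.9, 99.5, 100.1, 98.9, 100.6]
av = usp905_acceptance_value(batch)
```

A Bayesian analysis such as the one described would predict how often future batches meet this deterministic criterion, rather than evaluate a single batch.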
Pairwise comparisons and visual perceptions of equal area polygons.
Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R
2009-02-01
Studies related to visual perception have been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error when respondents used the unit square was 25.75%. The error decreased substantially, to 5.51%, when the shapes were compared to one another in pairs. This gain of 20.24 percentage points in the two-dimensional experiment was substantially better than the 11.78-point gain reported in previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.
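One standard way to turn pairwise ratio judgments into relative magnitudes is the row geometric-mean method. The sketch below is offered only as an illustration of the general technique, not necessarily the procedure used in the study, and the matrix values are a made-up, fully consistent example:

```python
import math

def weights_from_pairwise(M):
    """Recover relative magnitudes from a pairwise ratio matrix M,
    where M[i][j] is the judged ratio of item i to item j, using the
    row geometric-mean method; returns weights that sum to 1."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Three shapes judged pairwise; a fully consistent example in which
# the true areas are in the ratio 1 : 2 : 4.
M = [[1, 1/2, 1/4],
     [2, 1,   1/2],
     [4, 2,   1  ]]
weights = weights_from_pairwise(M)  # [1/7, 2/7, 4/7]
```

With real respondents the judgments are inconsistent, and aggregating many redundant pairwise ratios is what drives down the estimation error reported above.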
The behavioral context of visual displays in common marmosets (Callithrix jacchus).
de Boer, Raïssa A; Overduin-de Vries, Anne M; Louwerse, Annet L; Sterck, Elisabeth H M
2013-11-01
Communication is important in social species, and may occur with the use of visual, olfactory or auditory signals. However, visual communication may be hampered in species that are arboreal, have elaborate facial coloring, and live in small groups. The common marmoset fits these criteria and may have limited visual communication. Nonetheless, some (contradictory) propositions concerning visual displays in the common marmoset have been made, yet quantitative data are lacking. The aim of this study was to assign a behavioral context to different visual displays using pre-post-event analyses. Focal observations were conducted on 16 captive adult and sub-adult marmosets in three different family groups. Based on behavioral elements with an unambiguous meaning, four different behavioral contexts were distinguished: aggression, fear, affiliation, and play behavior. Visual displays concerned behavior that included facial expressions, body postures, and pilo-erection of the fur. Visual displays related to aggression, fear, and play/affiliation were consistent with the literature. We propose that the visual display "pilo-erection tip of tail" is related to fear. Individuals receiving these fear signals showed a higher rate of affiliative behavior. This study indicates that several visual displays may provide cues or signals of particular social contexts. Since the three displays of fear elicited an affiliative response, they may communicate a request for anxiety reduction or signal an external referent. In conclusion, common marmosets, despite being arboreal and living in small groups, use several visual displays to communicate with conspecifics, and their facial coloration may not hamper, but actually promote, the visibility of visual displays. © 2013 Wiley Periodicals, Inc.
Collaborative visual analytics of radio surveys in the Big Data era
NASA Astrophysics Data System (ADS)
Vohl, Dany; Fluke, Christopher J.; Hassan, Amr H.; Barnes, David G.; Kilborn, Virginia A.
2017-06-01
Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered by a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, and parallel data query, along with data management. Its modularity makes it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as the CAVE2) and the classical desktop, preserving all traces of the work completed on either platform and allowing the research process to continue wherever you are.
Ellingson, Roger M; Oken, Barry
2010-01-01
This report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device and the design of an optical coupling device that converts the display to an electrical waveform recorded by the CAMAS base unit are presented. The optical sensor signal, synchronized to the visual stimulus, emulates the brain's synchronized EEG signal input to CAMAS normally reviewed for the evoked potential response. Most importantly, the PDA also sends a marker message over the wireless Bluetooth connection to the CAMAS base unit, synchronized to the visual stimulus, which is the critical averaging reference component needed to obtain VEP results. Results show that the variance in the latency of the wireless marker messaging link is consistent enough to support the generation and recording of visual evoked potentials. The averaged sensor waveforms at multiple CPU speeds are presented and demonstrate the suitability of the Bluetooth interface for portable ambulatory visual evoked potential implementation on our CAMAS platform.
Exploring the Boundary Conditions of the Redundancy Principle
ERIC Educational Resources Information Center
McCrudden, Matthew T.; Hushman, Carolyn J.; Marley, Scott C.
2014-01-01
This experiment investigated whether study of a scientific text and a visual display that contained redundant text segments would affect memory and transfer. The authors randomly assigned 42 students from a university in the southwestern United States in equal numbers to 1 of 2 conditions: (a) a redundant condition, in which participants studied a…
NASA Technical Reports Server (NTRS)
1972-01-01
The growth of common as well as emerging visual display technologies is surveyed. The major inference is that contemporary society is rapidly growing ever more reliant on visual displays for a variety of purposes. Because of its unique mission requirements, the National Aeronautics and Space Administration has contributed in an important and specific way to the growth of visual display technology. These contributions are characterized by the use of computer-driven visual displays to provide an enormous amount of information concisely, rapidly, and accurately.
Krapp, M; Ludwig, A; Axt-Fliedner, R; Kreiselmaier, P
2011-08-01
The objective of this study was to evaluate which cardiac planes and malformations can be visualized by first trimester fetal echocardiography during the daily routine in a prenatal medicine unit. From October 2007 to June 2009, all fetuses with a crown-rump length between 45 and 84 mm were included in the study. The fetal echocardiographic examinations were carried out by one examiner. The entire examination including fetal echocardiography was completed within a time interval of 30 minutes. When possible, the abdominal plane, 4-chamber view (CV), pulmonary veins, left ventricular outflow tract, 3-vessel view (3-VV) and the aortic arch were visualized by color Doppler and/or power Doppler sonography. 690 fetuses were enrolled in the retrospective study. The abdominal plane, 4-CV, pulmonary veins, left ventricular outflow tract, 3-VV and the aortic arch were visualized in 99%, 96%, 23%, 97%, 98% and 72% of cases, respectively. During the study interval, 17 cardiac malformations were diagnosed. Outcome data were obtained for 92% of the normal fetuses. 5 cardiac anomalies were diagnosed beyond the first trimester. The standard planes of fetal echocardiography can be displayed in the first trimester in the clinical routine. Pulmonary veins can be visualized in almost a quarter of cases. First trimester congenital heart diseases are strongly associated with chromosomal abnormalities. © Georg Thieme Verlag KG Stuttgart · New York.
Abdolell, Mohamed; Tsuruda, Kaitlyn; Lightfoot, Christopher B; Barkova, Eva; McQuaid, Melanie; Caines, Judy; Iles, Sian E
2016-01-01
Discussions of percent breast density (PD) and breast cancer risk implicitly assume that visual assessments of PD are comparable between vendors despite differences in technology and display algorithms. This study examines the extent to which visual assessments of PD differ between mammograms acquired from two vendors. Pairs of "for presentation" digital mammography images were obtained from two mammography units for 146 women who had a screening mammogram on one vendor unit followed by a diagnostic mammogram on a different vendor unit. Four radiologists independently visually assessed PD from single left mediolateral oblique view images from the two vendors. Analysis of variance, intra-class correlation coefficients (ICC), scatter plots, and Bland-Altman plots were used to evaluate PD assessments between vendors. The mean radiologist PD for each image was used as a consensus PD measure. Overall agreement of the PD assessments was excellent between the two vendors with an ICC of 0.95 (95% confidence interval: 0.93 to 0.97). Bland-Altman plots demonstrated narrow upper and lower limits of agreement between the vendors with only a small bias (2.3 percentage points). The results of this study support the assumption that visual assessment of PD is consistent across mammography vendors despite vendor-specific appearances of "for presentation" images.
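The Bland-Altman summary used in studies like the one above has a simple arithmetic core: the bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 standard deviations of those differences. A minimal sketch, with made-up PD values for illustration (not the study's data):

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two sets
    of paired measurements (e.g., PD readings from two vendors)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical percent-density readings for the same six women:
vendor_a = [12.0, 25.5, 40.0, 18.5, 33.0, 51.5]
vendor_b = [10.5, 23.0, 38.5, 16.0, 30.0, 49.5]
bias, (lo, hi) = bland_altman(vendor_a, vendor_b)
```

A small bias with narrow limits, as reported in the study (2.3 percentage points), indicates that the two vendors' images yield essentially interchangeable visual PD assessments.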
Ang, Cheah Kiok; Mohidin, Norhani; Chung, Kah Meng
2014-09-01
The wink glass (WG), an invention to stimulate blinking at intervals of 5 s, was designed to reduce dry eye symptoms during visual display unit (VDU) use. The objective of this study was to investigate the effect of the WG on visual functions, including blink rate, ocular surface symptoms (OSS) and tear stability, during VDU use. A total of 26 young and asymptomatic subjects were instructed to read articles in the Malay language on a computer for 20 min with the WG, during which their blink rate, pre- and post-task tear break-up time, and OSS were recorded. The results were compared to another reading session in which the subjects wore a transparent plastic sheet as a control. Non-invasive tear break-up time was reduced after the reading session with the transparent plastic sheet (pre-task = 5.97 s, post-task = 5.14 s, z = -2.426, p = 0.015, Wilcoxon), but remained stable (pre-task = 5.62 s, post-task = 5.35 s, z = -0.67, p = 0.501) during the reading session with the WG. The blink rate recorded during the reading session with the plastic sheet was 9 blinks/min (median), and this increased to 15 blinks/min (z = -3.315, p = 0.001) with the WG. The reading task caused OSS (maximum score = 20), with a median score of 1 (0-8) reduced to a median score of 0 (0-3) after wearing the WG (z = -2.417, p = 0.016). The WG was found to increase post-task tear stability, increase blink rate, and reduce OSS during visual display unit use among young and healthy adults. Although it may be considered as an option to improve dry eye symptoms among VDU users, further studies are warranted to establish its stability and its effect on subjects with dry eyes.
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal through redundancy and levels of communication superior to single-mode interaction using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. The visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight required by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates that robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used here to deliver equivalents of the visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers and to measure the classification accuracy of visual signal interfaces, and provides an integration example involving two robotic platforms.
Visual display angles of conventional and a remotely piloted aircraft.
Kamine, Tovy Haber; Bendrick, Gregg A
2009-04-01
Instrument display separation and proximity are important human factors elements used in the design and grouping of aircraft instrument displays. To assess display proximity in practical operations, the viewing visual angles of various displays in several conventional aircraft and in a remotely piloted aircraft (RPA) were assessed. The horizontal and vertical instrument display visual angles from the pilot's eye position were measured in 12 different types of conventional aircraft and in the ground control station (GCS) of an RPA. A total of 18 categories of instrument display were measured and compared. In conventional aircraft, almost all of the vertical and horizontal visual display angles lay within a "cone of easy eye movement" (CEEM). Mission-critical instruments particular to specific aircraft types sometimes displaced less important instruments outside the CEEM. For the RPA, all horizontal visual angles lay within the CEEM, but most vertical visual angles lay outside this cone. Most instrument displays in conventional aircraft were consistent with display proximity principles, but several RPA displays lay outside the CEEM in the vertical plane. Awareness of this fact by RPA operators may be helpful in minimizing information access cost and in optimizing RPA operations.
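The visual angles discussed above follow from elementary trigonometry: a display of linear size s viewed at distance d subtends an angle of 2·arctan(s/2d). A small sketch, using a hypothetical instrument width and eye distance rather than measurements from the study:

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle (degrees) subtended by a display of a given size
    at a given viewing distance: theta = 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A hypothetical 0.15 m wide instrument viewed from 0.71 m
# (both values are illustrative assumptions, not study data):
angle = visual_angle_deg(0.15, 0.71)  # roughly 12 degrees
```

Comparing such angles against the boundary of the cone of easy eye movement is how one would check whether a given display placement falls inside or outside the CEEM.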
Guiding Principles for a Pediatric Neurology ICU (neuroPICU) Bedside Multimodal Monitor
Eldar, Yonina C.; Gopher, Daniel; Gottlieb, Amihai; Lammfromm, Rotem; Mangat, Halinder S; Peleg, Nimrod; Pon, Steven; Rozenberg, Igal; Schiff, Nicholas D; Stark, David E; Yan, Peter; Pratt, Hillel; Kosofsky, Barry E
2016-01-01
Background: Physicians caring for children with serious acute neurologic disease must process overwhelming amounts of physiological and medical information. Strategies to optimize real-time display of this information are understudied. Objectives: Our goal was to engage clinical and engineering experts to develop guiding principles for creating a pediatric neurology intensive care unit (neuroPICU) monitor that integrates and displays data from multiple sources in an intuitive and informative manner. Methods: To accomplish this goal, an international group of physicians and engineers communicated regularly for one year. We integrated findings from clinical observations, interviews, a survey, signal processing, and visualization exercises to develop a concept for a neuroPICU display. Results: Key conclusions from our efforts include: (1) A neuroPICU display should support (a) rapid review of retrospective time series (i.e., cardiac, pulmonary, and neurologic physiology data), (b) rapidly modifiable formats for viewing those data according to the specialty of the reviewer, and (c) communication of the degree of risk of clinical decline. (2) Specialized visualizations of physiologic parameters can highlight abnormalities in multivariable temporal data. Examples include 3-D stacked spider plots and color-coded time series plots. (3) Visual summaries of EEG with spectral tools (i.e., hemispheric asymmetry and median power) can highlight seizures via patient-specific "fingerprints." (4) Intuitive displays should emphasize subsets of physiology and processed EEG data to provide a rapid gestalt of the current status and medical stability of a patient. Conclusions: A well-designed neuroPICU display must present multiple datasets in dynamic, flexible, and informative views to accommodate clinicians from multiple disciplines in a variety of clinical scenarios. PMID:27437048
Zikmund-Fisher, Brian J; Scherer, Aaron M; Witteman, Holly O; Solomon, Jacob B; Exe, Nicole L; Fagerlin, Angela
2018-03-26
Patient-facing displays of laboratory test results typically provide patients with one reference point (the "standard range"). The objective was to test the effect of including an additional harm anchor reference point in visual displays of laboratory test results, indicating how far outside of the standard range values would need to be in order to suggest substantial patient risk. Using a demographically diverse, online sample, we compared the reactions of 1618 adults in the United States who viewed visual line displays that included both standard range and harm anchor reference points ("Many doctors are not concerned until here") to displays that included either (1) only a standard range, (2) standard range plus evaluative categories (eg, "borderline high"), or (3) a color gradient showing degree of deviation from the standard range. Providing the harm anchor reference point significantly reduced perceived urgency of close-to-normal alanine aminotransferase and creatinine results (P values <.001) but not generally for platelet count results. Notably, display type did not significantly alter perceptions of more extreme results in potentially harmful ranges. Harm anchors also substantially reduced the number of participants who wanted to contact their doctor urgently or go to the hospital about these test results. Presenting patients with evaluative cues regarding when test results become clinically concerning can reduce the perceived urgency of out-of-range results that do not require immediate clinical action. ©Brian J Zikmund-Fisher, Aaron M Scherer, Holly O Witteman, Jacob B Solomon, Nicole L Exe, Angela Fagerlin. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.03.2018.
Multi-modal information processing for visual workload relief
NASA Technical Reports Server (NTRS)
Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.
1980-01-01
The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.
Visual Acuity Using Head-fixed Displays During Passive Self and Surround Motion
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Black, F. Owen; Stallings, Valerie; Peters, Brian
2007-01-01
The ability to read head-fixed displays on various motion platforms requires the suppression of vestibulo-ocular reflexes. This study examined dynamic visual acuity while viewing a head-fixed display during different self and surround rotation conditions. Twelve healthy subjects were asked to report the orientation of Landolt C optotypes presented on a micro-display fixed to a rotating chair at a 50 cm distance. Acuity thresholds were determined as the smallest optotype size at which the subjects correctly identified 3 of 5 optotype orientations at peak velocity. Visual acuity was compared across four conditions, each tested at 0.05 and 0.4 Hz (peak amplitude of 57 deg/s): subject rotated in semi-darkness (i.e., limited to the background illumination of the display), subject stationary while the visual scene rotated, subject rotated around a stationary visual background, and both subject and visual scene rotated together. Visual acuity was best when the subject rotated around a stationary visual background, i.e., when both vestibular and visual inputs provided concordant information about the motion. Visual acuity was most reduced when the subject and visual scene rotated together, i.e., when the visual scene provided discordant information about the motion. A range of 4-5 logMAR step sizes across the conditions indicated that the acuity task was sufficient to discriminate visual performance levels. The background visual scene can influence the ability to read head-fixed displays during passive motion disturbances. Dynamic visual acuity using head-fixed displays can provide an operationally relevant screening tool for visual performance during exposure to novel acceleration environments.
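The 3-of-5 threshold rule described above can be sketched as follows; this is an illustrative reconstruction, and the logMAR sizes and trial data are hypothetical:

```python
# Hedged sketch of the acuity rule described in the abstract: the
# threshold is the smallest (best) optotype size at which at least
# 3 of 5 orientation judgments were correct.

def acuity_threshold(results):
    """results: dict mapping logMAR size -> list of 5 booleans
    (True = orientation correctly identified). Returns the smallest
    logMAR value meeting the 3-of-5 criterion, or None."""
    passed = [size for size, trials in results.items() if sum(trials) >= 3]
    return min(passed) if passed else None

# Hypothetical run across four optotype sizes:
trials = {
    0.4: [True] * 5,                          # 5/5 correct
    0.3: [True, True, True, False, True],     # 4/5 correct
    0.2: [True, False, True, False, True],    # 3/5 -> still passes
    0.1: [True, False, False, False, True],   # 2/5 -> fails
}
print(acuity_threshold(trials))  # → 0.2
```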
Accommodative performance for chromatic displays.
Lovasik, J V; Kergoat, H
1988-01-01
Over the past few years, video display units (VDUs) have been incorporated into many varieties of workplaces and occupational demands. The success of electro-optical displays in facilitating and improving job performance has spawned interest in extracting further advantage from VDUs by incorporating colour coding into such communication systems. However, concerns have been raised about the effect of chromatic stimuli on visual comfort and task efficiency, because of the chromatic aberration inherent in the optics of the human eye. In this study, we used a computer-aided laser speckle optometer system to measure the accommodative responses to brightness-matched chromatic letters displayed on a high-resolution RGB monitor. Twenty visually normal paid volunteers, aged 22-35 years, served as subjects. Stimuli were 14, 21, and 28 minutes of arc letters presented in a 'monochromatic' (white, red, green, or blue on a black background) or 'multichromatic' (blue-red, blue-green, or red-green foreground-background combinations) mode at 40 and 80 cm viewing distances. The results demonstrated that while the accommodative responses were strongly influenced by the foreground-background colour combination, the group-averaged dioptric difference across colours was relatively small. Further, accommodative responses were not guided in any systematic fashion by the size of the letters presented for fixation. Implications of these findings for display design are discussed.
Securing information display by use of visual cryptography.
Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo
2003-09-01
We propose a secure display technique based on visual cryptography. The proposed technique ensures the security of visual information. The display employs a decoding mask based on visual cryptography. Without the decoding mask, the displayed information cannot be viewed. The viewing zone is limited by the decoding mask so that only one person can view the information. We have developed a set of encryption codes to maintain the designed viewing zone and have demonstrated a display that provides a limited viewing zone.
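The abstract does not give the authors' display-specific encryption codes; as background only, a minimal (2, 2) visual cryptography scheme in the Naor-Shamir style can be sketched as follows (the function names and the row-of-bits representation are illustrative):

```python
import random

# Minimal (2, 2) visual cryptography sketch. Each secret pixel is
# expanded into two subpixels per share; 1 = opaque, 0 = transparent.
PATTERNS = [(0, 1), (1, 0)]  # the two complementary subpixel patterns

def encrypt(secret_bits):
    """Split a row of secret bits (1 = black) into two shares."""
    share1, share2 = [], []
    for bit in secret_bits:
        p = random.choice(PATTERNS)
        share1.extend(p)
        # White pixel: shares identical; black pixel: complementary.
        share2.extend(p if bit == 0 else (1 - p[0], 1 - p[1]))
    return share1, share2

def stack(share1, share2):
    """Overlaying transparencies acts as a pixelwise OR."""
    return [a | b for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = encrypt(secret)
stacked = stack(s1, s2)
# Black secret pixels yield two opaque subpixels; white ones yield
# one opaque and one transparent subpixel (half-intensity grey).
print([sum(stacked[i:i + 2]) for i in range(0, len(stacked), 2)])  # → [2, 1, 2, 2, 1]
```

Each share alone is a uniformly random pattern, which is what lets a physical decoding mask reveal the displayed information only when it is overlaid on the screen.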
Google Glass Glare: disability glare produced by a head-mounted visual display.
Longley, Chris; Whitaker, David
2016-03-01
Head-mounted displays are a type of wearable technology, a market that is projected to expand rapidly over the coming years. Probably the best-known example is the Google Glass device (or 'Glass'). Here we investigate the extent to which the device display can interfere with normal visual function by producing monocular disability glare. Contrast sensitivity was measured in two normally sighted participants, 32 and 52 years of age. Data were recorded for the right eye, the left eye, and then again in a binocular condition. Measurements were taken both with and without the Glass in place, across a range of stimulus luminance levels, using a two-alternative forced-choice methodology. The device produced a significant reduction in contrast sensitivity in the right eye (>0.5 log units). The level of disability glare increased as stimulus luminance was reduced, in a manner consistent with intraocular light scatter resulting in a veiling retinal illuminance. Sensitivity in the left eye was unaffected. A significant reduction in binocular contrast sensitivity occurred at lower luminance levels due to a loss of binocular summation, although binocular sensitivity was not found to fall below the sensitivity of the better monocular level (binocular inhibition). Head-mounted displays such as Google Glass have the potential to cause significant disability glare in the eye exposed to the visual display, particularly under conditions of low luminance. They can also cause a more modest binocular reduction in sensitivity by eliminating the benefits of binocular summation. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
Effects of ensemble and summary displays on interpretations of geospatial uncertainty data.
Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H
2017-01-01
Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.
NASA Astrophysics Data System (ADS)
Doyon-Poulin, Philippe
The flight deck of a 21st-century commercial aircraft does not look like the one the Wright brothers used for their first flight. The rapid growth of civil aviation resulted in an increase in the number and complexity of the flight deck instruments needed to complete a safe and on-time flight. However, presenting an abundance of visual information on visually cluttered flight instruments might reduce the pilot's flight performance. Visual clutter has received increased interest from the aerospace community seeking to understand the effects of visual density and information overload on pilots' performance, and aerospace regulations require that the visual clutter of flight deck displays be minimized. Past studies found mixed effects of the visual clutter of the primary flight display on pilots' technical flight performance, and more research is needed to better understand this subject. In this thesis, we conducted an experimental study in a flight simulator to test the effects of the visual clutter of the primary flight display on the pilot's technical flight performance, mental workload, and gaze pattern. First, we identified a gap in existing definitions of visual clutter and proposed a new definition, relevant to the aerospace community, that takes into account the context of use of the display. Then, we showed that past research on the effects of the visual clutter of the primary flight display on pilots' performance did not manipulate the visual clutter variable in a consistent manner: visual clutter was changed at the same time as the flight guidance function, and using a different flight guidance function between displays might have masked the effect of visual clutter on pilots' performance. To solve this issue, we proposed three requirements that all tested displays must satisfy to ensure that only the visual clutter variable is changed during the study while other variables are left unaffected.
Then, respecting these requirements, we designed three primary flight displays with different visual clutter levels (low, medium, high) but the same flight guidance function. Twelve pilots, with a mean experience of over 4000 total flight hours, completed an instrument landing in a flight simulator using all three displays, for a total of nine repetitions. Our results showed that pilots reported a lower workload level and had better lateral precision during the approach when using the medium-clutter display compared to the low- and high-clutter displays. Pilots also reported that the medium-clutter display was the most useful for the flight task compared to the two other displays. Eye-tracking results showed that pilots' gaze pattern was less efficient with the high-clutter display compared to the low- and medium-clutter displays. Overall, these new experimental results emphasize the importance of optimizing the visual clutter of flight displays, as it affects both the objective and subjective performance of experienced pilots in their flying task. The thesis ends with practical recommendations to help designers optimize the visual clutter of displays used in man-machine interfaces.
The application of autostereoscopic display in smart home system based on mobile devices
NASA Astrophysics Data System (ADS)
Zhang, Yongjun; Ling, Zhi
2015-03-01
Smart home systems, which control home devices, are becoming more and more popular in our daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible from smartphones or tablets. Meanwhile, 3D stereoscopic display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts an autostereoscopic display as the control interface is proposed to improve the user-friendliness of the experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesizing method based on parallel processing with the Graphics Processing Unit (GPU), implemented with the OpenGL ES Application Programming Interface (API) library on the iOS platform for real-time autostereoscopic display. Compared to a traditional smart home system, the proposed system, by applying an autostereoscopic display to the control interface, enhances the realism, user-friendliness, and visual comfort of the interface.
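The paper's GPU/OpenGL ES synthesis is not detailed in the abstract; as a rough illustration of the underlying idea only, lenticular autostereoscopic displays commonly interleave the columns of left- and right-eye views, which can be sketched on the CPU as follows (a two-view case; real systems often use more views and a slanted lenticular mapping):

```python
# Illustrative two-view column interleaving for an autostereoscopic
# panel: even output columns come from the left view, odd columns
# from the right view. Images are lists of rows of pixel values.

def interleave_views(left, right):
    """Interleave two equal-sized views column by column."""
    assert len(left) == len(right)
    out = []
    for row_l, row_r in zip(left, right):
        assert len(row_l) == len(row_r)
        out.append([row_l[x] if x % 2 == 0 else row_r[x]
                    for x in range(len(row_l))])
    return out

left = [['L'] * 4 for _ in range(2)]
right = [['R'] * 4 for _ in range(2)]
print(interleave_views(left, right))
# → [['L', 'R', 'L', 'R'], ['L', 'R', 'L', 'R']]
```

On a GPU this per-pixel selection is naturally expressed in a fragment shader, which is presumably why the authors chose OpenGL ES for real-time performance on the iPad.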
Texture-Based Correspondence Display
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael
2004-01-01
Texture-based correspondence display is a methodology for displaying corresponding data elements in visual representations of complex multidimensional, multivariate data. Texture is utilized as a persistent medium to contain a visual representation model and as a means to create multiple renditions of data in which color identifies correspondence. Corresponding data elements are displayed over a variety of visual metaphors in a normal rendering process, without the creation and maintenance of extraneous linking metadata. The effectiveness of visual representation for understanding data is extended by expressing the visual representation model in texture.
Evaluation of an organic light-emitting diode display for precise visual stimulation.
Ito, Hiroyuki; Ogawa, Masaki; Sunaga, Shoji
2013-06-11
A new type of visual display for presentation of a visual stimulus with high quality was assessed. The characteristics of an organic light-emitting diode (OLED) display (Sony PVM-2541, 24.5 in.; Sony Corporation, Tokyo, Japan) were measured in detail from the viewpoint of its applicability to visual psychophysics. We found the new display to be superior to other display types in terms of spatial uniformity, color gamut, and contrast ratio. Changes in the intensity of luminance were sharper on the OLED display than those on a liquid crystal display. Therefore, such OLED displays could replace conventional cathode ray tube displays in vision research for high quality stimulus presentation. Benefits of using OLED displays in vision research were especially apparent in the fields of low-level vision, where precise control and description of the stimulus are needed, e.g., in mesopic or scotopic vision, color vision, and motion perception.
Evaluation of stereoscopic display with visual function and interview
NASA Astrophysics Data System (ADS)
Okuyama, Fumio
1999-05-01
The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function tests and interviews. A 40-inch double-lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, viewing a combination of 2D and 3D material. The participants were twelve young adults. The main visual functions measured were visual acuity, refraction, phoria, near point of vision, accommodation, etc. The interview consisted of 17 questions. Testing was performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function were characterized by prolongation of the near point of vision, decrease in accommodation, and increase in phoria. The 3D-viewing interview results showed much more visual fatigue than the 2D results. The conclusions are: (1) changes in visual function are larger and visual fatigue is more intense when viewing 3D images; (2) the evaluation method combining visual function tests and interviews proved very satisfactory for analyzing the influence of a stereoscopic display on the human eye.
Mobile visual communications and displays
NASA Astrophysics Data System (ADS)
Valliath, George T.
2004-09-01
The different types of mobile visual communication modes and the types of displays needed in cellular handsets are explored. Well-known two-way video conferencing is only one of the possible modes. Some modes are already supported on current handsets, while others await the arrival of advanced network capabilities. Displays for devices that support these visual communication modes need to deliver the required visual experience. Over the last 20 years the display has grown in size while the rest of the handset has shrunk. However, the display is still not large enough: processor performance and network capabilities continue to outstrip the display's capability, making the display a bottleneck. This paper explores potential solutions for presenting a large image on a small handset.
Planning in sentence production: Evidence for the phrase as a default planning scope
Martin, Randi C.; Crowther, Jason E.; Knight, Meredith; Tamborello, Franklin P.; Yang, Chin-Lung
2010-01-01
Controversy remains as to the scope of advanced planning in language production. Smith and Wheeldon (1999) found significantly longer onset latencies when subjects described moving picture displays by producing sentences beginning with a complex noun phrase than for matched sentences beginning with a simple noun phrase. While these findings are consistent with a phrasal scope of planning, they might also be explained on the basis of: 1) greater retrieval fluency for the second content word in the simple initial noun phrase sentences and 2) visual grouping factors. In Experiments 1 and 2, retrieval fluency for the second content word was equated for the complex and simple initial noun phrase conditions. Experiments 3 and 4 addressed the visual grouping hypothesis by using stationary displays and by comparing onset latencies for the same display for sentence and list productions. Longer onset latencies for the sentences beginning with a complex noun phrase were obtained in all experiments, supporting the phrasal scope of planning hypothesis. The results indicate that in speech, as in other motor production domains, planning occurs beyond the minimal production unit. PMID:20501338
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive-format display and broadcast technologies, video deinterlacing has become an important video-processing technique, and numerous approaches exist in the literature to accomplish it. While most earlier methods were simple linear-filtering-based approaches, the emergence of faster computing technologies, and even of dedicated video-processing hardware in display units, has allowed higher-quality but more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between, and weight, control-grid-interpolation-based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state of the art in terms of peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on visual performance.
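The control-grid-interpolation deinterlacers themselves cannot be reconstructed from the abstract alone; for orientation only, here is a sketch of the classic intra-field spatial baseline that such methods improve upon, in which the missing lines of a field are filled by averaging the lines above and below:

```python
# Intra-field ("bob"-style) deinterlacing sketch. A field holds every
# other line of a frame; the missing lines are linearly interpolated
# from their vertical neighbours. Edge rows fall back to the nearest
# available line.

def bob_deinterlace(field, parity=0):
    """field: list of rows (one interlaced field).
    parity 0 -> field holds the even output rows. Returns a full frame."""
    h = len(field) * 2
    frame = [None] * h
    for i, row in enumerate(field):
        frame[2 * i + parity] = list(row)
    for y in range(h):
        if frame[y] is None:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y < h - 1 else frame[y - 1]
            frame[y] = [(a + b) // 2 for a, b in zip(above, below)]
    return frame

field = [[0, 0], [10, 10]]     # even rows of a 4-row frame
print(bob_deinterlace(field))  # → [[0, 0], [5, 5], [10, 10], [10, 10]]
```

Temporal deinterlacers instead copy the missing lines from adjacent fields in time; the paper's contribution is in deciding, per region, how to weight the two strategies.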
Headphone and Head-Mounted Visual Displays for Virtual Environments
NASA Technical Reports Server (NTRS)
Begault, Duran R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.
Terminal weather information management
NASA Technical Reports Server (NTRS)
Lee, Alfred T.
1990-01-01
Since the mid-1960's, microburst/windshear events have caused at least 30 aircraft accidents and incidents and have killed more than 600 people in the United States alone. This study evaluated alternative means of alerting an airline crew to the presence of microburst/windshear events in the terminal area. Of particular interest was the relative effectiveness of conventional and data link ground-to-air transmissions of ground-based radar and low-level windshear sensing information on microburst/windshear avoidance. The Advanced Concepts Flight Simulator located at Ames Research Center was employed in a line-oriented simulation of a scheduled round-trip airline flight from Salt Lake City to Denver Stapleton Airport. Actual weather en route and in the terminal area was simulated using recorded data. The microburst/windshear incident of July 11, 1988 was re-created for the Denver area operations. Six experienced airline crews currently flying scheduled routes were employed as test subjects for each of three groups: (1) a baseline group which received alerts via conventional air traffic control (ATC) tower transmissions; (2) an experimental group which received alerts/events displayed visually and aurally in the cockpit six miles (approx. 2 min.) from the microburst event; and (3) an additional experimental group which received displayed alerts/events 23 linear miles (approx. 7 min.) from the microburst event. Analyses of crew communications and decision times showed a marked improvement in both situation awareness and decision-making with visually displayed ground-based radar information. Substantial reductions in the variability of decision times among crews in the visual display groups were also found. These findings suggest that crew performance is enhanced, and individual differences among crews due to differences in training and prior experience are significantly reduced, by providing a real-time, graphic display of terminal weather hazards.
Frequency encoded auditory display of the critical tracking task
NASA Technical Reports Server (NTRS)
Stevenson, J.
1984-01-01
The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. Asymptotic performance on the critical tracking task was slightly, but significantly, better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of that using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increases with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.
2017-04-01
ADVANCED VISUALIZATION AND INTERACTIVE DISPLAY RAPID INNOVATION AND DISCOVERY EVALUATION RESEARCH (VISRIDER) PROGRAM TASK 6: POINT CLOUD...
Reporting period: October 2013 - September 2014. The report surveys various point cloud visualization techniques for viewing large-scale LiDAR datasets and evaluates their potential use for thick-client desktop platforms.
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2014 CFR
2014-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2013 CFR
2013-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2011 CFR
2011-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2010 CFR
2010-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
7 CFR 8.8 - Use by public informational services.
Code of Federal Regulations, 2012 CFR
2012-01-01
... services. (a) In any advertisement, display, exhibit, visual and audio-visual material, news release..., news releases, publications in any form, visuals and audio-visuals, or displays in any form must not... agency, organization or individual, for production of films, visual and audio-visual materials, books...
Wilkinson, Krista M; Light, Janice; Drager, Kathryn
2012-09-01
Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, the effectiveness of intervention likely depends in part on the effectiveness and efficiency with which the information presented on the display can be perceived, identified, and extracted by communicators and their partners. An understanding of visual-cognitive processing - that is, how a user attends to, perceives, and makes sense of the visual information on the display - therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, the Visual Scene Display (VSD), as they may relate to users' visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations.
Presentation of Information on Visual Displays.
ERIC Educational Resources Information Center
Pettersson, Rune
This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…
Visual Displays and Contextual Presentations in Computer-Based Instruction.
ERIC Educational Resources Information Center
Park, Ok-choon
1998-01-01
Investigates the effects of two instructional strategies, visual display (animation, and static graphics with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…
NASA Technical Reports Server (NTRS)
Kim, Won S.; Tendick, Frank; Stark, Lawrence
1989-01-01
A teleoperation simulator was constructed with vector display system, joysticks, and a simulated cylindrical manipulator, in order to quantitatively evaluate various display conditions. The first of two experiments conducted investigated the effects of perspective parameter variations on human operators' pick-and-place performance, using a monoscopic perspective display. The second experiment involved visual enhancements of the monoscopic perspective display, by adding a grid and reference lines, by comparison with visual enhancements of a stereoscopic display; results indicate that stereoscopy generally permits superior pick-and-place performance, but that monoscopy nevertheless allows equivalent performance when defined with appropriate perspective parameter values and adequate visual enhancements.
Pictorial communication in virtual and real environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor)
1991-01-01
Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)
NASA Technical Reports Server (NTRS)
Head, James W.; Huffman, J. N.; Forsberg, A. S.; Hurwitz, D. M.; Basilevsky, A. T.; Ivanov, M. A.; Dickson, J. L.; Kumar, P. Senthil
2008-01-01
We are currently investigating new technological developments in computer visualization and analysis in order to assess their importance and utility in planetary geological analysis and mapping [1,2]. Last year we reported on the range of technologies available and on our application of these to various problems in planetary mapping [3]. In this contribution we focus on the application of these techniques and tools to Venus geological mapping at the 1:5M quadrangle scale. In our current Venus mapping projects we have utilized and tested the various platforms to understand their capabilities and assess their usefulness in defining units, establishing stratigraphic relationships, mapping structures, reaching consensus on interpretations and producing map products. We are specifically assessing how computer visualization display qualities (e.g., level of immersion, stereoscopic vs. monoscopic viewing, field of view, large vs. small display size, etc.) influence performance on scientific analysis and geological mapping. We have been exploring four different environments: 1) conventional desktops (DT), 2) semi-immersive Fishtank VR (FT) (i.e., a conventional desktop with head-tracked stereo and 6DOF input), 3) tiled wall displays (TW), and 4) fully immersive virtual reality (IVR) (e.g., "Cave Automatic Virtual Environment," or Cave system). Formal studies demonstrate that fully immersive Cave environments are superior to desktop systems for many tasks [e.g., 4].
Nanda, U; Eisen, S; Zadeh, R S; Owen, D
2011-06-01
There is a growing body of evidence on the impact of the environment on health and well-being. This study focuses on the impact of visual artworks on the well-being of psychiatric patients in a multi-purpose lounge of an acute care psychiatric unit. Well-being was measured by the rate of pro re nata (PRN) medication issued by nurses in response to visible signs of patient anxiety and agitation. Nurses were interviewed to get qualitative feedback on the patient response. Findings revealed that the ratio of PRN/patient census was significantly lower on the days when a realistic nature photograph was displayed, compared to the control condition (no art) and abstract art. Nurses reported that some patients displayed agitated behaviour in response to the abstract image. This study makes a case for the impact of visual art on mental well-being. The research findings were also translated into the time and money invested in PRN incidents, and annual cost savings of almost US$30,000 were projected. This research makes a case that simple environmental interventions like visual art can save the hospital costs of medication, and staff and pharmacy time, by providing a visual distraction that can alleviate anxiety and agitation in patients. © 2010 Blackwell Publishing.
Evaluation of Visualization Tools for Computer Network Defense Analysts: Display Design, Methods, and Results for a User Study
Garneau, Christopher J; Erbacher, Robert F
2016-11-01
US Army Research Laboratory report, approved for public release, covering work performed from January 2013 to September 2015.
Long-term effects on symptoms by reducing electric fields from visual display units.
Oftedal, G; Nyvang, A; Moen, B E
1999-10-01
The purpose of the study was to see whether the results of an earlier study [i.e., that skin symptoms were reduced by reducing electric fields from visual display units (VDU)] could be reproduced. In addition, an attempt was made to determine whether eye symptoms and symptoms from the nervous system could be reduced by reducing VDU electric fields. The study was designed as a controlled double-blind intervention. The electric fields were reduced by using electric-conducting screen filters. Forty-two persons completed the study while working at their ordinary job, first 1 week with no filter, then 3 months with an inactive filter and then 3 months with an active filter (or in reverse order). The inactive filters were identical to the active ones, except that their ground cables were replaced by empty plastic insulation. The inactive filters did not reduce the fields from the VDU. The fields were significantly lower with active filters than with inactive filters. Most of the symptoms were statistically significantly less pronounced in the periods with the filters when compared with the period with no filter. This finding can be explained by visual effects and psychological effects. No statistically significant difference in symptom severity was observed between the period with an inactive filter and the one with an active filter. The study does not support the hypothesis that skin, eye, or nervous system symptoms can be reduced by reducing VDU electric fields.
Measurement of luminance and color uniformity of displays using the large-format scanner
NASA Astrophysics Data System (ADS)
Mazikowski, Adam
2017-08-01
Uniformity of display luminance and color is important for comfort and good perception of the information presented on the display. Although display technology has developed and improved a lot over the past years, different types of displays still present a challenge in selected applications, e.g. in medical use or in case of multi-screen installations. A simplified 9-point method of determining uniformity does not always produce satisfactory results, so a different solution is proposed in the paper. The developed system consists of the large-format X-Y-Z ISEL scanner (isel Germany AG), a Konica Minolta high-sensitivity spot photometer-colorimeter (e.g. CS-200, Konica Minolta, Inc.) and a PC. Dedicated software in the LabView environment for controlling the scanner, transferring the measured data to the computer, and visualizing the measurement results was also prepared. Measurements of a plasma display and an LCD-LED display were performed with the developed setup. A heavily worn-out plasma TV unit with several visible artifacts was selected. These tests show the advantages and drawbacks of the described scanning method in comparison with the simplified 9-point uniformity method.
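The tradeoff between the two methods can be sketched numerically. Below is a minimal Python illustration (not from the paper; the grid sizes, luminance values, and the particular non-uniformity metric are our own assumptions) comparing a dense scanner grid against simplified 9-point sampling:

```python
import numpy as np

def nonuniformity(luminance):
    """Relative luminance non-uniformity: (Lmax - Lmin) / Lmax."""
    l = np.asarray(luminance, dtype=float)
    return (l.max() - l.min()) / l.max()

# Dense scan: e.g. a 40 x 60 grid of spot-photometer readings (cd/m^2).
rng = np.random.default_rng(0)
dense = 200 + 10 * rng.standard_normal((40, 60))

# Simplified 9-point method: sample only a 3 x 3 grid of positions.
rows = [5, 20, 34]
cols = [10, 30, 50]
nine_point = dense[np.ix_(rows, cols)]

print(nonuniformity(dense), nonuniformity(nine_point))
```

Because the 9 points are a subset of the dense grid, the 9-point estimate can never exceed the dense-scan estimate; local artifacts falling between the sample points are simply missed, which is the drawback the paper's scanning method addresses.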
Secure information display with limited viewing zone by use of multi-color visual cryptography.
Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo
2004-04-05
We propose a display technique that ensures security of visual information by use of visual cryptography. A displayed image appears as a completely random pattern unless viewed through a decoding mask. The display has a limited viewing zone with the decoding mask. We have developed a multi-color encryption code set. Eight colors are represented in combinations of a displayed image composed of red, green, blue, and black subpixels and a decoding mask composed of transparent and opaque subpixels. Furthermore, we have demonstrated secure information display by use of an LCD panel.
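The underlying share-construction idea can be illustrated with a much simpler binary (2,2) scheme in Python. This is a sketch of the general visual-cryptography principle only, not the authors' multi-color subpixel code set; the function names and pixel layout are our own:

```python
import numpy as np

def make_shares(secret, rng=None):
    """(2,2) binary visual cryptography: each secret pixel becomes a
    2-subpixel pattern on the display share and the mask share.
    Each share alone is a uniformly random pattern; stacking them
    (OR of opaque subpixels) reveals the secret."""
    rng = rng or np.random.default_rng()
    secret = np.asarray(secret, dtype=bool)
    h, w = secret.shape
    display = np.zeros((h, 2 * w), dtype=bool)
    mask = np.zeros((h, 2 * w), dtype=bool)
    for i in range(h):
        for j in range(w):
            a = rng.integers(2)            # random subpixel choice
            pat = [bool(a), not a]
            display[i, 2 * j:2 * j + 2] = pat
            # white pixel: mask matches the display pattern;
            # black pixel: mask is the complement, so OR is all-opaque
            mask[i, 2 * j:2 * j + 2] = pat if not secret[i, j] else [not a, bool(a)]
    return display, mask

def stack(display, mask):
    """Viewing through the decoding mask ~ pixelwise OR of opaque subpixels."""
    return display | mask
```

A black secret pixel yields two opaque subpixels when stacked, a white one only a single opaque subpixel; without the mask, each share is indistinguishable from random noise, which is the "limited viewing zone" security property the paper exploits.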
Measuring Visual Displays’ Effect on Novice Performance in Door Gunnery
2014-12-01
The purpose of this paper is to present the results of our recent experimentation involving a novice population performing aerial door gunnery training in a mixed reality simulation. Specifically, we examined the effect that different visual displays had on novice soldier performance. There was a main effect of visual display on performance; however, both visual treatment groups experienced the same degree of presence and simulator sickness.
Wilkinson, Krista M.; Light, Janice; Drager, Kathryn
2013-01-01
Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, it seems likely that the effectiveness of intervention depends in part on the effectiveness and efficiency with which the information presented in the display can be perceived, identified, and extracted by communicators and their partners. Understanding of visual-cognitive processing – that is, how a user attends, perceives, and makes sense of the visual information on the display – therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, that is, Visual Scene Displays (VSDs) as they may relate to user visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations. PMID:22946989
Conceptual design study for a teleoperator visual system, phase 2
NASA Technical Reports Server (NTRS)
Grant, C.; Meirick, R.; Polhemus, C.; Spencer, R.; Swain, D.; Twell, R.
1973-01-01
An analysis of the concept for the hybrid stereo-monoscopic television visual system is reported. The visual concept is described along with the following subsystems: illumination, deployment/articulation, telecommunications, visual displays, and the controls and display station.
Magnifying visual target information and the role of eye movements in motor sequence learning.
Massing, Matthias; Blandin, Yannick; Panzer, Stefan
2016-01-01
An experiment investigated the influence of eye movements on learning a simple motor sequence task when the visual display was magnified. The task was to reproduce a 1300 ms spatial-temporal pattern of elbow flexions and extensions. The spatial-temporal pattern was displayed in front of the participants. Participants were randomly assigned to four groups differing on eye movements (free to use their eyes/instructed to fixate) and the visual display (small/magnified). All participants had to perform a pre-test, an acquisition phase, a delayed retention test, and a transfer test. The results indicated that participants in each practice condition increased their performance during acquisition. The participants who were permitted to use their eyes in the magnified visual display outperformed those who were instructed to fixate on the magnified visual display. When a small visual display was used, the instruction to fixate induced no performance decrements compared to participants who were permitted to use their eyes during acquisition. The findings demonstrated that a spatial-temporal pattern can be learned without eye movements, but permitting eye movements facilitates response production when the visual angle is increased. Copyright © 2015 Elsevier B.V. All rights reserved.
Studying Visual Displays: How to Instructionally Support Learning
ERIC Educational Resources Information Center
Renkl, Alexander; Scheiter, Katharina
2017-01-01
Visual displays are very frequently used in learning materials. Although visual displays have great potential to foster learning, they also pose substantial demands on learners so that the actual learning outcomes are often disappointing. In this article, we pursue three main goals. First, we identify the main difficulties that learners have when…
Automated Analysis, Classification, and Display of Waveforms
NASA Technical Reports Server (NTRS)
Kwan, Chiman; Xu, Roger; Mayhew, David; Zhang, Frank; Zide, Alan; Bonggren, Jeff
2004-01-01
A computer program partly automates the analysis, classification, and display of waveforms represented by digital samples. In the original application for which the program was developed, the raw waveform data to be analyzed by the program are acquired from space-shuttle auxiliary power units (APUs) at a sampling rate of 100 Hz. The program could also be modified for application to other waveforms -- for example, electrocardiograms. The program begins by performing principal-component analysis (PCA) of 50 normal-mode APU waveforms. Each waveform is segmented. A covariance matrix is formed by use of the segmented waveforms. Three eigenvectors corresponding to three principal components are calculated. To generate features, each waveform is then projected onto the eigenvectors. These features are displayed on a three-dimensional diagram, facilitating the visualization of the trend of APU operations.
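The PCA pipeline described above (covariance of segmented waveforms, top-three eigenvectors, projection to 3-D features) can be sketched in Python with NumPy. The synthetic sine waveforms below stand in for the APU data, which we do not have:

```python
import numpy as np

def pca_features(waveforms, n_components=3):
    """Project each waveform onto the top principal components.

    waveforms: (n_samples, n_points) array of segmented, equal-length
    waveforms (e.g. 50 normal-mode APU records sampled at 100 Hz).
    Returns an (n_samples, n_components) feature array suitable for a
    three-dimensional scatter diagram.
    """
    X = np.asarray(waveforms, dtype=float)
    X = X - X.mean(axis=0)                  # center each time point
    cov = np.cov(X, rowvar=False)           # covariance across time points
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalue order
    top = eigvecs[:, ::-1][:, :n_components]  # top eigenvectors first
    return X @ top

# Hypothetical stand-in data: 50 noisy sine waveforms, 100 points each.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
waves = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal((50, 100))
features = pca_features(waves)
print(features.shape)  # (50, 3)
```

Each row of `features` is one waveform reduced to three coordinates, so drift in APU behavior over successive runs would appear as a trend in the 3-D diagram, as the abstract describes.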
Reconfigurable work station for a video display unit and keyboard
NASA Technical Reports Server (NTRS)
Shields, Nicholas L. (Inventor); Roe, Fred D., Jr. (Inventor); Fagg, Mary F. (Inventor); Henderson, David E. (Inventor)
1988-01-01
A reconfigurable workstation is described having video, keyboard, and hand operated motion controller capabilities. The workstation includes main side panels between which a primary work panel is pivotally carried in a manner in which the primary work panel may be adjusted and set in a negatively declined or positively inclined position for proper forearm support when operating hand controllers. A keyboard table supports a keyboard in such a manner that the keyboard is set in a positively inclined position with respect to the negatively declined work panel. Various adjustable devices are provided for adjusting the relative declinations and inclinations of the work panels, tables, and visual display panels.
Color coding of control room displays: the psychocartography of visual layering effects.
Van Laar, Darren; Deshe, Ofer
2007-06-01
Objective: To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). Background: It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Method: Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Results: Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. Conclusion: The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. Application: The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.
1979-01-01
The feasibility of using the critical tracking task to evaluate kinesthetic-tactual displays was examined. The test subjects were asked to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. The results indicate that the critical tracking task is both a feasible and a reliable methodology for assessing tactual tracking. Further, the approximately equal effects of quickening for the tactual and visual displays demonstrate that the critical tracking methodology is as sensitive and valid a measure for tactual tracking as it is for visual tracking.
2018-02-12
Results under the second focus showed differences in usability preference and in the frequency with which participants expected status updates. Research questions included: Is there a difference in assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? Is there a difference in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed?
Performance evaluation of a kinesthetic-tactual display
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.; Dunn, R. S.
1982-01-01
Simulator studies demonstrated the feasibility of using kinesthetic-tactual (KT) displays for providing collective and cyclic command information, and suggested that KT displays may increase pilot workload capability. A dual-axis laboratory tracking task suggested that beyond reduction in visual scanning, there may be additional sensory or cognitive benefits to the use of multiple sensory modalities. Single-axis laboratory tracking tasks revealed performance with a quickened KT display to be equivalent to performance with a quickened visual display for a low frequency sum-of-sinewaves input. In contrast, an unquickened KT display was inferior to an unquickened visual display. Full scale simulator studies and/or inflight testing are recommended to determine the generality of these results.
Effect of display size on visual attention.
Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao
2011-06-01
Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.
Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L
2018-06-21
Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.
Thurlow, W R
1980-01-01
Messages were presented which moved from right to left along an electronic alphabetic display which was varied in "window" size from 4 through 32 letter spaces. Deaf subjects signed the messages they perceived. Relatively few errors were made even at the highest rate of presentation, which corresponded to a typing rate of 60 words/min. It is concluded that many deaf persons can make effective use of a small visual display. A reduced cost is then possible for visual communication instruments for these people through reduced display size. Deaf subjects who can profit from a small display can be located by a sentence test administered by tape recorder which drives the display of the communication device by means of the standard code of the deaf teletype network.
The search for instantaneous vection: An oscillating visual prime reduces vection onset latency.
Palmisano, Stephen; Riecke, Bernhard E
2018-01-01
Typically it takes up to 10 seconds or more to induce a visual illusion of self-motion ("vection"). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8-9s in the no oscillating control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could possibly be used to enhance computer generated simulation experiences and training in the future, at no additional cost.
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
Planning in sentence production: evidence for the phrase as a default planning scope.
Martin, Randi C; Crowther, Jason E; Knight, Meredith; Tamborello, Franklin P; Yang, Chin-Lung
2010-08-01
Controversy remains as to the scope of advanced planning in language production. Smith and Wheeldon (1999) found significantly longer onset latencies when subjects described moving-picture displays by producing sentences beginning with a complex noun phrase than for matched sentences beginning with a simple noun phrase. While these findings are consistent with a phrasal scope of planning, they might also be explained on the basis of: (1) greater retrieval fluency for the second content word in the simple initial noun phrase sentences and (2) visual grouping factors. In Experiments 1 and 2, retrieval fluency for the second content word was equated for the complex and simple initial noun phrase conditions. Experiments 3 and 4 addressed the visual grouping hypothesis by using stationary displays and by comparing onset latencies for the same display for sentence and list productions. Longer onset latencies for the sentences beginning with a complex noun phrase were obtained in all experiments, supporting the phrasal scope of planning hypothesis. The results indicate that in speech, as in other motor production domains, planning occurs beyond the minimal production unit. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Wilkinson, Krista M; Dennis, Nancy A; Webb, Christina E; Therrien, Mari; Stradtman, Megan; Farmer, Jacquelyn; Leach, Raevynn; Warrenfeltz, Megan; Zeuner, Courtney
2015-01-01
Visual aided augmentative and alternative communication (AAC) consists of books or technologies that contain visual symbols to supplement spoken language. A common observation concerning some forms of aided AAC is that message preparation can be frustratingly slow. We explored the uses of fMRI to examine the neural correlates of visual search for line drawings on AAC displays in 18 college students under two experimental conditions. Under one condition, the location of the icons remained stable and participants were able to learn the spatial layout of the display. Under the other condition, constant shuffling of the locations of the icons prevented participants from learning the layout, impeding rapid search. Brain activation was contrasted under these conditions. Rapid search in the stable display was associated with greater activation of cortical and subcortical regions associated with memory, motor learning, and dorsal visual pathways compared to the search in the unpredictable display. Rapid search for line drawings on stable AAC displays involves not just the conceptual knowledge of the symbol meaning but also the integration of motor, memory, and visual-spatial knowledge about the display layout. Further research must study individuals who use AAC, as well as the functional effect of interventions that promote knowledge about array layout.
Visualization and recommendation of large image collections toward effective sensemaking
NASA Astrophysics Data System (ADS)
Gu, Yi; Wang, Chaoli; Nemiroff, Robert; Kao, David; Parra, Denis
2016-03-01
In our daily lives, images are among the most commonly found data which we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.
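The similarity-edge construction at the heart of such a compound graph can be sketched as follows. This is a hypothetical cosine-similarity thresholding in Python; the paper's actual image/text similarity measures and edge-weighting scheme are not specified here:

```python
import numpy as np

def build_edges(features, threshold=0.8):
    """Connect items whose cosine similarity exceeds a threshold.

    features: (n_items, d) array of image or text descriptors.
    Returns a list of (i, j, similarity) edges for the graph.
    """
    F = np.asarray(features, dtype=float)
    unit = F / np.linalg.norm(F, axis=1, keepdims=True)  # row-normalize
    sim = unit @ unit.T                                  # cosine similarity
    return [(i, j, sim[i, j])
            for i in range(len(F)) for j in range(i + 1, len(F))
            if sim[i, j] > threshold]
```

At the scale reported in the paper (tens of thousands of nodes, hundreds of millions of edges), the all-pairs similarity matrix would be computed in parallel on GPUs rather than with this dense single-threaded sketch.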
Banta, Edward R.; Provost, Alden M.
2008-01-01
This report documents HUFPrint, a computer program that extracts and displays information about model structure and hydraulic properties from the input data for a model built using the Hydrogeologic-Unit Flow (HUF) Package of the U.S. Geological Survey's MODFLOW program for modeling ground-water flow. HUFPrint reads the HUF Package and other MODFLOW input files, processes the data by hydrogeologic unit and by model layer, and generates text and graphics files useful for visualizing the data or for further processing. For hydrogeologic units, HUFPrint outputs such hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, vertical hydraulic conductivity or anisotropy, specific storage, specific yield, and hydraulic-conductivity depth-dependence coefficient. For model layers, HUFPrint outputs such effective hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, specific storage, primary direction of anisotropy, and vertical conductance. Text files tabulating hydraulic properties by hydrogeologic unit, by model layer, or in a specified vertical section may be generated. Graphics showing two-dimensional cross sections and one-dimensional vertical sections at specified locations also may be generated. HUFPrint reads input files designed for MODFLOW-2000 or MODFLOW-2005.
2007-12-01
Poka-yoke and related lean tools are discussed, including standard operating procedures, visual displays for workflow and communication, total productive maintenance, and poka-yoke techniques to prevent errors. Standardizing each process step or eliminating non-value-added steps, and reducing the seven common wastes, will decrease the total time of a process.
DOT National Transportation Integrated Search
2004-03-20
A means of quantifying the cluttering effects of symbols is needed to evaluate the impact of displaying an increasing volume of information on aviation displays such as head-up displays. Human visual perception has been successfully modeled by algori...
How Visual Displays Affect Cognitive Processing
ERIC Educational Resources Information Center
McCrudden, Matthew T.; Rapp, David N.
2017-01-01
We regularly consult and construct visual displays that are intended to communicate important information. The power of these displays and the instructional messages we attempt to comprehend when using them emerge from the information included in the display and by their spatial arrangement. In this article, we identify common types of visual…
Evaluation of kinesthetic-tactual displays using a critical tracking task
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.; Ault, R. T.
1977-01-01
The study sought to investigate the feasibility of applying the critical tracking task paradigm to the evaluation of kinesthetic-tactual displays. Four subjects attempted to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. Display aiding was introduced in both modalities in the form of velocity quickening. Visual tracking performance was better than tactual tracking, and velocity aiding improved the critical tracking scores for visual and tactual tracking about equally. The results suggest that the critical task methodology holds considerable promise for evaluating kinesthetic-tactual displays.
Effective color design for displays
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay W.
2002-06-01
Visual communication is a key aspect of human-computer interaction, which contributes to the satisfaction of user and application needs. For effective design of presentations on computer displays, color should be used in conjunction with the other visual variables. The general needs of graphic user interfaces are discussed, followed by five specific tasks with differing criteria for display color specification - advertising, text, information, visualization and imaging.
Use of nontraditional flight displays for the reduction of central visual overload in the cockpit
NASA Technical Reports Server (NTRS)
Weinstein, Lisa F.; Wickens, Christopher D.
1992-01-01
The use of nontraditional flight displays to reduce visual overload in the cockpit was investigated in a dual-task paradigm. Three flight displays (central, peripheral, and ecological) were used between subjects for the primary tasks, and the type of secondary task (object identification or motion judgment) and the presentation of the location of the task in the visual field (central or peripheral) were manipulated with groups. The two visual-spatial tasks were time-shared to study the possibility of a compatibility mapping between task type and task location. The ecological display was found to allow for the most efficient time-sharing.
A Visual Editor in Java for View
NASA Technical Reports Server (NTRS)
Stansifer, Ryan
2000-01-01
In this project we continued the development of a visual editor in the Java programming language to create screens on which to display real-time data. The data comes from the numerous systems monitoring the operation of the space shuttle while on the ground and in space, and from the many tests of subsystems. The data can be displayed on any computer platform running a Java-enabled World Wide Web (WWW) browser and connected to the Internet. Previously, a special-purpose program had been written to display data on emulations of the character-based display screens used for many years at NASA. The goal now is to display bit-mapped screens created by a visual editor. We report here on the visual editor that creates the display screens. This project continues our earlier work, in which we had followed the design of the 'beanbox,' a prototype visual editor created by Sun Microsystems. We abandoned this approach and implemented a prototype using a more direct approach. In addition, our prototype is based on the newly released Java 2 graphical user interface (GUI) libraries. The result has a visually more appealing appearance and is a more robust application.
Liu, Yung-Ching; Jhuang, Jing-Wun
2012-07-01
A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays on drivers' emergent response and decision performance. The displays comprised a visual display, auditory displays with and without spatial compatibility, and hybrid (visual plus auditory) displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform tasks involving driving, stimulus-response (S-R), divided attention, and stress rating. Results show that for single-modality displays, drivers benefited more from the visual display of warning information than from the auditory display with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided attention task and in making accurate S-R task decisions. Drivers performed best with the hybrid display with spatial compatibility: hybrid displays enabled drivers to respond fastest and most accurately in both the S-R and divided attention tasks.
Comparative Study of the MTFA, ICS, and SQRI Image Quality Metrics for Visual Display Systems
1991-09-01
reasonable image quality predictions across select display and viewing condition parameters. Reference: American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI).
Multimodal Floral Signals and Moth Foraging Decisions
Riffell, Jeffrey A.; Alarcón, Ruben
2013-01-01
Background: Combinations of floral traits, which operate as attractive signals to pollinators, act on multiple sensory modalities. For Manduca sexta hawkmoths, how learning modifies foraging decisions in response to those traits remains untested, and the contribution of visual and olfactory floral displays to behavior remains unclear. Methodology/Principal Findings: Using M. sexta and the floral traits of two important nectar resources in the southwestern USA, Datura wrightii and Agave palmeri, we examined the relative importance of olfactory and visual signals. Natural visual and olfactory cues from D. wrightii and A. palmeri flowers permit testing the cues at their native intensities and composition, in contrast to many studies that have used artificial stimuli (essential oils, single odorants) that are less ecologically relevant. Results from a series of two-choice assays in which the olfactory and visual floral displays were manipulated showed that naïve hawkmoths preferred flowers displaying both olfactory and visual cues. Furthermore, experiments using A. palmeri flowers, a species that is not very attractive to hawkmoths, showed that the visual and olfactory displays did not have synergistic effects. The combination of the olfactory and visual display of D. wrightii, however, a flower that is highly attractive to naïve hawkmoths, did influence the time moths spent feeding from the flowers. The importance of the olfactory and visual signals was further demonstrated in learning experiments in which experienced moths, when exposed to uncoupled floral displays, ultimately chose flowers based on the previously experienced olfactory, and not visual, signals. These moths, however, had significantly longer decision times than moths exposed to coupled floral displays.
Conclusions/Significance These results highlight the importance of specific sensory modalities for foraging hawkmoths while also suggesting that they learn the floral displays as combinatorial signals and use the integrated floral traits from their memory traces to mediate future foraging decisions. PMID:23991154
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
Influence of high ambient illuminance and display luminance on readability and subjective preference
NASA Astrophysics Data System (ADS)
De Moor, Katrien; Andrén, Börje; Guo, Yi; Brunnström, Kjell; Wang, Kun; Drott, Anton; Hermann, David S.
2015-03-01
Many devices, such as tablets, smartphones, notebooks, and fixed and portable navigation systems, are used on a (nearly) daily basis, both indoors and outdoors. It is often argued that contextual factors such as the ambient illuminance, in relation to characteristics of the display (e.g., surface treatment, screen reflectance, display luminance), may strongly influence the use of such devices and the corresponding user experiences. However, the current understanding of these influence factors is still rather limited. In this work, we therefore focus in particular on the impact of lighting and display luminance on readability, visual performance, subjective experience, and preference. A controlled lab study (N=18) with a within-subjects design was performed to evaluate two car displays (one glossy and one matte) in conditions that simulate bright outdoor lighting. Four ambient luminance levels and three display luminance settings were combined into 7 experimental conditions. More concretely, we investigated for each display: (1) whether and how readability and visual performance varied with the different combinations of ambient luminance and display luminance, and (2) whether and how these combinations influenced the subjective experience (through self-reported valence, annoyance, and visual fatigue) and preference. The results indicate a limited yet negative influence of increased ambient luminance and reduced contrast on visual performance and readability for both displays. Similarly, we found that self-reported valence decreases, and annoyance and visual fatigue increase, as the contrast ratio decreases and ambient luminance increases. Overall, the impact is clearer for the matte display than for the glossy display.
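The combined effect of display luminance, ambient illuminance, and screen reflectance on contrast, central to the study above, can be illustrated with the standard effective-contrast calculation for a diffusely reflecting screen (a sketch only; the luminance and reflectance values below are illustrative assumptions, not the study's data):

```python
import math

# Effective contrast ratio of a display under ambient illumination.
# For a diffuse (matte) screen, reflected luminance is L_refl = E * R / pi,
# where E is ambient illuminance (lux) and R is diffuse reflectance.
def effective_contrast(l_white, l_black, illuminance, reflectance):
    """Contrast ratio with ambient reflections added to both states."""
    l_refl = illuminance * reflectance / math.pi
    return (l_white + l_refl) / (l_black + l_refl)

# Illustrative values: 500 cd/m^2 white, 0.5 cd/m^2 black, 5% reflectance.
dim = effective_contrast(500, 0.5, 100, 0.05)       # office-level lighting
bright = effective_contrast(500, 0.5, 40000, 0.05)  # bright daylight
print(dim, bright)  # contrast collapses as ambient illuminance rises
```

The calculation shows why readability degrades outdoors: the same reflected luminance is added to both the white and black states, so the ratio between them shrinks as illuminance grows.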
NASA Technical Reports Server (NTRS)
Uhlemann, H.; Geiser, G.
1975-01-01
Multivariable manual compensatory tracking experiments were carried out to determine typical strategies of the human operator, and the conditions under which performance improves when one of the visual displays of the tracking errors is supplemented by an auditory feedback. The tracking error of the system displayed only visually was found to decrease, but not in general that of the auditorily supported system; it was therefore concluded that the auditory feedback unloads the operator's visual system, allowing concentration on the remaining exclusively visual displays.
Effects of translational and rotational motions and display polarity on visual performance.
Feng, Wen-Yang; Tseng, Feng-Yi; Chao, Chin-Jung; Lin, Chiuhsiang Joe
2008-10-01
This study investigated effects of both translational and rotational motion and display polarity on a visual identification task. Three different motion types--heave, roll, and pitch--were compared with the static (no motion) condition. The visual task was presented on two display polarities, black-on-white and white-on-black. The experiment was a 4 (motion conditions) x 2 (display polarities) within-subjects design with eight subjects (six men and two women; M age = 25.6 yr., SD = 3.2). The dependent variables used to assess the performance on the visual task were accuracy and reaction time. Motion environments, especially the roll condition, had statistically significant effects on the decrement of accuracy and reaction time. The display polarity was significant only in the static condition.
Handa, T; Ishikawa, H; Shimizu, K; Kawamura, R; Nakayama, H; Sawada, K
2009-11-01
Virtual reality has recently been highlighted as a promising medium for visual presentation and entertainment. A novel apparatus for testing binocular visual function using a hemispherical visual display system, 'CyberDome', has been developed and tested. Subjects comprised 40 volunteers (mean age, 21.63 years) with corrected visual acuity of -0.08 (LogMAR) or better, and stereoacuity better than 100 s of arc on the Titmus stereo test. Subjects were able to experience the perception of being surrounded by visual images, a feature of the 'CyberDome' hemispherical visual display system. Visual images for the right and left eyes were projected and superimposed on the dome screen, allowing test images to be seen independently by each eye using polarizing glasses. The hemispherical visual display was 1.4 m in diameter. Three test parameters were evaluated: simultaneous perception (subjective angle of strabismus), motor fusion amplitude (convergence and divergence), and stereopsis (binocular disparity at 1260, 840, and 420 s of arc). Testing was performed in volunteer subjects with normal binocular vision, and the results were compared with those obtained using a major amblyoscope. Subjective angle of strabismus and motor fusion amplitude showed a significant correlation between our test and the major amblyoscope. All subjects could perceive the stereoscopic target with a binocular disparity of 480 s of arc. Our novel apparatus using the CyberDome, a hemispherical visual display system, was able to quantitatively evaluate binocular function. This apparatus offers clinical promise in the evaluation of binocular function.
Conceptual design study for an advanced cab and visual system, volume 1
NASA Technical Reports Server (NTRS)
Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.
1980-01-01
A conceptual design study was conducted to define requirements for an advanced cab and visual system. The rotorcraft system integration simulator is intended for engineering studies of mission-associated vehicle handling qualities. Principally, a technology survey and assessment of existing and proposed simulator visual display systems, image generation systems, modular cab designs, and simulator control station designs was performed and is discussed. State-of-the-art survey data were used to synthesize a set of preliminary visual display system concepts, from which five candidate display configurations were selected for further evaluation. Basic display concepts incorporated in these configurations included real-image projection, using either periscopes, fiber-optic bundles, or scanned laser optics, and virtual imaging with helmet-mounted displays. These display concepts were integrated in the study with a simulator cab concept employing a modular base for aircraft controls, crew seating, and instrumentation (or other) displays. A simple concept for inducing vibration in the various modules was developed and is described. Results of evaluations and trade-offs related to the candidate system concepts are given, along with a suggested weighting scheme for numerically comparing visual system performance characteristics.
Competing Distractors Facilitate Visual Search in Heterogeneous Displays.
Kong, Garry; Alais, David; Van der Burg, Erik
2016-01-01
In the present study, we examine how observers search among complex displays. Participants were asked to search for a big red horizontal line among 119 distractor lines of various sizes, orientations, and colours, yielding 36 different feature combinations. To understand how people search in such a heterogeneous display, we evolved the search display by using a genetic algorithm (Experiment 1). The best displays (i.e., those corresponding to the fastest reaction times) were selected and combined to create new, evolved displays. Search times declined over generations. Results show that items sharing the same colour and orientation as the target disappeared over generations, implying they interfered with search, whereas items sharing the target's colour but differing 12.5° in orientation interfered only if they were also the same size. Furthermore, and inconsistent with most dominant visual search theories, we found that non-red horizontal distractors increased over generations, indicating that these distractors facilitated visual search while participants were searching for a big red horizontally oriented target. In Experiments 2 and 3, we replicated these results using conventional, factorial experiments. Interestingly, in Experiment 4, we found that this facilitation effect was only present when the displays were very heterogeneous. While current models of visual search successfully describe search in homogeneous displays, our results challenge the ability of these models to describe visual search in heterogeneous environments.
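The evolutionary procedure of Experiment 1 (select the fastest displays, recombine and mutate them, repeat) can be sketched as a generic genetic algorithm. The fitness function below is a synthetic stand-in for measured reaction time, and the feature codes are hypothetical, not the authors' method:

```python
import random

# Toy genetic algorithm over search displays. Each display is a list of
# distractor feature codes (0..35, standing in for the 36 size/orientation/
# colour combinations); fitness is a synthetic "reaction time" that
# penalises target-similar distractors.
random.seed(1)
N_ITEMS, N_FEATURES, POP, GENS = 119, 36, 20, 30
TARGET_LIKE = {0, 1, 2}  # hypothetical codes sharing target features

def reaction_time(display):
    return 400 + 5 * sum(1 for item in display if item in TARGET_LIKE)

def evolve():
    pop = [[random.randrange(N_FEATURES) for _ in range(N_ITEMS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=reaction_time)
        parents = pop[:POP // 2]                 # keep the fastest displays
        children = []
        for _ in range(POP - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(N_ITEMS)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = random.randrange(N_ITEMS)        # point mutation
            child[i] = random.randrange(N_FEATURES)
            children.append(child)
        pop = parents + children
    return min(reaction_time(d) for d in pop)

best = evolve()
print(best)  # best simulated RT declines toward the 400 ms floor
```

Because selection keeps the fastest displays each generation, target-similar distractors are gradually bred out, mirroring the paper's observation that interfering items disappear over generations.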
Projection-type see-through holographic three-dimensional display
NASA Astrophysics Data System (ADS)
Wakunami, Koki; Hsieh, Po-Yuan; Oi, Ryutaro; Senoh, Takanori; Sasaki, Hisayuki; Ichihashi, Yasuyuki; Okui, Makoto; Huang, Yi-Pai; Yamamoto, Kenji
2016-10-01
Owing to the limited spatio-temporal resolution of display devices, dynamic holographic three-dimensional displays suffer from a critical trade-off between the display size and the visual angle. Here we show a projection-type holographic three-dimensional display, in which a digitally designed holographic optical element and a digital holographic projection technique are combined to increase both factors at the same time. In the experiment, the enlarged holographic image, which is twice as large as the original display device, projected on the screen of the digitally designed holographic optical element was concentrated at the target observation area so as to increase the visual angle, which is six times as large as that for a general holographic display. Because the display size and the visual angle can be designed independently, the proposed system will accelerate the adoption of holographic three-dimensional displays in industrial applications, such as digital signage, in-car head-up displays, smart-glasses and head-mounted displays.
Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas
2012-10-25
In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.
Houtenbos, M; de Winter, J C F; Hale, A R; Wieringa, P A; Hagenzieker, M P
2017-04-01
A large portion of road traffic crashes occur at intersections because drivers lack necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with it off), each consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible.
NASA Technical Reports Server (NTRS)
Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)
1989-01-01
Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in the visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user-interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.
Visualization of the Eastern Renewable Generation Integration Study: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron
The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). To understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.
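The study's two-stage scheduling, hourly unit commitment followed by 5-minute economic dispatch, can be illustrated with a toy merit-order dispatch (a sketch with made-up units and costs; the actual ERGIS production-cost models are far more detailed):

```python
# Toy economic dispatch: meet a 5-minute load with committed units in
# merit order (cheapest first), respecting each unit's output limits.
UNITS = [  # (name, marginal cost $/MWh, min MW, max MW) -- illustrative
    ("wind",    0.0,   0, 300),
    ("nuclear", 10.0, 400, 400),   # must-run at fixed output
    ("gas_cc",  35.0,  50, 500),
    ("gas_ct",  80.0,   0, 200),
]

def dispatch(load_mw):
    """Greedy merit-order dispatch; returns {unit: MW} or raises if short."""
    schedule = {name: lo for name, _, lo, _ in UNITS}  # start at minimums
    remaining = load_mw - sum(schedule.values())
    for name, cost, lo, hi in sorted(UNITS, key=lambda u: u[1]):
        add = min(hi - lo, max(remaining, 0))
        schedule[name] += add
        remaining -= add
    if remaining > 0:
        raise ValueError("insufficient committed capacity")
    return schedule

plan = dispatch(900)
print(plan)  # zero-marginal-cost wind is used first, peakers last
```

In a real production-cost simulation this dispatch is solved as an optimization with ramp rates, reserves, and transmission constraints; the greedy merit order above only captures the core idea that cheaper (including zero-marginal-cost renewable) energy displaces expensive generation.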
Visual supports for shared reading with young children: the effect of static overlay design.
Wood Jackson, Carla; Wahlquist, Jordan; Marquis, Cassandra
2011-06-01
This study examined the effects of two types of static overlay design (visual scene display and grid display) on 39 children's use of a speech-generating device (SGD) during shared storybook reading with an adult. This pilot project included two groups of preschool children: those with typical communication skills (n = 26) and those with complex communication needs (n = 13). All participants engaged in shared reading with two books, using each visual layout on the SGD. The children averaged a greater number of activations when presented with a grid display during introductory exploration and free play. There was a large effect of static overlay design on the number of silent hits, with more silent hits for visual scene displays. On average, the children demonstrated relatively few spontaneous activations of the SGD while the adult was reading, regardless of overlay design. When responding to questions, children with communication needs appeared to perform better when using visual scene displays, but the effect of display condition on the accuracy of responses to wh-questions was not statistically significant. In response to an open-ended question, children with communication disorders demonstrated more frequent activations of the SGD using a grid display than a visual scene display. Suggestions for future research as well as potential implications for designing AAC systems for shared reading with young children are discussed.
Reeder, B; Chung, J; Le, T; Thompson, H; Demiris, G
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records". Our objectives were to: 1) characterize older adult participants' perceived usefulness of in-home sensor data and 2) develop novel visual displays for sensor data from Ambient Assisted Living (AAL) environments that can become part of electronic health records. Semi-structured interviews were conducted with community-dwelling older adult participants during three- and six-month visits. We engaged participants in two design iterations by soliciting feedback about display types and visual displays of simulated data related to a fall scenario. Interview transcripts were analyzed to identify themes related to perceived usefulness of sensor data. Thematic analysis identified three themes: perceived usefulness of sensor data for managing health; factors that affect perceived usefulness of sensor data; and perceived usefulness of visual displays. Visual displays were cited as potentially useful for family members and health care providers. Three novel visual displays were created based on interview results, design guidelines derived from prior AAL research, and principles of graphic design theory. Participants identified potential uses of personal activity data for monitoring health status and capturing early signs of illness. One area for future research is to determine how visual displays of AAL data might be utilized to connect family members and health care providers through a shared understanding of activity levels versus a more simplified view of self-management. Connecting informal and formal caregiving networks may facilitate better communication between older adults, family members, and health care providers for shared decision-making.
Computer and visual display terminals (VDT) vision syndrome (CVDTS).
Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S
2016-07-01
Computer and visual display terminals have become an essential part of the modern lifestyle. The use of these devices has simplified our lives, in household work as well as in offices. However, prolonged use of these devices is not without complications. Computer and visual display terminals syndrome is a constellation of ocular and extraocular symptoms associated with prolonged use of visual display terminals. This syndrome is gaining importance in the modern era because of the widespread use of technology in day-to-day life. It is associated with asthenopic symptoms, visual blurring, dry eyes, musculoskeletal symptoms such as neck pain, back pain, shoulder pain, carpal tunnel syndrome, psychosocial factors, venous thromboembolism, shoulder tendonitis, and elbow epicondylitis. Proper identification of symptoms and causative factors is necessary for accurate diagnosis and management. This article focuses on the various aspects of computer and visual display terminals syndrome described in the previous literature. Further research is needed for a better understanding of its complex pathophysiology and management.
Translations on USSR Science and Technology, Physical Sciences and Technology, Number 16
1977-08-05
Investigation of the splitting of light nuclei by high-energy γ-rays, using a Wilson chamber operating in powerful electron beams... [The display units described] boast high reliability, high speed, and extremely modest power requirements. Information on the screen: visual display devices greatly facilitate... The area of application of these units includes navigation, control of power systems, machine tools, and manufacturing processes. The capabilities of...
3D Visualizations of Abstract DataSets
2010-08-01
contrasts no shadows, drop shadows, and drop lines. Subject terms: 3D displays, 2.5D displays, abstract network visualizations, depth perception, human... altitude perception in airspace management and airspace route planning; simulated-reality visualizations that employ altitude and heading as well as... cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract...
Yamasaki, Toshiki; Moritake, Kouzo; Nagai, Hidemasa; Kimura, Yoriyoshi
2002-06-01
A technique to integrate ultrasonography and endoscopy is described for transsphenoidal surgery to prevent intraoperative internal carotid artery (ICA)-related, life-threatening complications such as aneurysmal formation and carotid-cavernous fistula. The ultrasound unit helps avoid direct injury to the ICA. The technical advantage of this system is the miniature 1-mm diameter microvascular probe, which does not disturb the operative field. An arterial or venous flow source of even an invisible vessel can be detected easily, noninvasively, and reproducibly. Real-time information with a 100% detection rate for the ICA is helpful for predicting localization even in the intracavernous portion, where the ICA is invisible. The endoscope unit can visualize the dead angle areas of the operating microscope by varying the endoscopic gateways and display on a "picture-in-picture" system. The advantage of both devices is the integration with a video processor, so that the real-time information from each unit can be switched intraoperatively onto the display as required. This method is of particular help for removing lesions with intracavernous invasion or encasement of the ICA.
A visual-display and storage device
NASA Technical Reports Server (NTRS)
Bosomworth, D. R.; Moles, W. H.
1972-01-01
Memory and display device uses cathodochromic material to store visual information and a fast phosphor to recall information for display and electronic processing. The cathodochromic material changes color when bombarded with electrons, and is restored to its original color when exposed to light of the appropriate wavelength.
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to image quality metrics and related human factors research that are needed to establish baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design trade-offs for these prototype laser-based volume displays are addressed, and several critical image quality issues are identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS-100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for volume displays.
Visual search in a forced-choice paradigm
NASA Technical Reports Server (NTRS)
Holmgren, J. E.
1974-01-01
The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
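The search-rate estimate described above, the slope of a best-fitting line through response latency as a function of display size, can be sketched with an ordinary least-squares fit (the latencies below are hypothetical, not the study's data):

```python
# Least-squares fit of response time vs. display size: the slope is the
# per-item search rate (ms/item), the intercept the base response time.
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

set_sizes = [1, 2, 3, 4, 5]
latencies = [420, 455, 490, 525, 560]   # hypothetical mean RTs in ms
rate, base = ols(set_sizes, latencies)
print(rate, base)  # 35.0 ms/item search rate, 385.0 ms base time
```

Comparing such slopes across conditions is exactly how the yes-no and forced-choice search rates in this study were contrasted: a steeper slope means a slower per-item search rate.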
Head Worn Display System for Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Cupero, Frank; Valimont, Brian; Wise, John; Best, Carl; DeMers, Bob
2009-01-01
Head-Worn Displays, or so-called near-to-eye displays, have potentially significant advantages in terms of cost, overcoming cockpit space constraints, and the display of spatially-integrated information. However, many technical issues need to be overcome before these technologies can be successfully introduced into commercial aircraft cockpits. The results of three activities are reported. First, the near-to-eye display design, technological, and human factors issues are described and a literature review is presented. Second, the results of a fixed-base piloted simulation investigating the impact of near-to-eye displays on both operational and visual performance are reported. Straight-in approaches were flown in simulated visual and instrument conditions while using either a biocular or a monocular display placed on either the dominant or non-dominant eye. The pilot's flight performance, visual acuity, and ability to detect unsafe conditions on the runway were tested. The data generally support a monocular design with minimal impact due to eye dominance. Finally, a method for head-tracker system latency measurement is developed and used to compare two different devices.
Ultrascale collaborative visualization using a display-rich global cyberinfrastructure.
Jeong, Byungil; Leigh, Jason; Johnson, Andrew; Renambot, Luc; Brown, Maxine; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung
2010-01-01
The scalable adaptive graphics environment (SAGE) is high-performance graphics middleware for ultrascale collaborative visualization using a display-rich global cyberinfrastructure. Dozens of sites worldwide use this cyberinfrastructure middleware, which connects high-performance-computing resources over high-speed networks to distributed ultraresolution displays.
Are New Image Quality Figures of Merit Needed for Flat Panel Displays?
1998-06-01
The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations in 1988 adopted the MTFA as the standard... References: American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100-1988). 1988. Santa Monica
Lighting Control System for Premises with Display Screen Equipment
NASA Astrophysics Data System (ADS)
Kudryashov, A. V.
2017-11-01
The use of Display Screen Equipment (DSE) at enterprises increases the productivity and safety of production, minimizes the number of personnel, and simplifies the work of specialists, but on the other hand it changes the usual working conditions. When personnel work with displays, visual fatigue develops more quickly, which contributes to nervous tension, stress, and possible erroneous actions. The low interest of lighting-control-system developers in rooms with displays stems from the special requirements that sanitary and hygienic standards impose on such rooms (limiting excess workplace illumination). We decided to create a combined lighting system that operates with regard to both daylight and artificial light sources. The brightness of the LED lamps is adjusted via the DALI protocol, and natural illumination is adjusted by means of smart glass. The technical requirements for the lighting control system, its structural-functional scheme, and the algorithm for controlling the operation of the system have been developed. The elements of the control units, sensors, and actuators have been selected.
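The DALI dimming mentioned in this abstract uses the standardized logarithmic curve of IEC 62386, which maps arc-power levels 1-254 onto roughly 0.1-100% light output. A minimal sketch of the daylight-compensation idea — compute the artificial-light fraction needed to top up measured daylight to a target illuminance, then convert it to a DALI level — follows; the target and sensor values are hypothetical, and this is an illustration of the control principle, not the authors' actual controller:

```python
import math

def dali_level(fraction):
    """Convert a desired light-output fraction (0.001-1.0) to a DALI
    arc-power level 1-254 using the IEC 62386 logarithmic dimming curve,
    i.e. the inverse of output(n) = 10 ** ((n - 1) / (253 / 3) - 1) / 100."""
    fraction = min(max(fraction, 0.001), 1.0)
    n = (253 / 3) * (math.log10(fraction * 100) + 1) + 1
    return round(n)

def topup_level(target_lux, daylight_lux, lamp_max_lux):
    """DALI level so that artificial light fills the gap left by daylight
    (a simple feed-forward step; a real system would close the loop
    on a workplace illuminance sensor)."""
    deficit = max(target_lux - daylight_lux, 0.0)
    if deficit == 0.0:
        return 0  # DALI level 0 means lamps off
    return dali_level(deficit / lamp_max_lux)

# Hypothetical office: 500 lux target, 200 lux of daylight measured,
# lamps contribute up to 600 lux at full power
print(topup_level(500, 200, 600))
```

Clamping at level 0 when daylight alone meets the target also satisfies the "limiting excess workplace illumination" requirement cited above.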
NASA Technical Reports Server (NTRS)
Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)
1995-01-01
NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer-generated imagery and synthetic vision. This research is made possible in part by a full-color, wide-field-of-view Helmet Mounted Display (HMD) system that provides high-performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit will be discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).
Predicting Visual Consciousness Electrophysiologically from Intermittent Binocular Rivalry
O’Shea, Robert P.; Kornmeier, Jürgen; Roeber, Urte
2013-01-01
Purpose We sought brain activity that predicts visual consciousness. Methods We used electroencephalography (EEG) to measure brain activity to a 1000-ms display of sine-wave gratings, oriented vertically in one eye and horizontally in the other. This display yields binocular rivalry: irregular alternations in visual consciousness between the images viewed by the eyes. We replaced both gratings with 200 ms of darkness, the gap, before showing a second display of the same rival gratings for another 1000 ms. We followed this by a 1000-ms mask and then a 2000-ms inter-trial interval (ITI). Eleven participants pressed keys after the second display in numerous trials to say whether the orientation of the visible grating changed from before to after the gap or not. Each participant also responded to numerous non-rivalry trials in which the gratings had identical orientations for the two eyes and for which the orientation of both either changed physically after the gap or did not. Results We found that greater activity from lateral occipital-parietal-temporal areas about 180 ms after initial onset of rival stimuli predicted a change in visual consciousness more than 1000 ms later, on re-presentation of the rival stimuli. We also found that less activity from parietal, central, and frontal electrodes about 400 ms after initial onset of rival stimuli predicted a change in visual consciousness about 800 ms later, on re-presentation of the rival stimuli. There was no such predictive activity when the change in visual consciousness occurred because the stimuli changed physically. Conclusion We found early EEG activity that predicted later visual consciousness. Predictive activity 180 ms after onset of the first display may reflect adaptation of the neurons mediating visual consciousness in our displays. Predictive activity 400 ms after onset of the first display may reflect a less-reliable brain state mediating visual consciousness. PMID:24124536
Hegarty, Mary; Canham, Matt S; Fabrikant, Sara I
2010-01-01
Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of domain knowledge were investigated by examining performance and eye fixations before and after participants learned relevant meteorological principles. Map design and knowledge interacted such that salience had no effect on performance before participants learned the meteorological principles; however, after learning, participants were more accurate if they viewed maps that made task-relevant information more visually salient. Effects of display design on task performance were somewhat dissociated from effects of display design on eye fixations. The results support a model in which eye fixations are directed primarily by top-down factors (task and domain knowledge). They suggest that good display design facilitates performance not just by guiding where viewers look in a complex display but also by facilitating processing of the visual features that represent task-relevant information at a given display location. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Operator vision aids for space teleoperation assembly and servicing
NASA Technical Reports Server (NTRS)
Brooks, Thurston L.; Ince, Ilhan; Lee, Greg
1992-01-01
This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories: camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions, concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer-assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation, and location are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorman, A; Seabrook, G; Brakken, A
Purpose: Small surgical devices and needles are used in many surgical procedures. Conventionally, an x-ray film is taken to identify missing devices/needles if the post-procedure count is incorrect. There are no data to indicate the smallest surgical devices/needles that can be identified with digital radiography (DR), or its optimized acquisition technique. Methods: In this study, the DR equipment used is a Canon RadPro mobile with a CXDI-70c wireless DR plate, and the same DR plate on a fixed Siemens Multix unit. Small surgical devices and needles tested include Rubber Shod, Bulldog, Fogarty Hydrogrip, and needles with sizes 3-0 C-T1 through 8-0 BV175-6. They are imaged with PMMA block phantoms with thicknesses of 2-8 inches, and an abdomen phantom. Various DR techniques are used. Images are reviewed on the portable x-ray acquisition display, a clinical workstation, and a diagnostic workstation. Results: All small surgical devices and needles are visible in portable DR images with 2-8 inches of PMMA. However, when they are imaged with the abdomen phantom plus 2 inches of PMMA, needles smaller than 9.3 mm in length cannot be visualized at the optimized technique of 81 kV and 16 mAs. There is no significant difference in visualization with various techniques, or between the mobile and fixed radiography units. However, there is a noticeable difference in visualizing the smallest needle on a diagnostic reading workstation compared to the acquisition display on a portable x-ray unit. Conclusion: DR images should be reviewed on a diagnostic reading workstation. Using optimized DR techniques, the smallest needle that can be identified in all phantom studies is 9.3 mm. Sample DR images of various small surgical devices/needles, made available on the diagnostic workstation for comparison, may improve their identification. Further in vivo study is needed to confirm the optimized digital radiography technique for identification of lost small surgical devices and needles.
Verbal Modification via Visual Display
ERIC Educational Resources Information Center
Richmond, Edmun B.; Wallace-Childers, La Donna
1977-01-01
The inability of foreign language students to produce acceptable approximations of new vowel sounds initiated a study to devise a real-time visual display system whereby the students could match vowel production to a visual pedagogical model. The system used amateur radio equipment and a standard oscilloscope. (CHK)
Benedetto, Simone; Drai-Zerbib, Véronique; Pedrotti, Marco; Tissier, Geoffrey; Baccino, Thierry
2013-01-01
The mass digitization of books is changing the way information is created, disseminated and displayed. Electronic book readers (e-readers) generally refer to two main display technologies: the electronic ink (E-ink) and the liquid crystal display (LCD). Both technologies have advantages and disadvantages, but the question whether one or the other triggers less visual fatigue is still open. The aim of the present research was to study the effects of the display technology on visual fatigue. To this end, participants performed a longitudinal study in which two last generation e-readers (LCD, E-ink) and paper book were tested in three different prolonged reading sessions separated by - on average - ten days. Results from both objective (Blinks per second) and subjective (Visual Fatigue Scale) measures suggested that reading on the LCD (Kindle Fire HD) triggers higher visual fatigue with respect to both the E-ink (Kindle Paperwhite) and the paper book. The absence of differences between E-ink and paper suggests that, concerning visual fatigue, the E-ink is indeed very similar to the paper. PMID:24386252
Beyond the cockpit: The visual world as a flight instrument
NASA Technical Reports Server (NTRS)
Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.
1992-01-01
The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).
IViPP: A Tool for Visualization in Particle Physics
NASA Astrophysics Data System (ADS)
Tran, Hieu; Skiba, Elizabeth; Baldwin, Doug
2011-10-01
Experiments and simulations in physics generate a lot of data; visualization is helpful to prepare that data for analysis. IViPP (Interactive Visualizations in Particle Physics) is an interactive computer program that visualizes results of particle physics simulations or experiments. IViPP can handle data from different simulators, such as SRIM or MCNP. It can display relevant geometry and measured scalar data, and it can do simple selection from the visualized data. To be an effective visualization tool, IViPP must have a software architecture that can flexibly adapt to new data sources and display styles, and it must be able to display complicated geometry and measured data with a high dynamic range. We therefore organize it in a highly modular structure, develop libraries to describe geometry algorithmically, use rendering algorithms running on the GPU to display 3-D geometry at interactive rates, and represent scalar values in a visual form of scientific notation that shows both mantissa and exponent. This work was supported in part by the US Department of Energy through the Laboratory for Laser Energetics (LLE), with special thanks to Craig Sangster at LLE.
ERIC Educational Resources Information Center
Wilkinson, Krista M.; Light, Janice
2011-01-01
Purpose: Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs.…
Asymmetric top-down modulation of ascending visual pathways in pigeons.
Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur
2016-03-01
Cerebral asymmetries are a ubiquitous phenomenon evident in many species, including humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract or experience-based behaviors. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at the thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance, while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify visual ascending pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.
Large public display boards: a case study of an OR board and design implications.
Lasome, C E; Xiao, Y
2001-01-01
A compelling reason for studying artifacts in collaborative work is to inform design. We present a case study of a public display board (12 ft by 4 ft) in a Level-I trauma center operating room (OR) unit. The board has evolved into a sophisticated coordination tool for clinicians and supporting personnel. This paper draws on study findings about how the OR board is used and organizes the findings into three areas: (1) visual and physical properties of the board that are exploited for collaboration, (2) purposes the board was configured to serve, and (3) types of physical and perceptual interaction with the board. Findings and implications related to layout, size, flexibility, task management, problem-solving, resourcing, shared awareness, and communication are discussed in an effort to propose guidelines to facilitate the design of electronic, computer driven display boards in the OR environment.
ERIC Educational Resources Information Center
Koeninger, Jimmy G.
The instructional package was developed to provide the distributive education teacher-coordinator with visual materials that can be used to supplement existing textbook offerings in the area of display (visual merchandising). Designed for use with 35mm slides of retail store displays, the package allows the student to view the slides of displays…
2013-08-01
...the presence of large volumes of time-critical information. CPOF was designed to support the Army transformation to network-enabled operations. ...Cognitive Performance: The visual display of information is vital to cognitive performance. For example, the poor visual design of the radar display...
Visual search asymmetries within color-coded and intensity-coded displays.
Yamani, Yusuke; McCarley, Jason S
2010-06-01
Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Kachejian, Kerry C.; Vujcic, Doug
1999-07-01
The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next-generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat-Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM-NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.
A Novel Web Application to Analyze and Visualize Extreme Heat Events
NASA Astrophysics Data System (ADS)
Li, G.; Jones, H.; Trtanj, J.
2016-12-01
Extreme heat is the leading cause of weather-related deaths in the United States annually and is expected to increase with our warming climate. However, most of these deaths are preventable with proper tools and services to inform the public about heat waves. In this project, we have investigated the key indicators of a heat wave, the vulnerable populations, and the data visualization strategies through which those populations most effectively absorb heat wave data. A map-based web app has been created that allows users to search and visualize historical heat waves in the United States incorporating these strategies. This app utilizes daily maximum temperature data from the NOAA Global Historical Climatology Network, which contains about 2.7 million data points from over 7,000 stations per year. The point data are spatially aggregated into county-level data using county geometry from the US Census Bureau and stored in a Postgres database with PostGIS spatial capability. GeoServer, a powerful map server, is used to serve the image and data layers (WMS and WFS). The JavaScript-based web-mapping platform Leaflet is used to display the temperature layers. A number of functions have been implemented for search and display. Users can search for extreme heat events by county or by date. The "by date" option allows a user to select a date and a Tmax threshold, which then highlights all of the areas on the map that meet those date and temperature parameters. The "by county" option allows the user to select a county on the map, which then retrieves a list of heat wave dates and daily Tmax measurements. This visualization is clean, user-friendly, and novel: while this sort of time, space, and temperature information can be found by querying meteorological datasets, no existing tool neatly packages it together in an easily accessible and non-technical manner, especially at a time when climate change urges a better understanding of heat waves.
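The county-level aggregation step described in this abstract can be sketched independently of the PostGIS/GeoServer stack: group station Tmax readings by county, take the per-day maximum across stations, and flag days at or above a chosen threshold. The counties, dates, and readings below are hypothetical sample inputs, not GHCN data:

```python
from collections import defaultdict

def county_heat_days(readings, threshold_f):
    """Aggregate station daily-Tmax readings to county level and return
    {county: sorted list of dates whose county Tmax >= threshold_f}.
    readings: iterable of (county, iso_date, tmax_f) tuples."""
    per_county_day = defaultdict(dict)
    for county, date, tmax in readings:
        prev = per_county_day[county].get(date, float("-inf"))
        per_county_day[county][date] = max(prev, tmax)  # hottest station wins
    return {
        county: sorted(d for d, t in days.items() if t >= threshold_f)
        for county, days in per_county_day.items()
    }

readings = [
    ("Travis", "2011-08-27", 104.0),
    ("Travis", "2011-08-27", 101.0),   # second station, same county and day
    ("Travis", "2011-08-28", 97.0),
    ("Harris", "2011-08-27", 102.0),
]
print(county_heat_days(readings, 100.0))
```

In the app itself this aggregation is done spatially in PostGIS against Census county geometry; the sketch only shows the grouping logic behind the "by date" threshold query.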
Component-Based Visualization System
NASA Technical Reports Server (NTRS)
Delgado, Francisco
2005-01-01
A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.
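The developer/deployer/widget-set split described in this abstract implies a common contract that every widget (meter, dial, numerical indicator, and so on) must satisfy so the developer canvas and deployer can drive them uniformly. A minimal, hypothetical sketch of such a contract — names and structure are illustrative, not the actual X-38 software:

```python
from abc import ABC, abstractmethod

class Widget(ABC):
    """Hypothetical base class for display widgets (meters, dials, ...)."""

    def __init__(self, label):
        self.label = label
        self.value = None

    def update(self, value):
        """Receive a new telemetry value and re-render the widget."""
        self.value = value
        return self.render()

    @abstractmethod
    def render(self):
        """Return a textual stand-in for the widget's visual output."""

class NumericIndicator(Widget):
    """One concrete widget from the set: a labeled numeric readout."""

    def __init__(self, label, units):
        super().__init__(label)
        self.units = units

    def render(self):
        return f"{self.label}: {self.value} {self.units}"

gauge = NumericIndicator("Cabin pressure", "kPa")
print(gauge.update(101.3))
```

Keeping the contract this small is what lets non-programmers compose displays: the developer only places widgets and binds data sources, while rendering stays inside each widget class.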
Concept, design and analysis of a large format autostereoscopic display system
NASA Astrophysics Data System (ADS)
Knocke, F.; de Jongh, R.; Frömel, M.
2005-09-01
Autostereoscopic display devices with large visual field are of importance in a number of applications such as computer aided design projects, technical education, and military command systems. Typical requirements for such systems are, aside from the large visual field, a large viewing zone, a high level of image brightness, and an extended depth of field. Additional appliances such as specialized eyeglasses or head-trackers are disadvantageous for the aforementioned applications. We report on the design and prototyping of an autostereoscopic display system on the basis of projection-type one-step unidirectional holography. The prototype consists of a hologram holder, an illumination unit, and a special direction-selective screen. Reconstruction light is provided by a 2W frequency-doubled Nd:YVO4 laser. The production of stereoscopic hologram stripes on photopolymer is carried out on a special origination setup. The prototype has a screen size of 180cm × 90cm and provides a visual field of 29° when viewed from 3.6 meters. Due to the coherent reconstruction, a depth of field of several meters is achievable. Up to 18 hologram stripes can be arranged on the holder to permit a rapid switch between a series of motifs or views. Both computer generated image sequences and digital camera photos may serve as input frames. However, a comprehensive pre-distortion must be performed in order to account for optical distortion and several other geometrical factors. The corresponding computations are briefly summarized below. The performance of the system is analyzed, aspects of beam-shaping and mechanical design are discussed and photographs of early reconstructions are presented.
The Role of Prediction In Perception: Evidence From Interrupted Visual Search
Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro
2014-01-01
Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants can not only anticipate the changes but are also aware that such changes might occur. PMID:24820440
The impact of visual scanning in the laparoscopic environment after engaging in strain coping.
Klein, Martina I; DeLucia, Patricia R; Olmstead, Ryan
2013-06-01
We aimed to determine whether visual scanning has a detrimental impact on the monitoring of critical signals and the performance of a concurrent laparoscopic training task after participants engaged in Hockey's strain coping. Strain coping refers to straining cognitive (attentional) resources joined with latent decrements (i.e., stress). DeLucia and Betts (2008) reported that monitoring critical signals degraded performance of a laparoscopic peg-reversal task compared with no monitoring. However, performance did not differ between displays in which critical signals were shown on split screens (less visual scanning) and separated displays (more visual scanning). We hypothesized that effects of scanning may occur after prolonged strain coping. Using a between-subjects design, we had undergraduates perform a laparoscopic training task that induced strain coping. Then they performed a laparoscopic peg-reversal task while monitoring critical signals with a split-screen or separated display. We administered the NASA-Task Load Index (TLX) and Dundee Stress State Questionnaire (DSSQ) to assess strain coping. The TLX and DSSQ profiles indicated that participants engaged in strain coping. Monitoring critical signals resulted in slowed peg-reversal performance compared with no monitoring. Separated displays degraded critical-signal monitoring compared with split-screen displays. After novice observers experience strain coping, visual scanning can impair the detection of critical signals. Results suggest that the design and arrangement of displays in the operating room must incorporate the attentional limitations of the surgeon. Designs that induce visual scanning may impair monitoring of critical information at least in novices. Presenting displays closely in space may be beneficial.
Are Current Insulin Pumps Accessible to Blind and Visually Impaired People?
Burton, Darren M.; Uslan, Mark M.; Blubaugh, Morgan V.; Clements, Charles W.
2009-01-01
Background: In 2004, Uslan and colleagues determined that insulin pumps (IPs) on the market were largely inaccessible to blind and visually impaired persons. The objective of this study is to determine if accessibility status changed in the ensuing 4 years. Methods: Five IPs on the market in 2008 were acquired and analyzed for key accessibility traits such as speech and other audio output, tactual nature of control buttons, and the quality of visual displays. It was also determined whether or not a blind or visually impaired person could independently complete tasks such as programming the IP for insulin delivery, replacing batteries, and reading manuals and other documentation. Results: It was found that IPs have not improved in accessibility since 2004. None have speech output, and with the exception of the Animas IR 2020, no significantly improved visual display characteristics were found. Documentation is still not completely accessible. Conclusion: Insulin pumps are relatively complex devices, with serious health consequences resulting from improper use. For IPs to be used safely and independently by blind and visually impaired patients, they must include voice output to communicate all the information presented on their display screens. Enhancing display contrast and the size of the displayed information would also improve accessibility for visually impaired users. The IPs must also come with accessible user documentation in alternate formats. PMID:20144301
High performance visual display for HENP detectors
NASA Astrophysics Data System (ADS)
McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel
2001-08-01
A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detectors. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. To visualize the HENP detectors with maximal performance, we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We employ advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive controls, including the ability to slice, search, and mark areas of the detector. We incorporate the ability to make a high-quality still image of a view of the detector, to generate animations and fly-throughs of the detector, and to output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real-time visual display for events accumulated during simulations.
Support for fast comprehension of ICU data: visualization using metaphor graphics.
Horn, W; Popow, C; Unterasinger, L
2001-01-01
The time-oriented analysis of electronic patient records in (neonatal) intensive care units is a tedious and time-consuming task. Graphic data visualization should make it easier for physicians to assess the overall situation of a patient and to recognize essential changes over time. Metaphor graphics are used to sketch the most relevant parameters for characterizing a patient's situation. By repeating the graphic object in 24 frames, the situation of the ICU patient is presented in one display, usually summarizing the last 24 h. VIE-VISU is a data visualization system which uses multiples to present the change in the patient's status over time in graphic form. Each multiple is a highly structured metaphor graphic object. Each object visualizes important ICU parameters from circulation, ventilation, and fluid balance. The design using multiples promotes a focus on stability and change. A stable patient is recognizable at first sight, a continuously improving or worsening condition is easy to analyze, and drastic changes in the patient's situation get the viewer's attention immediately.
Review of fluorescence guided surgery visualization and overlay techniques
Elliott, Jonathan T.; Dsouza, Alisha V.; Davis, Scott C.; Olson, Jonathan D.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.
2015-01-01
In fluorescence guided surgery, data visualization represents a critical step between signal capture and the display needed for clinical decisions informed by that signal. The diversity of methods for displaying surgical images is reviewed, with a particular focus on electronically detected and visualized signals, as required for near-infrared or low-concentration tracers. Factors driving the choices, such as human perception, the need for rapid decision making in a surgical environment, and biases induced by display choices, are outlined. Five practical suggestions are given for optimal display orientation, color map, transparency/alpha function, dynamic range compression, and color perception checks. PMID:26504628
Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays
ERIC Educational Resources Information Center
Yamani, Yusuke; McCarley, Jason S.
2010-01-01
Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…
ERIC Educational Resources Information Center
Rozga, Agata; King, Tricia Z.; Vuduc, Richard W.; Robins, Diana L.
2013-01-01
We examined facial electromyography (fEMG) activity to dynamic, audio-visual emotional displays in individuals with autism spectrum disorders (ASD) and typically developing (TD) individuals. Participants viewed clips of happy, angry, and fearful displays that contained both facial expression and affective prosody while surface electrodes measured…
Usage of stereoscopic visualization in the learning contents of rotational motion.
Matsuura, Shu
2013-01-01
Rotational motion plays an essential role in physics even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to lead to dizziness and even nausea for some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers, while maintaining the overall effect of instantaneous spatial recognition.
Multiple-Flat-Panel System Displays Multidimensional Data
NASA Technical Reports Server (NTRS)
Gundo, Daniel; Levit, Creon; Henze, Christopher; Sandstrom, Timothy; Ellsworth, David; Green, Bryan; Joly, Arthur
2006-01-01
The NASA Ames hyperwall is a display system designed to facilitate the visualization of sets of multivariate and multidimensional data like those generated in complex engineering and scientific computations. The hyperwall includes a 7 × 7 matrix of computer-driven flat-panel video display units, each presenting an image of 1,280 × 1,024 pixels. The term hyperwall reflects the fact that this system is a more capable successor to prior computer-driven multiple-flat-panel display systems known by names that include the generic term powerwall and the trade names PowerWall and Powerwall. Each of the 49 flat-panel displays is driven by a rack-mounted, dual-central-processing-unit, workstation-class personal computer equipped with a high-performance graphical-display circuit card and with a hard-disk drive having a storage capacity of 100 GB. Each such computer is a slave node in a master/slave computing/data-communication system (see Figure 1). The computer that acts as the master node is similar to the slave-node computers, except that it runs the master portion of the system software and is equipped with a keyboard and mouse for control by a human operator. The system utilizes commercially available master/slave software along with custom software that enables the human controller to interact simultaneously with any number of selected slave nodes. In a powerwall, a single rendering task is spread across multiple processors and then the multiple outputs are tiled into one seamless super-display. It must be noted that the hyperwall concept subsumes the powerwall concept in that a single scene could be rendered as a mosaic image on the hyperwall. However, the hyperwall offers a wider set of capabilities to serve a different purpose: The hyperwall concept is one of (1) simultaneously displaying multiple different but related images, and (2) providing means for composing and controlling such sets of images.
In place of elaborate software or hardware crossbar switches, the hyperwall concept substitutes reliance on the human visual system for integration, synthesis, and discrimination of patterns in complex and high-dimensional data spaces represented by the multiple displayed images. The variety of multidimensional data sets that can be displayed on the hyperwall is practically unlimited. For example, Figure 2 shows a hyperwall display of surface pressures and streamlines from a computational simulation of airflow about an aerospacecraft at various Mach numbers and angles of attack. In this display, Mach numbers increase from left to right and angles of attack increase from bottom to top. That is, all images in the same column represent simulations at the same Mach number, while all images in the same row represent simulations at the same angle of attack. The same viewing transformations and the same mapping from surface pressure to colors were used in generating all the images.
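The Mach-number-by-column, angle-of-attack-by-row layout described above amounts to a mapping from parameter combinations to tile positions. The sketch below is purely illustrative: the parameter values, filenames, and grid size are invented for the example, not taken from the NASA system.

```python
# Hypothetical sketch of the hyperwall layout idea: one pre-rendered image per
# (Mach number, angle of attack) combination, tiled so that columns vary Mach
# number and rows vary angle of attack. All values and names are invented.
machs = [0.5, 0.7, 0.9]   # one column per Mach number (left to right)
aoas = [0, 5, 10]         # one row per angle of attack (bottom to top)

# Map each (column, row) tile to the image that slave node should display.
tiles = {(col, row): f"sim_mach{m}_aoa{a}.png"
         for col, m in enumerate(machs)
         for row, a in enumerate(aoas)}
```

In this scheme each slave node renders only its own tile, while the master node broadcasts shared viewing transformations and color mappings so that all tiles remain directly comparable, as the abstract describes.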
Visual field information in Nap-of-the-Earth flight by teleoperated Helmet-Mounted displays
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Kohn, S.; Merhav, S. J.
1991-01-01
The human ability to derive Control-Oriented Visual Field Information from teleoperated Helmet-Mounted Displays in Nap-of-the-Earth flight is investigated. The visual field with these types of displays originates from a Forward Looking Infrared Radiation Camera, gimbal-mounted at the front of the aircraft and slaved to the pilot's line-of-sight to obtain wide-angle visual coverage. Although these displays have proved effective in Apache and Cobra helicopter night operations, they demand very high pilot proficiency and impose a heavy workload. Experimental work presented in the paper has shown that part of the difficulties encountered in vehicular control by means of these displays can be attributed to the narrow viewing aperture and to head/camera slaving-system phase lags. Both of these shortcomings impair visuo-vestibular coordination when voluntary head rotation is present. This might result in errors in estimating the Control-Oriented Visual Field Information vital in vehicular control, such as the vehicle yaw rate or the anticipated flight path, or might even lead to visuo-vestibular conflicts (motion sickness). Since, under these conditions, the pilot will tend to minimize head rotation, the full wide-angle coverage of the Helmet-Mounted Display, provided by the line-of-sight slaving system, is not always fully utilized.
Maxillary anterior papilla display during smiling: a clinical study of the interdental smile line.
Hochman, Mark N; Chu, Stephen J; Tarnow, Dennis P
2012-08-01
The purpose of this research was to quantify the visual display (presence) or lack of display (absence) of interdental papillae during maximum smiling in a patient population aged 10 to 89 years. Four hundred twenty digital single-lens reflex photographs of patients were taken and examined for the visual display of interdental papillae between the maxillary anterior teeth during maximum smiling. Three digital photographs were taken per patient from the frontal, right frontal-lateral, and left frontal-lateral views. The data set of photographs was examined by two examiners for the presence or absence of the visual display of papillae. The visual display of interdental papillae during maximum smiling occurred in 380 of the 420 patients examined in this study, equivalent to a 91% occurrence rate. Eighty-seven percent of all patients categorized as having a low gingival smile line (n = 303) were found to display the interdental papillae upon smiling. Differences were noted for individual age groups according to the decade of life as well as a trend toward decreasing papillary display with increasing age. The importance of interdental papillae display during dynamic smiling should not be left undiagnosed since it is visible in over 91% of older patients and in 87% of patients with a low gingival smile line, representing a common and important esthetic element that needs to be assessed during smile analysis of the patient.
Assessment of OLED displays for vision research.
Cooper, Emily A; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E; Norcia, Anthony M
2013-10-23
Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function ("gamma correction"). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications.
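The power-function ("gamma") characterization mentioned above can be illustrated with a short fit. This is a generic sketch, not the authors' measurement code; the function name and the synthetic data are assumptions for the example.

```python
# Sketch: estimating a display's gamma from luminance measurements by fitting
# L = a * (v / v_max) ** gamma, i.e. a straight line in log-log coordinates.
import numpy as np

def fit_gamma(levels, luminance):
    """Fit gamma and peak luminance; `levels` are digital drive values
    (e.g. 0-255), `luminance` the measured output (e.g. in cd/m^2)."""
    v = np.asarray(levels, dtype=float)
    L = np.asarray(luminance, dtype=float)
    mask = (v > 0) & (L > 0)              # logs are undefined at zero
    x = np.log(v[mask] / v.max())
    y = np.log(L[mask])
    gamma, log_a = np.polyfit(x, y, 1)    # slope = gamma, intercept = log(a)
    return gamma, np.exp(log_a)

# Synthetic check: data generated with gamma = 2.2 and a 100 cd/m^2 peak.
levels = np.arange(1, 256)
lum = 100.0 * (levels / 255.0) ** 2.2
gamma, peak = fit_gamma(levels, lum)      # recovers ~2.2 and ~100.0
```

A well-behaved display, as the abstract reports for these OLED panels, is one where such a fit leaves only small residuals across the full drive range.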
Hemispheric differences in visual search of simple line arrays.
Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W
1990-01-01
The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.
Gerber, Stephan M; Jeitziner, Marie-Madlen; Wyss, Patric; Chesham, Alvin; Urwyler, Prabitha; Müri, René M; Jakob, Stephan M; Nef, Tobias
2017-10-16
After a prolonged stay in an intensive care unit (ICU), patients often complain about cognitive impairments that affect health-related quality of life after discharge. The aim of this proof-of-concept study was to test the feasibility and effects of controlled visual and acoustic stimulation in a virtual reality (VR) setup in the ICU. The VR setup consisted of a head-mounted display in combination with an eye tracker and sensors to assess vital signs. The stimulation consisted of videos featuring natural scenes and was tested in 37 healthy participants in the ICU. The VR stimulation led to a reduction of heart rate (p = 0.049) and blood pressure (p = 0.044). The fixation/saccade ratio (p < 0.001) was increased when a visual target was presented superimposed on the videos (reduced search activity), reflecting enhanced visual processing. Overall, the VR stimulation had a relaxing effect, as shown in vital markers of physical stress, and participants explored less when attending to the target. Our study indicates that VR stimulation in ICU settings is feasible and beneficial for critically ill patients.
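The fixation/saccade ratio used above can be made concrete with a minimal velocity-threshold classifier. This is an assumed sketch: the 30°/s threshold is a common eye-tracking convention (I-VT), not a parameter taken from the study.

```python
# Sketch of an I-VT-style split of gaze samples into fixations and saccades:
# samples below the velocity threshold count as fixation samples, those at or
# above it as saccade samples. The threshold is a conventional default.
def fixation_saccade_ratio(velocities_deg_per_s, threshold=30.0):
    fixations = sum(1 for v in velocities_deg_per_s if v < threshold)
    saccades = sum(1 for v in velocities_deg_per_s if v >= threshold)
    return fixations / saccades if saccades else float("inf")

# Hypothetical gaze-velocity samples in degrees per second.
ratio = fixation_saccade_ratio([5, 12, 80, 3, 150, 7, 9])   # 5 / 2 = 2.5
```

A higher ratio means relatively more fixating and less saccading, consistent with the reduced search activity the study observed while participants attended to the superimposed target.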
NASA Astrophysics Data System (ADS)
Schlam, E.
1983-01-01
Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered are related to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminals and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin-film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye-switching liquid crystal displays, light-field dichroic liquid crystal displays for very large area displays, and the hardening of military plasma displays for a nuclear environment.
A lattice model for data display
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.
1994-01-01
In order to develop a foundation for visualization, we develop lattice models for data objects and displays that focus on the fact that data objects are approximations to mathematical objects and real displays are approximations to ideal displays. These lattice models give us a way to quantize the information content of data and displays and to define conditions on the visualization mappings from data to displays. Mappings satisfy these conditions if and only if they are lattice isomorphisms. We show how to apply this result to scientific data and display models, and discuss how it might be applied to recursively defined data types appropriate for complex information processing.
NASA Astrophysics Data System (ADS)
Hopper, Darrel G.
2000-08-01
Displays were invented just in the last century. The human visual system evolved over millions of years. The disparity between the natural world 'display' and that 'sampled' by year 2000 technology is more than a factor of one million. Over 1000X of this disparity between the fidelity of current electronic displays and human visual capacity is in 2D resolution alone. Then there is true 3D, which adds an additional factor of over 1000X. The present paper focuses just on the 2D portion of this grand technology challenge. Should a significant portion of this gap be closed, say just 10X by 2010, display technology can help drive a revolution in military affairs. Warfighter productivity must grow dramatically, and improved display technology systems can create a critical opportunity to increase defense capability while decreasing crew sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradel, Lauren; Endert, Alexander; Koch, Kristen
2013-08-01
Large, high-resolution vertical displays carry the potential to increase the accuracy of collaborative sensemaking, given correctly designed visual analytics tools. From an exploratory user study using a fictional textual intelligence analysis task, we investigated how users interact with the display to construct spatial schemas and externalize information, as well as how they establish shared and private territories. We investigated the space management strategies of users partitioned by the type of tool philosophy followed (visualization- or text-centric). We classified the types of territorial behavior exhibited in terms of how the users interacted with information on the display (integrated or independent workspaces). Next, we examined how territorial behavior impacted the common ground between the pairs of users. Finally, we offer design suggestions for building future co-located collaborative visual analytics tools specifically for use on large, high-resolution vertical displays.
The New Visual Displays That Are "Floating" Your Way. Building Digital Libraries
ERIC Educational Resources Information Center
Huwe, Terence K.
2005-01-01
In this column, the author describes three very experimental visual display technologies that will affect library collections and services in the near future. While each of these new display strategies is unique in its technological approach, there is a common denominator to all three: better freedom of mobility that will allow people to interact…
ERIC Educational Resources Information Center
Hegarty, Mary; Canham, Matt S.; Fabrikant, Sara I.
2010-01-01
Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of…
NASA Technical Reports Server (NTRS)
Randle, R. J.; Roscoe, S. N.; Petitt, J. C.
1980-01-01
Twenty professional pilots observed a computer-generated airport scene during simulated autopilot-coupled night landing approaches and at two points (20 sec and 10 sec before touchdown) judged whether the airplane would undershoot or overshoot the aimpoint. Visual accommodation was continuously measured using an automatic infrared optometer. Experimental variables included approach slope angle, display magnification, visual focus demand (using ophthalmic lenses), and presentation of the display as either a real (direct view) or a virtual (collimated) image. Aimpoint judgments shifted predictably with actual approach slope and display magnification. Both pilot judgments and measured accommodation interacted with focus demand with real-image displays but not with virtual-image displays. With either type of display, measured accommodation lagged far behind focus demand and was reliably less responsive to the virtual images. Pilot judgments shifted dramatically from an overwhelming perceived-overshoot bias 20 sec before touchdown to a reliable undershoot bias 10 sec later.
Pankok, Carl; Kaber, David B
2018-05-01
Existing measures of display clutter in the literature generally exhibit weak correlations with task performance, which limits their utility in safety-critical domains. A literature review led to formulation of an integrated display data- and user knowledge-driven measure of display clutter. A driving simulation experiment was conducted in which participants were asked to search 'high' and 'low' clutter displays for navigation information. Data-driven measures and subjective perceptions of clutter were collected along with patterns of visual attention allocation and driving performance responses during time periods in which participants searched the navigation display for information. The new integrated measure was more strongly correlated with driving performance than other, previously developed measures of clutter, particularly in the case of low-clutter displays. Integrating display data and user knowledge factors with patterns of visual attention allocation shows promise for measuring display clutter and correlation with task performance, particularly for low-clutter displays. Practitioner Summary: A novel measure of display clutter was formulated, accounting for display data content, user knowledge states and patterns of visual attention allocation. The measure was evaluated in terms of correlations with driver performance in a safety-critical driving simulation study. The measure exhibited stronger correlations with task performance than previously defined measures.
Perceptual response to visual noise and display media
NASA Technical Reports Server (NTRS)
Durgin, Frank H.; Proffitt, Dennis R.
1993-01-01
The present project was designed to follow up an earlier investigation in which we studied perceptual adaptation in response to the use of Night Vision Goggles, or image intensification (I²) systems, such as those employed in the military. Our chief concern in the earlier studies was with the dynamic visual noise that is a byproduct of the I² technology: under low light conditions, there is a great deal of 'snow' or sporadic 'twinkling' of pixels in the I² display, which is more salient as the ambient light levels become lower. Because prolonged exposure to static visual noise produces strong adaptation responses, we reasoned that the dynamic visual noise of I² displays might have a similar effect, which could have implications for their long-term use. However, in the series of experiments reported last year, no evidence at all of such aftereffects following extended exposure to I² displays was found. This finding surprised us and led us to propose the following studies: (1) an investigation of dynamic visual noise and its capacity to produce aftereffects; and (2) an investigation of the perceptual consequences of characteristics of the display media.
Rapid pupil-based assessment of glaucomatous damage.
Chen, Yanjun; Wyatt, Harry J; Swanson, William H; Dul, Mitchell W
2008-06-01
To investigate the ability of a technique employing pupillometry and functionally-shaped stimuli to assess loss of visual function due to glaucomatous optic neuropathy. Pairs of large stimuli, mirror images about the horizontal meridian, were displayed alternately in the upper and lower visual field. Pupil diameter was recorded and analyzed in terms of the "contrast balance" (relative sensitivity to the upper and lower stimuli), and the pupil constriction amplitude to upper and lower stimuli separately. A group of 40 patients with glaucoma was tested twice in a first session, and twice more in a second session, 1 to 3 weeks later. A group of 40 normal subjects was tested with the same protocol. Results for the normal subjects indicated functional symmetry in upper/lower retina, on average. Contrast balance results for the patients with glaucoma differed from normal: half the normal subjects had contrast balance within 0.06 log unit of equality and 80% had contrast balance within 0.1 log unit. Half the patients had contrast balances more than 0.1 log unit from equality. Patient contrast balances were moderately correlated with predictions from perimetric data (r = 0.37, p < 0.00001). Contrast balances correctly classified visual field damage in 28 patients (70%), and response amplitudes correctly classified 24 patients (60%). When contrast balance and response amplitude were combined, receiver operating characteristic area for discriminating glaucoma from normal was 0.83. Pupillary evaluation of retinal asymmetry provides a rapid method for detecting and classifying visual field defects. In this patient population, classification agreed with perimetry in 70% of eyes.
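The "contrast balance" metric above can be made concrete as the log10 ratio of the pupil-response sensitivities to upper- and lower-field stimuli, with 0 indicating symmetry. The function name, the sample amplitudes, and the use of constriction amplitude as the sensitivity measure are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a contrast-balance computation: the asymmetry between
# upper- and lower-field pupil responses, expressed in log10 units.
import math

def contrast_balance(upper_response, lower_response):
    """Return 0.0 for upper/lower symmetry; the abstract's 0.1 log-unit
    criterion separated most patients from most normal subjects."""
    return math.log10(upper_response / lower_response)

# Hypothetical pupil constriction amplitudes (arbitrary units).
balance = contrast_balance(0.42, 0.30)
asymmetric = abs(balance) > 0.1   # flagged as a candidate field defect
```

Under this framing, glaucomatous damage confined to one hemifield pushes the balance away from zero, which is why half the patients but only a fifth of normal subjects exceeded the 0.1 log-unit criterion.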
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Rostang, Johan; Avanaki, Ali; Espig, Kathryn; Xthona, Albert; Cocuranu, Ioan; Parwani, Anil V.; Pantanowitz, Liron
2014-03-01
Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when they are presented on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images of different subspecialties on three different display systems, it was concluded that pathology images look different when visualized on different display systems. These visual differences are most important when they occur in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, the specific calibration target chosen for the display system appears to have an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibrating to the DICOM GSDF target performed slightly worse than sRGB, while a new experimental calibration target, CSDF, performed better than both DICOM GSDF and sRGB. This result is promising, as it suggests that further research could lead to a better-defined, optimized calibration target for digital pathology images, with a positive effect on clinical performance.
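The comparison described above rests on a color-difference metric between a feature and its background. As a minimal sketch we use the simpler CIE76 metric (Euclidean distance in CIELAB) rather than the CIEDE2000 (dE2000) formula the paper actually uses; the numbers differ, but the comparison logic is the same. All Lab values below are illustrative, not measurements from any display.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB (stand-in for dE2000)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def feature_contrast(background_lab, feature_lab):
    """Perceived-contrast proxy for a clinical feature against its background."""
    return delta_e_76(background_lab, feature_lab)

# The same slide region rendered under two hypothetical display calibrations:
contrast_a = feature_contrast((70.0, 10.0, 15.0), (55.0, 30.0, 5.0))
contrast_b = feature_contrast((72.0, 8.0, 16.0), (60.0, 22.0, 9.0))
```

Under this toy model, calibration A renders the feature with higher perceived contrast than calibration B, which is exactly the kind of display-dependent difference the study measures.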
Stowasser, Annette; Mohr, Sarah; Buschbeck, Elke; Vilinsky, Ilya
2015-01-01
Students learn best when projects are multidisciplinary, hands-on, and provide ample opportunity for self-driven investigation. We present a teaching unit that leads students to explore relationships between sensory function and ecology. Field studies, which are rare in neurobiology education, are combined with laboratory experiments that assess visual properties of insect eyes, using electroretinography (ERG). Comprised of nearly one million species, insects are a diverse group of animals, living in nearly all habitats and ecological niches. Each of these lifestyles puts different demands on their visual systems, and accordingly, insects display a wide array of eye organizations and specializations. Physiologically relevant differences can be measured using relatively simple extracellular electrophysiological methods that can be carried out with standard equipment, much of which is already in place in most physiology laboratories. The teaching unit takes advantage of the large pool of locally available species, some of which likely show specialized visual properties that can be measured by students. In the course of the experiments, students collect local insects or other arthropods of their choice, are guided to formulate hypotheses about how the visual system of “their” insects might be tuned to the lifestyle of the species, and use ERGs to investigate the insects’ visual response dynamics, and both chromatic and temporal properties of the visual system. Students are then guided to interpret their results in both a comparative physiological and ecological context. This set of experiments closely mirrors authentic research and has proven to be a popular, informative and highly engaging teaching tool. PMID:26240534
When the Wheels Touch Earth and the Flight is Through, Pilots Find One Eye is Better Than Two?
NASA Technical Reports Server (NTRS)
Valimont, Brian; Wise, John A.; Nichols, Troy; Best, Carl; Suddreth, John; Cupero, Frank
2009-01-01
This study investigated the impact of near-to-eye (NTE) displays on both operational and visual performance by employing a human-in-the-loop simulation of straight-in ILS approaches flown using an NTE display. The approaches were flown in simulated visual and instrument conditions while using either a biocular NTE display or a monocular NTE display on either the dominant or non-dominant eye. The pilot's flight performance, visual acuity, and ability to detect unsafe conditions on the runway were tested.
NASA Technical Reports Server (NTRS)
Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.
1992-01-01
This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
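The channel model described above can be sketched compactly: decompose a luminance signal with a Haar transform into progressively coarser channels, scale each channel's energy by a contrast-sensitivity weight, and read off per-channel detectability. The weights and the RMS-energy detectability rule below are illustrative placeholders, not the paper's fitted human-contrast-sensitivity values.

```python
def haar_step(signal):
    """One Haar analysis step: pairwise averages (coarse) and differences (fine)."""
    coarse = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    fine = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return coarse, fine

def channel_detectability(signal, weights):
    """RMS energy of each Haar level, scaled by a per-level sensitivity weight."""
    detectabilities = []
    level = signal
    for w in weights:
        level, fine = haar_step(level)
        rms = (sum(x * x for x in fine) / len(fine)) ** 0.5
        detectabilities.append(w * rms)
    return detectabilities

# A row with fine-scale alternation: all energy lands in the finest channel.
row = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
d = channel_detectability(row, weights=[1.0, 0.8, 0.5])
```

The full model applies this separately to luminance, red-green, and blue-yellow images and adds orientation selectivity; this one-dimensional sketch shows only the frequency-channel decomposition and weighting step.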
Using high-resolution displays for high-resolution cardiac data.
Goodyer, Christopher; Hodrien, John; Wood, Jason; Kohl, Peter; Brodlie, Ken
2009-07-13
The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As the volumes of data increase from improvements in scanning technology, the methods applied to visualization must evolve. In this paper, we address the interactive display of data from high-resolution magnetic resonance imaging scanning of a rabbit heart and subsequent histological imaging. We describe a visualization environment involving a display wall of tiled liquid crystal display (LCD) panels and associated software, which provides an interactive and intuitive user interface. The oView software is an OpenGL application written for the VR Juggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at the universities of Leeds and Oxford. We discuss important factors to be considered for interactive two-dimensional display of large three-dimensional datasets, including the use of intuitive input devices and level-of-detail aspects.
NASA Technical Reports Server (NTRS)
Hall, William A.; Gilbert, John
1990-01-01
Electronic metronome paces users through wide range of exercise routines. Conceptual programmable cadence timer provides rhythmic aural and visual cues. Timer automatically changes cadence according to program entered by the user. It also functions as clock, stopwatch, or alarm. Modular pacer operated as single unit or as two units. With audiovisual module moved away from base module, user concentrates on exercise cues without distraction from information appearing on the liquid-crystal display. Variety of uses in rehabilitative medicine, experimental medicine, sports, and gymnastics. Used in intermittent positive-pressure breathing treatment, in which patient must rhythmically inhale and retain medication delivered under positive pressure; and in incentive spirometer treatment, in which patient must inhale maximally at regular intervals.
Helmet-mounted display systems for flight simulation
NASA Technical Reports Server (NTRS)
Haworth, Loren A.; Bucher, Nancy M.
1989-01-01
Simulation scientists are continually improving simulation technology with the goal of more closely replicating the physical environment of the real world. The presentation or display of visual information is one area in which recent technical improvements have been made that are fundamental to conducting simulated operations close to the terrain. Detailed and appropriate visual information is especially critical for nap-of-the-earth helicopter flight simulation where the pilot maintains an 'eyes-out' orientation to avoid obstructions and terrain. This paper describes visually coupled wide field of view helmet-mounted display (WFOVHMD) system technology as a viable visual presentation system for helicopter simulation. Tradeoffs associated with this mode of presentation as well as research and training applications are discussed.
Interactive displays in medical art
NASA Technical Reports Server (NTRS)
Mcconathy, Deirdre Alla; Doyle, Michael
1989-01-01
Medical illustration is a field of visual communication with a long history. Traditional medical illustrations are static, 2-D, printed images; highly realistic depictions of the gross morphology of anatomical structures. Today medicine requires the visualization of structures and processes that have never before been seen. Complex 3-D spatial relationships require interpretation from 2-D diagnostic imagery. Pictures that move in real time have become clinical and research tools for physicians. Medical illustrators are involved with the development of interactive visual displays for three different, but not discrete, functions: as educational materials, as clinical and research tools, and as data bases of standard imagery used to produce visuals. The production of interactive displays in the medical arts is examined.
SEEING IS BELIEVING, AND BELIEVING IS SEEING
NASA Astrophysics Data System (ADS)
Dutrow, B. L.
2009-12-01
Geoscience disciplines are filled with visual displays of data. From the first cave drawings to remote imaging of our planet, visual displays of information have been used to understand and interpret our discipline. For practitioners of the art, visuals comprise the core around which we write scholarly articles, teach our students, and make everyday decisions. The effectiveness of visual communication, however, varies greatly. For many visual displays, a significant amount of prior knowledge is needed to understand and interpret various representations. If this is missing, key components of communication fail. One common example is the use of animations to explain high-density and typically complex data. Do animations effectively convey information, simply "wow" an audience, or do they confuse the subject by using unfamiliar forms and representations? Prior knowledge affects the information derived from visuals, and when communicating with non-experts this factor is exacerbated. For example, in an advanced geology course, fractures in a rock are viewed by petroleum engineers as conduits for fluid migration, while geoscience students 'see' the minerals lining the fracture. In contrast, a lay audience might view these images as abstract art. Without specific and direct accompanying verbal or written communication, such an image is viewed radically differently by disparate audiences. Experts and non-experts do not 'see' equivalent images. Each visual must be carefully constructed with its communication task in mind. To enhance learning and communication at all levels by visual displays of data requires that we teach visual literacy as part of our curricula. As we move from one form of visual representation to another, our mental images are expanded, as is our ability to see and interpret new visual forms, thus promoting life-long learning. Visual literacy is key to communication in our visually rich discipline. What do you see?
Human Factors Engineering Program Review Model
2004-02-01
ANSI/HFS 100-1988: American National Standard for Human Factors Engineering of Visual Display Terminal Workstations. Human Factors Society, Santa Monica, California, 1988.
Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.
Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi
2016-05-30
Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, we propose a general framework for depth mapping that optimizes visual comfort for S3D display. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
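The global remapping step described above can be sketched as a linear map from the source depth range of a color-plus-depth signal into a narrower "comfort" range centered on an adjusted zero-disparity plane. The specific ranges and plane value below are illustrative assumptions; the paper's two-stage global/local optimization is not reproduced here.

```python
def remap_depth(depth, src_range, comfort_range, zero_plane=0.5):
    """Linearly remap one normalized depth value into the comfort range."""
    src_lo, src_hi = src_range
    cmf_lo, cmf_hi = comfort_range
    t = (depth - src_lo) / (src_hi - src_lo)      # position in source range, 0..1
    remapped = cmf_lo + t * (cmf_hi - cmf_lo)     # into the comfort range
    # Shift so the comfort range is centered on the zero-disparity plane:
    center = (cmf_lo + cmf_hi) / 2
    return remapped + (zero_plane - center)

# Full-range depth map squeezed into an assumed comfort zone of [0.35, 0.65]:
depth_map = [0.0, 0.25, 0.5, 1.0]
comfortable = [remap_depth(d, (0.0, 1.0), (0.35, 0.65)) for d in depth_map]
```

Compressing depth this way reduces on-screen disparity magnitudes, which is the mechanism by which such remapping trades depth range for viewing comfort.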
JPL Earth Science Center Visualization Multitouch Table
NASA Astrophysics Data System (ADS)
Kim, R.; Dodge, K.; Malhotra, S.; Chang, G.
2014-12-01
The JPL Earth Science Center Visualization Table combines specialized software and hardware to allow multitouch, multiuser, and remote display control, creating a seamlessly integrated experience for visualizing JPL missions and their remote sensing data. The software is fully GIS capable through time-aware OGC WMTS, using the Lunar Mapping and Modeling Portal as the GIS backend to continuously ingest and retrieve real-time remote sensing data and satellite location data. The 55-inch and 82-inch unlimited-finger-count multitouch displays allow multiple users to explore JPL Earth missions and visualize remote sensing data through a very intuitive and interactive touch graphical user interface. To improve the integrated experience, the Earth Science Center Visualization Table team developed network streaming, which allows the table software to stream data visualizations to nearby remote displays through a computer network. This visualization/presentation tool not only supports Earth science operations but is specifically designed for education and public outreach, and will contribute significantly to STEM. Our presentation will include an overview of our software and hardware and a showcase of our system.
Automated objective characterization of visual field defects in 3D
NASA Technical Reports Server (NTRS)
Fink, Wolfgang (Inventor)
2006-01-01
A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
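The three automated defect measures named in this abstract can be sketched on a toy visual field: given sensitivity deviations (dB) on a test grid, compute the defect's area (defective points times the area each covers), its "volume" (summed depth of the defective deviations), and a simple boundary-steepness statistic for the rapidity of change at the defect edge. The -5 dB cutoff and 36 deg² per-point area are assumptions for illustration, not values from the patent.

```python
def defect_metrics(deviations, point_area_deg2=36.0, cutoff_db=-5.0):
    """deviations: dict mapping (x, y) grid location -> deviation in dB."""
    defective = {p: d for p, d in deviations.items() if d <= cutoff_db}
    area = len(defective) * point_area_deg2
    volume = sum(-d for d in defective.values())  # summed defect depth in dB
    # Boundary steepness: mean |gradient| between defective points and
    # their non-defective 4-neighbours.
    steps = []
    for (x, y), d in defective.items():
        for nb in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nb in deviations and nb not in defective:
                steps.append(abs(deviations[nb] - d))
    steepness = sum(steps) / len(steps) if steps else 0.0
    return area, volume, steepness

# A tiny three-point field with one deep defect in the middle:
field = {(0, 0): -1.0, (1, 0): -12.0, (2, 0): -1.0}
area, volume, steepness = defect_metrics(field)
```

On a real perimetric grid the same loops run over the full test pattern; the steepness statistic is the sketch's stand-in for the patent's statistical description of rapidity of change at the defect boundary.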
Usability Evaluation of a Flight-Deck Airflow Hazard Visualization System
NASA Technical Reports Server (NTRS)
Aragon, Cecilia R.
2004-01-01
Many aircraft accidents each year are caused by encounters with unseen airflow hazards near the ground, such as vortices, downdrafts, low level wind shear, microbursts, or turbulence from surrounding vegetation or structures near the landing site. These hazards can be dangerous even to airliners; there have been hundreds of fatalities in the United States in the last two decades attributable to airliner encounters with microbursts and low level wind shear alone. However, helicopters are especially vulnerable to airflow hazards because they often have to operate in confined spaces and under operationally stressful conditions (such as emergency search and rescue, military or shipboard operations). Providing helicopter pilots with an augmented-reality display visualizing local airflow hazards may be of significant benefit. However, the form such a visualization might take, and whether it does indeed provide a benefit, had not been studied before our experiment. We recruited experienced military and civilian helicopter pilots for a preliminary usability study to evaluate a prototype augmented-reality visualization system. The study had two goals: first, to assess the efficacy of presenting airflow data in flight; and second, to obtain expert feedback on sample presentations of hazard indicators to refine our design choices. The study addressed the optimal way to provide critical safety information to the pilot, what level of detail to provide, whether to display specific aerodynamic causes or potential effects only, and how to safely and effectively shift the locus of attention during a high-workload task. Three-dimensional visual cues, with varying shape, color, transparency, texture, depth cueing, and use of motion, depicting regions of hazardous airflow, were developed and presented to the pilots. The study results indicated that such a visualization system could be of significant value in improving safety during critical takeoff and landing operations, and also gave clear indications of the best design choices in producing the hazard visual cues.
Assessment of OLED displays for vision research
Cooper, Emily A.; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E.; Norcia, Anthony M.
2013-01-01
Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function (“gamma correction”). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications. PMID:24155345
Hybrid foraging search: Searching for multiple instances of multiple types of target
Wolfe, Jeremy M.; Aizenman, Avigael M.; Boettcher, Sage E.P.; Cain, Matthew S.
2016-01-01
This paper introduces the “hybrid foraging” paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8–64 target objects in memory. They viewed displays of 60–105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate searching again for the same item is more efficient than searching for any of the other targets held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25–33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search. PMID:26731644
Visual to Parametric Interaction (V2PI)
Maiti, Dipayan; Endert, Alex; North, Chris
2013-01-01
Typical data visualizations result from linear pipelines that start by characterizing data using a model or algorithm to reduce the dimension and summarize structure, and end by displaying the data in a reduced dimensional form. Sensemaking may take place at the end of the pipeline when users have an opportunity to observe, digest, and internalize any information displayed. However, some visualizations mask meaningful data structures when model or algorithm constraints (e.g., parameter specifications) contradict information in the data. Yet, due to the linearity of the pipeline, users do not have a natural means to adjust the displays. In this paper, we present a framework for creating dynamic data displays that rely on both mechanistic data summaries and expert judgement. The key is that we develop both the theory and methods of a new human-data interaction to which we refer as “Visual to Parametric Interaction” (V2PI). With V2PI, the pipeline becomes bi-directional in that users are embedded in the pipeline; users learn from visualizations and the visualizations adjust to expert judgement. We demonstrate the utility of V2PI and a bi-directional pipeline with two examples. PMID:23555552
Analysis and Selection of a Remote Docking Simulation Visual Display System
NASA Technical Reports Server (NTRS)
Shields, N., Jr.; Fagg, M. F.
1984-01-01
The development of a remote docking simulation visual display system is examined. Video system and operator performance are discussed as well as operator command and control requirements and a design analysis of the reconfigurable work station.
14 CFR 15.101 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES... (b) Aeronautical data that— (1) Is visually displayed in the cockpit of an aircraft; and (2) When visually displayed, accurately depicts a defective or deficient flight procedure or airway promulgated by...
An integrated port camera and display system for laparoscopy.
Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E
2010-05-01
In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
Katz, Trixie A; Weinberg, Danielle D; Fishman, Claire E; Nadkarni, Vinay; Tremoulet, Patrice; Te Pas, Arjan B; Sarcevic, Aleksandra; Foglia, Elizabeth E
2018-06-14
A respiratory function monitor (RFM) may improve positive pressure ventilation (PPV) technique, but many providers do not use RFM data appropriately during delivery room resuscitation. We sought to use eye-tracking technology to identify RFM parameters that neonatal providers view most commonly during simulated PPV. Design: Mixed methods study. Neonatal providers performed RFM-guided PPV on a neonatal manikin while wearing eye-tracking glasses to quantify visual attention on displayed RFM parameters (ie, exhaled tidal volume, flow, leak). Participants subsequently provided qualitative feedback on the eye-tracking glasses. Setting: Level 3 academic neonatal intensive care unit. Participants: Twenty neonatal resuscitation providers. Outcome measures: Visual attention: overall gaze sample percentage; total gaze duration, visit count and average visit duration for each displayed RFM parameter. Qualitative feedback: willingness to wear eye-tracking glasses during clinical resuscitation. Results: Twenty providers participated in this study. The mean gaze sample captured was 93% (SD 4%). Exhaled tidal volume waveform was the RFM parameter with the highest total gaze duration (median 23%, IQR 13-51%), highest visit count (median 5.17 per 10 s, IQR 2.82-6.16) and longest visit duration (median 0.48 s, IQR 0.38-0.81 s). All participants were willing to wear the glasses during clinical resuscitation. Conclusions: Wearable eye-tracking technology is feasible for identifying gaze fixation on the RFM display and is well accepted by providers. Neonatal providers look at exhaled tidal volume more than any other RFM parameter. Future applications of eye-tracking technology include use during clinical resuscitation.
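The visual-attention measures reported above (total gaze duration, visit count, average visit duration per displayed parameter) can be computed directly from a labeled gaze stream, as in this sketch. The 50 Hz sample period and the area-of-interest labels are illustrative assumptions, not details of the study's eye-tracking setup.

```python
def aoi_metrics(samples, sample_dt=0.02):
    """samples: sequence of area-of-interest labels (or None), one per gaze sample."""
    totals, visits = {}, {}
    previous = None
    for label in samples:
        if label is not None:
            totals[label] = totals.get(label, 0.0) + sample_dt
            if label != previous:           # a new visit begins on each AOI entry
                visits[label] = visits.get(label, 0) + 1
        previous = label
    avg_visit = {k: totals[k] / visits[k] for k in totals}
    return totals, visits, avg_visit

# Toy stream: a long look at tidal volume, a glance at flow, a return, then off-screen.
stream = ["tidal_volume"] * 10 + ["flow"] * 5 + ["tidal_volume"] * 5 + [None] * 5
totals, visits, avg_visit = aoi_metrics(stream)
```

Summing durations and counting label transitions like this is the standard way gaze samples are rolled up into per-AOI fixation statistics.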
NASA Technical Reports Server (NTRS)
Johnson, Walter W.; Kaiser, Mary K.
2003-01-01
Perspective synthetic displays that supplement, or supplant, the optical windows traditionally used for guidance and control of aircraft are accompanied by potentially significant human factors problems related to the optical geometric conformality of the display. Such geometric conformality is broken when optical features are not in the location they would be if directly viewed through a window. This often occurs when the scene is relayed or generated from a location different from the pilot's eyepoint. However, assuming no large visual/vestibular effects, a pilot can often learn to use such a display very effectively. Important problems may arise, however, when display accuracy or consistency is compromised, and this can usually be related to geometrical discrepancies between how the synthetic visual scene behaves and how the visual scene through a window behaves. In addition to these issues, this paper examines the potentially critical problem of the disorientation that can arise when both a synthetic display and a real window are present in a flight deck, and no consistent visual interpretation is available.
Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert
2012-01-01
This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.
Ota, Nao; Gahr, Manfred; Soma, Masayo
2015-11-19
According to classical sexual selection theory, complex multimodal courtship displays have evolved in males through female choice. While it is well known that socially monogamous songbird males sing to attract females, we report here the first example of a multimodal dance display that is not a uniquely male trait in these birds. In the blue-capped cordon-bleu (Uraeginthus cyanocephalus), a socially monogamous songbird, both sexes perform courtship displays that are characterised by singing and simultaneous visual displays. By recording these displays with a high-speed video camera, we discovered that in addition to bobbing, their visual courtship display includes quite rapid step-dancing, which presumably produces vibrations and/or non-vocal sounds. Dance performances did not differ between sexes but varied among individuals. Both male and female cordon-bleus intensified their dance performances when their mate was on the same perch. The multimodal (acoustic, visual, tactile) and multicomponent (vocal and non-vocal sounds) courtship display observed was a combination of several motor behaviours (singing, bobbing, stepping). The fact that both sexes of this socially monogamous songbird perform such a complex courtship display is a novel finding and suggests that the evolution of multimodal courtship displays as a means of intersexual communication should be considered.
Choosing colors for map display icons using models of visual search.
Shive, Joshua; Francis, Gregory
2013-04-01
We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega
2015-04-14
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization than other 3D display technologies. We propose an interaction setup combining the visualization of objects within the field of view (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Further, our results also revealed some limitations of the proposed setup and adjustments to be addressed in future work.
Visual search performance among persons with schizophrenia as a function of target eccentricity.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2010-03-01
The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric; their performance was more similar to that of healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia.
Validating Visual Cues In Flight Simulator Visual Displays
NASA Astrophysics Data System (ADS)
Aronson, Moses
1987-09-01
Currently, evaluation of visual simulators is performed either through pilot opinion questionnaires or through comparison of aircraft terminal performance. The approach here is to compare pilot performance in the flight simulator with a visual display to performance of the same visual task in the aircraft, as an indication that the visual cues are identical. The A-7 Night Carrier Landing task was selected. Performance measures with high predictive power for pilot performance were used to compare two samples of existing pilot performance data to show that the visual cues evoked the same performance. The performance of four pilots making 491 night landing approaches in an A-7 prototype part-task trainer was compared with the performance of three pilots performing 27 A-7E carrier landing qualification approaches on the CV-60 aircraft carrier. The results show that the pilots' performances were similar, supporting the conclusion that the visual cues provided in the simulator were identical to those provided in the real-world situation. Differences between the flight simulator's flight characteristics and the aircraft's have less of an effect than the pilots' individual performances. The measurement parameters used in the comparison can be used for validating the adequacy of the visual display for training.
Object-based warping: an illusory distortion of space within objects.
Vickery, Timothy J; Chun, Marvin M
2010-12-01
Visual objects are high-level primitives that are fundamental to numerous perceptual functions, such as guidance of attention. We report that objects warp visual perception of space in such a way that spatial distances within objects appear to be larger than spatial distances in ground regions. When two dots were placed inside a rectangular object, they appeared farther apart from one another than two dots with identical spacing outside of the object. To investigate whether this effect was object based, we measured the distortion while manipulating the structure surrounding the dots. Object displays were constructed with a single object, multiple objects, a partially occluded object, and an illusory object. Nonobject displays were constructed to be comparable to object displays in low-level visual attributes. In all cases, the object displays resulted in a more powerful distortion of spatial perception than comparable non-object-based displays. These results suggest that perception of space within objects is warped.
Multifocal planes head-mounted displays.
Rolland, J P; Krueger, M W; Goon, A
2000-07-01
Stereoscopic head-mounted displays (HMD's) provide an effective capability to create dynamic virtual environments. For a user of such environments, virtual objects would ideally be displayed at the appropriate distances, with natural, concordant accommodation and convergence. Under such image display conditions, the user perceives these objects as if they were objects in a real environment. Current HMD technology supports convergent eye movements but is limited by fixed visual accommodation, which is inconsistent with real-world vision. A prototype multiplanar volumetric projection display based on a stack of laminated planes was built for medical visualization as discussed in a paper presented at a 1999 Advanced Research Projects Agency workshop (Sullivan, Advanced Research Projects Agency, Arlington, Va., 1999). We show how such technology can be engineered to create a set of virtual planes appropriately configured in visual space to suppress conflicts of convergence and accommodation in HMD's. Although some scanning mechanism could be employed to create a set of desirable planes from a two-dimensional conventional display, multiplanar technology accomplishes this function with no moving parts. Based on optical principles and human vision, we present a comprehensive investigation of the engineering specification of multiplanar technology for integration in HMD's. Using selected human visual acuity and stereoacuity criteria, we show that the display requires at most 27 equally spaced planes, which is within the capability of current research and development display devices, located within a maximal 26-mm-wide stack. We further show that the necessary in-plane resolution is on the order of 5 µm.
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
A Cu2+-selective fluorescent chemosensor based on BODIPY with two pyridine ligands and logic gate
NASA Astrophysics Data System (ADS)
Huang, Liuqian; Zhang, Jing; Yu, Xiaoxiu; Ma, Yifan; Huang, Tianjiao; Shen, Xi; Qiu, Huayu; He, Xingxing; Yin, Shouchun
2015-06-01
A novel near-infrared fluorescent chemosensor based on BODIPY (Py-1) has been synthesized and characterized. Py-1 displays high selectivity and sensitivity for sensing Cu2+ over other metal ions in acetonitrile. Upon addition of Cu2+ ions, the maximum absorption band of Py-1 in CH3CN displays a red shift from 603 to 608 nm, which results in a visual color change from pink to blue. When Py-1 is excited at 600 nm in the presence of Cu2+, the fluorescent emission intensity of Py-1 at 617 nm is quenched by over 86%. Notably, the Py-1-Cu2+ complex can be restored with the introduction of EDTA or S2-. Consequently, an IMPLICATION logic gate at the molecular level, operating in fluorescence mode with Cu2+ and S2- as chemical inputs, can be constructed. Finally, based on this reversible and reproducible system, a nanoscale sequential memory unit displaying "Writing-Reading-Erasing-Reading" functions can be integrated.
Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance
NASA Astrophysics Data System (ADS)
Speck, Richard P.; Herz, Norman E., Jr.
2000-06-01
Automatic test and calibration has become a valuable feature in many consumer products, ranging from antilock braking systems to auto-tune TVs. This paper discusses HMDs (helmet-mounted displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (automatic test equipment) is already zeroing distortion in HMDs and thereby making binocular displays a practical reality. A suitcase-sized, field-portable optical ATE unit could re-zero these errors in the ready room to cancel the effects of aging, minor damage, and component replacement. Planning on this would yield large savings through relaxed component specifications and reduced logistic costs, while the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual 'beyond visual range' operations. Some versions of the ATE described are in production, and examples of high-resolution optical test data will be discussed.
Wiyor, Hanniebey D.; Ntuen, Celestine A.
2013-01-01
The purpose of this study was to investigate the effect of stereoscopic display alignment errors on visual fatigue and prefrontal cortical tissue hemodynamic responses. We collected hemodynamic data and perceptual ratings of visual fatigue while participants performed visual display tasks on an 8 ft × 6 ft NEC LT silver screen with NEC LT 245 DLP projectors. There was a statistically significant difference between subjective measures of visual fatigue before the air traffic control task (BATC) and after the air traffic control task (ATC 3) (P < 0.05). Statistically significant effects of stereoscopic alignment errors were observed on left dorsolateral prefrontal cortex oxygenated hemoglobin (l DLPFC-HbO2), left dorsolateral prefrontal cortex deoxygenated hemoglobin (l DLPFC-Hbb), and right dorsolateral prefrontal cortex deoxygenated hemoglobin (r DLPFC-Hbb) (P < 0.05). Thus, the cortical tissue oxygenation requirement in the left hemisphere indicates that the effect of visual fatigue is more pronounced in the left dorsolateral prefrontal cortex. PMID:27006917
High-chroma visual cryptography using interference color of high-order retarder films
NASA Astrophysics Data System (ADS)
Sugawara, Shiori; Harada, Kenji; Sakai, Daisuke
2015-08-01
Visual cryptography can be used as a method of sharing a secret image through several encrypted images. Conventional visual cryptography can display only monochrome images. We have developed a high-chroma color visual encryption technique using the interference color of high-order retarder films. The encrypted films are composed of a polarizing film and retarder films. The retarder films exhibit interference color when they are sandwiched between two polarizing films. We propose a stacking technique for displaying high-chroma interference color images. A prototype visual cryptography device using high-chroma interference color is developed.
Organic light emitting board for dynamic interactive display
Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin
2017-01-01
Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications. PMID:28406151
Evaluation of a visual layering methodology for colour coding control room displays.
Van Laar, Darren; Deshe, Ofer
2002-07-01
Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays produced using three different coding methods. The monochrome method displayed the information in black and white only; the maximally discriminable method used colours chosen for their high perceptual discriminability; the visual layers method used colours, developed from psychological and cartographic principles, that grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly by presentation order and for the method × order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.
Reconfigurable Auditory-Visual Display
NASA Technical Reports Server (NTRS)
Begault, Durand R. (Inventor); Anderson, Mark R. (Inventor); McClain, Bryan (Inventor); Miller, Joel D. (Inventor)
2008-01-01
System and method for visual and audible communication between a central operator and N mobile communicators (N greater than or equal to 2), including an operator transceiver and interface, configured to receive and display, for the operator, visually perceptible and audibly perceptible signals from each of the mobile communicators. The interface (1) presents an audible signal from each communicator as if the audible signal is received from a different location relative to the operator and (2) allows the operator to select, to assign priority to, and to display, the visual signals and the audible signals received from a specified communicator. Each communicator has an associated signal transmitter that is configured to transmit at least one of the visual signals and the audio signal associated with the communicator, where at least one of the signal transmitters includes at least one sensor that senses and transmits a sensor value representing a selected environmental or physiological parameter associated with the communicator.
Dowding, Dawn; Merrill, Jacqueline A; Onorato, Nicole; Barrón, Yolanda; Rosati, Robert J; Russell, David
2018-02-01
To explore home care nurses' numeracy and graph literacy and their relationship to comprehension of visualized data. A multifactorial experimental design using online survey software. Nurses were recruited from 2 Medicare-certified home health agencies. Numeracy and graph literacy were measured using validated scales. Nurses were randomized to 1 of 4 experimental conditions. Each condition displayed data for 1 of 4 quality indicators, in 1 of 4 different visualized formats (bar graph, line graph, spider graph, table). A mixed linear model measured the impact of numeracy, graph literacy, and display format on data understanding. In all, 195 nurses took part in the study. They were slightly more numerate and graph literate than the general population. Overall, nurses understood information presented in bar graphs most easily (88% correct), followed by tables (81% correct), line graphs (77% correct), and spider graphs (41% correct). Individuals with low numeracy and low graph literacy had poorer comprehension of information displayed across all formats. High graph literacy appeared to enhance comprehension of data regardless of numeracy capabilities. Clinical dashboards are increasingly used to provide information to clinicians in visualized format, under the assumption that visual display reduces cognitive workload. Results of this study suggest that nurses' comprehension of visualized information is influenced by their numeracy, graph literacy, and the display format of the data. Individual differences in numeracy and graph literacy skills need to be taken into account when designing dashboard technology.
ERIC Educational Resources Information Center
Hanley, Mary; Khairat, Mariam; Taylor, Korey; Wilson, Rachel; Cole-Fletcher, Rachel; Riby, Deborah M.
2017-01-01
Paying attention is a critical first step toward learning. For children in primary school classrooms there can be many things to attend to other than the focus of a lesson, such as visual displays on classroom walls. The aim of this study was to use eye-tracking techniques to explore the impact of visual displays on attention and learning for…
Geyer, Thomas; Baumgartner, Florian; Müller, Hermann J.; Pollmann, Stefan
2012-01-01
Using visual search, functional magnetic resonance imaging (fMRI) and patient studies have demonstrated that medial temporal lobe (MTL) structures differentiate repeated from novel displays—even when observers are unaware of display repetitions. This suggests a role for MTL in both explicit and, importantly, implicit learning of repeated sensory information (Greene et al., 2007). However, recent behavioral studies suggest, by examining visual search and recognition performance concurrently, that observers have explicit knowledge of at least some of the repeated displays (Geyer et al., 2010). The aim of the present fMRI study was thus to contribute new evidence regarding the contribution of MTL structures to explicit vs. implicit learning in visual search. It was found that MTL activation was increased for explicit and, respectively, decreased for implicit relative to baseline displays. These activation differences were most pronounced in left anterior parahippocampal cortex (aPHC), especially when observers were highly trained on the repeated displays. The data are taken to suggest that explicit and implicit memory processes are linked within MTL structures, but expressed via functionally separable mechanisms (repetition-enhancement vs. -suppression). They further show that repetition effects in visual search would have to be investigated at the display level. PMID:23060776
Evolutionary adaptations: theoretical and practical implications for visual ergonomics.
Fostervold, Knut Inge; Watten, Reidulf G; Volden, Frode
2014-01-01
The literature on visual ergonomics often mentions that human vision is adapted to light emitted by the sun. However, the theoretical and practical implications of this viewpoint are seldom discussed or taken into account. This paper discusses some of the main theoretical implications of an evolutionary approach to visual ergonomics. Based on interactional theory and ideas from ecological psychology, an evolutionary stress model is proposed as a theoretical framework for future research in ergonomics and human factors. The model stresses the importance of developing work environments that fit with our evolutionary adaptations. In accordance with evolutionary psychology, the environment of evolutionary adaptedness (EEA) and evolutionarily novel environments (EN) are used as key concepts. Using work with visual display units (VDUs) as an example, the paper discusses how this knowledge can be utilized in an ergonomic analysis of risk factors in the work environment. The paper emphasises the importance of incorporating evolutionary theory into the field of ergonomics, and encourages scientific practices that further our understanding of phenomena beyond the borders of traditional proximal explanations.
Faiola, Anthony; Srinivas, Preethi; Duke, Jon
2015-01-01
Advances in intensive care unit bedside displays/interfaces and electronic medical record (EMR) technology have not adequately addressed the visual clarity of patient data/information needed to further reduce cognitive load during clinical decision-making. We responded to these challenges with a human-centered approach to designing and testing a decision-support tool: MIVA 2.0 (Medical Information Visualization Assistant, v.2). Envisioned as an EMR visualization dashboard to support rapid analysis of real-time clinical data trends, our primary goal originated from a clinical requirement to reduce cognitive overload. In the study, a convenience sample of 12 participants was recruited, and quantitative and qualitative measures were used to compare MIVA 2.0 with ICU paper medical charts, using time-on-task, post-test questionnaires, and interviews. Findings demonstrated a significant difference in speed and accuracy with the use of MIVA 2.0. Qualitative outcomes concurred, with participants acknowledging the potential impact of MIVA 2.0 for reducing cognitive load and enabling quicker and more accurate decision-making.
Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions
Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang
2012-01-01
Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked the question whether visual information, which is not consciously perceived, could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or had to abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues, which they were not conscious about. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749
BactoGeNIE: A large-scale comparative genome visualization for big displays
Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...
2015-08-13
The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.
BactoGeNIE: A large-scale comparative genome visualization for big displays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aurisano, Jillian; Reda, Khairi; Johnson, Andrew
The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.
Visual Temporal Filtering and Intermittent Visual Displays.
1986-08-08
support: Ehud Kaplan, Associate Professor, 20% time and effort; Michelangelo Rossetto, Research Associate, 20% time and support; Margo Greene, Research...reached and are described as follows. The variable raster rate display was designed and built by Michelangelo Rossetto and Norman Milkman, Research
Instrument Display Visual Angles for Conventional Aircraft and the MQ-9 Ground Control Station
NASA Technical Reports Server (NTRS)
Bendrick, Gregg A.; Kamine, Tovy Haber
2008-01-01
Aircraft instrument panels should be designed such that primary displays are in the optimal viewing location to minimize pilot perception and response time. Human Factors engineers define three zones (i.e., "cones") of visual location: 1) "Easy Eye Movement" (foveal vision); 2) "Maximum Eye Movement" (peripheral vision with saccades); and 3) "Head Movement" (head movement required). Instrument display visual angles were measured to determine how well conventional aircraft (T-34, T-38, F-15B, F-16XL, F/A-18A, U-2D, ER-2, King Air, G-III, B-52H, DC-10, B747-SCA) and the MQ-9 ground control station (GCS) complied with these standards, and how they compared with each other. Methods: Selected instrument parameters included: attitude, pitch, bank, power, airspeed, altitude, vertical speed, heading, turn rate, slip/skid, AOA, flight path, latitude, longitude, course, bearing, range and time. Vertical and horizontal visual angles for each component were measured from the pilot's eye position in each system. Results: The vertical visual angles of displays in conventional aircraft lay within the cone of "Easy Eye Movement" for all but three of the parameters measured, and almost all of the horizontal visual angles fell within this range. All conventional vertical and horizontal visual angles lay within the cone of "Maximum Eye Movement". However, most instrument vertical visual angles of the MQ-9 GCS lay outside the cone of "Easy Eye Movement", though all were within the cone of "Maximum Eye Movement". All the horizontal visual angles for the MQ-9 GCS were within the cone of "Easy Eye Movement". Discussion: Most instrument displays in conventional aircraft lay within the cone of "Easy Eye Movement", though mission-critical instruments sometimes displaced less important instruments outside this area. Many of the MQ-9 GCS systems lay outside this area. Specific training for MQ-9 pilots may be needed to avoid increased response time and potential error during flight.
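The cone classification described in this record can be sketched numerically. The ±15° and ±35° limits below are illustrative assumptions (the abstract gives no numeric bounds), as are the function names:

```python
import math

# Illustrative cone limits in degrees; the abstract does not state exact values.
EASY_EYE_DEG = 15.0   # "Easy Eye Movement" (foveal vision)
MAX_EYE_DEG = 35.0    # "Maximum Eye Movement" (peripheral vision with saccades)

def visual_angle_deg(offset_mm: float, eye_distance_mm: float) -> float:
    """Angle subtended between the line of sight and an instrument displaced
    `offset_mm` from the design eye point's line of sight."""
    return math.degrees(math.atan2(offset_mm, eye_distance_mm))

def classify_zone(angle_deg: float) -> str:
    a = abs(angle_deg)
    if a <= EASY_EYE_DEG:
        return "Easy Eye Movement"
    if a <= MAX_EYE_DEG:
        return "Maximum Eye Movement"
    return "Head Movement"

# Hypothetical example: an altimeter 180 mm below the sight line at 710 mm eye distance.
angle = visual_angle_deg(180, 710)
print(round(angle, 1), classify_zone(angle))
```

A panel survey like the one in the record would run this over every instrument's vertical and horizontal offsets.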
Performance, physiological, and oculometer evaluation of VTOL landing displays
NASA Technical Reports Server (NTRS)
North, R. A.; Stackhouse, S. P.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.
[Spatial domain display for interference image dataset].
Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia
2011-11-01
The need to visualize imaging-interferometer data is pressing for users engaged in image interpretation and information extraction. However, conventional research on visualization focuses only on spectral image datasets in the spectral domain; a quick view of the interference spectral image dataset is thus one of the bottlenecks in interference image processing. The conventional approach to visualizing an interference dataset applies a classical spectral-image display method after a Fourier transformation. In the present paper, the problem of a quick view of interferometer imagery in the image domain is addressed, and an algorithm that simplifies the matter is proposed. The Fourier transformation is an obstacle because its computation time is large, and the situation deteriorates further as the dataset grows. The proposed algorithm, named interference weighted envelopes, frees the dataset from the transformation. The authors choose three interference weighted envelopes based, respectively, on the Fourier transformation, the features of the interference data, and the human visual system. Comparing the proposed method with conventional ones shows a large difference in display time.
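As a rough illustration of the idea (not the authors' actual algorithm), a quicklook image can be formed by collapsing each pixel's interferogram with a weight vector instead of Fourier-transforming it; the array sizes, weights, and function names here are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy interference dataset: rows x cols x OPD samples (one interferogram per pixel).
cube = rng.standard_normal((64, 64, 128)) + 5.0

def quicklook_fft(cube):
    # Conventional route: Fourier-transform every interferogram, then
    # display one reconstructed spectral band (costly for large cubes).
    spectra = np.abs(np.fft.rfft(cube, axis=2))
    return spectra[:, :, 10]          # arbitrary band chosen for display

def quicklook_envelope(cube, weights=None):
    # Envelope route (the idea behind "interference weighted envelopes"):
    # collapse the OPD axis with a weight vector, no transform required.
    if weights is None:
        weights = np.ones(cube.shape[2]) / cube.shape[2]
    return np.tensordot(np.abs(cube), weights, axes=([2], [0]))

img = quicklook_envelope(cube)
print(img.shape)   # (64, 64)
```

The envelope route costs one weighted sum per pixel versus a full FFT per pixel, which is the source of the display-time difference the abstract reports.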
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.
2000-01-01
Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
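The model as summarized here, noisy representations per feature dimension, summed into a single decision variable, with the maximum-valued element determining the response, lends itself to a short Monte Carlo sketch. The d' values and display layouts below are illustrative assumptions, not the paper's stimuli:

```python
import numpy as np

rng = np.random.default_rng(1)

def search_accuracy(target_means, distractor_means, n_trials=20000):
    """Monte Carlo accuracy of a maximum-of-sums observer.

    target_means: mean internal response of the target on each feature
    dimension (in d' units). distractor_means: one row per distractor,
    same dimensions. Unit-variance Gaussian noise on every representation.
    """
    target_means = np.asarray(target_means, float)
    distractor_means = np.asarray(distractor_means, float)
    # Noisy representations, summed across feature dimensions.
    t = (target_means + rng.standard_normal((n_trials, target_means.size))).sum(1)
    d = (distractor_means
         + rng.standard_normal((n_trials,) + distractor_means.shape)).sum(2)
    # The response is correct when the target carries the maximum value.
    return float(np.mean(t > d.max(1)))

# Conjunction display: each distractor shares one feature with the target,
# so it matches the target on one dimension (d' values illustrative).
conj = [[1.5, 0.0], [0.0, 1.5], [1.5, 0.0], [0.0, 1.5]]
feat = [[0.0, 0.0]] * 4   # feature display: distractors match on neither
print(search_accuracy([1.5, 1.5], conj) < search_accuracy([1.5, 1.5], feat))
```

With equal target strength, the overlap between target and distractor decision variables is larger in the conjunction display, reproducing the performance degradation without any serial binding mechanism.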
Measuring visual discomfort associated with 3D displays
NASA Astrophysics Data System (ADS)
Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.
2009-02-01
Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.
Hiding and finding: the relationship between visual concealment and visual search.
Smilek, Daniel; Weinheimer, Laura; Kwan, Donna; Reynolds, Mike; Kingstone, Alan
2009-11-01
As an initial step toward developing a theory of visual concealment, we assessed whether people would use factors known to influence visual search difficulty when the degree of concealment of objects among distractors was varied. In Experiment 1, participants arranged search objects (shapes, emotional faces, and graphemes) to create displays in which the targets were in plain sight but were either easy or hard to find. Analyses of easy and hard displays created during Experiment 1 revealed that the participants reliably used factors known to influence search difficulty (e.g., eccentricity, target-distractor similarity, presence/absence of a feature) to vary the difficulty of search across displays. In Experiment 2, a new participant group searched for the targets in the displays created by the participants in Experiment 1. Results indicated that search was more difficult in the hard than in the easy condition. In Experiments 3 and 4, participants used presence versus absence of a feature to vary search difficulty with several novel stimulus sets. Taken together, the results reveal a close link between the factors that govern concealment and the factors known to influence search difficulty, suggesting that a visual search theory can be extended to form the basis of a theory of visual concealment.
Exploratory visualization of astronomical data on ultra-high-resolution wall displays
NASA Astrophysics Data System (ADS)
Pietriga, Emmanuel; del Campo, Fernando; Ibsen, Amanda; Primet, Romain; Appert, Caroline; Chapuis, Olivier; Hempel, Maren; Muñoz, Roberto; Eyheramendy, Susana; Jordan, Andres; Dole, Hervé
2016-07-01
Ultra-high-resolution wall displays feature a very high pixel density over a large physical surface, which makes them well-suited to the collaborative, exploratory visualization of large datasets. We introduce FITS-OW, an application designed for such wall displays, that enables astronomers to navigate in large collections of FITS images, query astronomical databases, and display detailed, complementary data and documents about multiple sources simultaneously. We describe how astronomers interact with their data using both the wall's touch-sensitive surface and handheld devices. We also report on the technical challenges we addressed in terms of distributed graphics rendering and data sharing over the computer clusters that drive wall displays.
NASA Astrophysics Data System (ADS)
Kay, Paul A.; Robb, Richard A.; King, Bernard F.; Myers, R. P.; Camp, Jon J.
1995-04-01
Thousands of radical prostatectomies for prostate cancer are performed each year. Radical prostatectomy is a challenging procedure due to anatomical variability and the adjacency of critical structures, including the external urinary sphincter and neurovascular bundles that subserve erectile function. Because of this, there are significant risks of urinary incontinence and impotence following this procedure. Preoperative interaction with three-dimensional visualization of the important anatomical structures might allow the surgeon to understand important individual anatomical relationships of patients. Such understanding might decrease the rate of morbidities, especially for surgeons in training. Patient-specific anatomic data can be obtained from preoperative 3D MRI diagnostic imaging examinations of the prostate gland utilizing endorectal coils and phased array multicoils. The volumes of the important structures can then be segmented using interactive image editing tools and then displayed using 3-D surface rendering algorithms on standard workstations. Anatomic relationships can be visualized using surface displays and 3-D colorwash and transparency to allow internal visualization of hidden structures. Preoperatively a surgeon and radiologist can interactively manipulate the 3-D visualizations. Important anatomical relationships can better be visualized and used to plan the surgery. Postoperatively the 3-D displays can be compared to actual surgical experience and pathologic data. Patients can then be followed to assess the incidence of morbidities. More advanced approaches to visualize these anatomical structures in support of surgical planning will be implemented on virtual reality (VR) display systems. Such realistic displays are 'immersive,' and allow surgeons to simultaneously see and manipulate the anatomy, to plan the procedure and to rehearse it in a realistic way. 
Ultimately the VR systems will be implemented in the operating room (OR) to assist the surgeon in conducting the surgery. Such an implementation will bring to the OR all of the pre-surgical planning data and rehearsal experience in synchrony with the actual patient and operation to optimize the effectiveness and outcome of the procedure.
Design of an off-axis visual display based on a free-form projection screen to realize stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong
2017-10-01
A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A method based on a point cloud is proposed for the design of the free-form surface, and the generation of the point cloud is controlled by a program written in the macro-language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error of visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1′, which meets the resolution requirement of the human eye.
Emotional display rules as work unit norms: a multilevel analysis of emotional labor among nurses.
Diefendorff, James M; Erickson, Rebecca J; Grandey, Alicia A; Dahling, Jason J
2011-04-01
Emotional labor theory has conceptualized emotional display rules as shared norms governing the expression of emotions at work. Using a sample of registered nurses working in different units of a hospital system, we provided the first empirical evidence that display rules can be represented as shared, unit-level beliefs. Additionally, controlling for the influence of dispositional affectivity, individual-level display rule perceptions, and emotion regulation, we found that unit-level display rules are associated with individual-level job satisfaction. We also showed that unit-level display rules relate to burnout indirectly through individual-level display rule perceptions and emotion regulation strategies. Finally, unit-level display rules also interacted with individual-level dispositional affectivity to predict employee use of emotion regulation strategies. We discuss how future research on emotional labor and display rules, particularly in the health care setting, can build on these findings.
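The claim that display rules can be represented as shared, unit-level beliefs is the kind of claim typically backed by aggregation statistics such as ICC(1); below is a minimal sketch assuming equal unit sizes and hypothetical nurse ratings (the paper's actual indices and data are not reproduced here):

```python
import numpy as np

def icc1(scores_by_unit):
    """One-way ANOVA ICC(1): the share of variance in individual display-rule
    perceptions attributable to unit membership (equal unit sizes assumed)."""
    groups = [np.asarray(g, float) for g in scores_by_unit]
    k = len(groups[0])                      # members rated per unit
    grand = np.mean(np.concatenate(groups))
    # Between-unit and within-unit mean squares from the one-way ANOVA.
    msb = k * sum((g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(groups) * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical display-rule ratings from nurses in three hospital units.
units = [[4.2, 4.0, 4.4], [2.1, 2.3, 1.9], [3.6, 3.4, 3.8]]
print(round(icc1(units), 2))   # -> 0.97
```

A value this high would indicate that ratings cluster strongly by unit, supporting treatment of display rules as a unit-level construct.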
Evaluation of force-torque displays for use with space station telerobotic activities
NASA Technical Reports Server (NTRS)
Hendrich, Robert C.; Bierschwale, John M.; Manahan, Meera K.; Stuart, Mark A.; Legendre, A. Jay
1992-01-01
Recent experiments which addressed Space Station remote manipulation tasks found that tactile force feedback (reflecting forces and torques encountered at the end-effector through the manipulator hand controller) does not improve performance significantly. Subjective response from astronaut and non-astronaut test subjects indicated that force information, provided visually, could be useful. No research exists which specifically investigates methods of presenting force-torque information visually. This experiment was designed to evaluate seven different visual force-torque displays which were found in an informal telephone survey. The displays were prototyped in the HyperCard programming environment. In a within-subjects experiment, 14 subjects nullified forces and torques presented statically, using response buttons located at the bottom of the screen. Dependent measures included questionnaire data, errors, and response time. Subjective data generally demonstrate that subjects rated variations of pseudo-perspective displays consistently better than bar graph and digital displays. Subjects commented that the bar graph and digital displays could be used, but were not compatible with using hand controllers. Quantitative data show similar trends to the subjective data, except that the bar graph and digital displays both provided good performance, perhaps due to the mapping of response buttons to display elements. Results indicate that for this set of displays, the pseudo-perspective displays generally represent a more intuitive format for presenting force-torque information.
Three-dimensional (3D) GIS-based coastline change analysis and display using LIDAR series data
NASA Astrophysics Data System (ADS)
Zhou, G.
This paper presents a method to visualize and analyze topography and topographic changes on a coastline area. The study area, Assateague Island National Seashore (AINS), is located along a 37-mile stretch of Assateague Island National Seashore in Eastern Shore, VA. DEM data sets from 1996 through 2000 for various time intervals, e.g., year-to-year, season-to-season, date-to-date, and a four-year span (1996-2000), were created. The spatial patterns and volumetric amounts of erosion and deposition of each part were calculated on a cell-by-cell basis. A 3D dynamic display system using ArcView Avenue for visualizing dynamic coastal landforms has been developed. The system comprises five functional modules: Dynamic Display, Analysis, Chart Analysis, Output, and Help. The Display module includes five types of displays: Shoreline Display, Shore Topographic Profile, Shore Erosion Display, Surface TIN Display, and 3D Scene Display. Visualized data include rectified and co-registered multispectral Landsat digital imagery and NOAA/NASA ATM LIDAR data. The system is demonstrated using multitemporal digital satellite and LIDAR data to display changes on the Assateague Island National Seashore, Virginia. The results demonstrate that further study and comparison of the complex morphological changes, whether natural or human-induced, that occur on barrier islands are required.
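The cell-by-cell erosion and deposition computation mentioned in this record reduces to differencing two co-registered DEMs and summing signed volumes; here is a minimal sketch with toy surfaces (grid size and values are invented):

```python
import numpy as np

def erosion_deposition(dem_t1, dem_t2, cell_area_m2=1.0):
    """Cell-by-cell elevation change between two co-registered DEMs,
    split into erosion (elevation loss) and deposition (gain) volumes."""
    diff = np.asarray(dem_t2, float) - np.asarray(dem_t1, float)
    valid = ~np.isnan(diff)                      # ignore no-data cells
    deposition = np.nansum(np.where(valid & (diff > 0), diff, 0.0)) * cell_area_m2
    erosion = -np.nansum(np.where(valid & (diff < 0), diff, 0.0)) * cell_area_m2
    return diff, erosion, deposition

# Toy 1996 vs 2000 surfaces on a 2 m grid (elevations in metres).
dem_1996 = np.array([[2.0, 2.0], [3.0, 3.0]])
dem_2000 = np.array([[1.5, 2.0], [3.0, 3.6]])
diff, ero, dep = erosion_deposition(dem_1996, dem_2000, cell_area_m2=4.0)
print(round(ero, 2), round(dep, 2))   # eroded and deposited volumes, m^3
```

The `diff` grid is what a Shore Erosion Display would color-map; the two scalars give the volumetric totals per time interval.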
Illusion in reality: visual perception in displays
NASA Astrophysics Data System (ADS)
Kaufman, Lloyd; Kaufman, James H.
2001-06-01
Research into visual perception ultimately affects display design. Advances in display technology affect, in turn, our study of perception. Although this statement is too general to be controversial, this paper presents a real-life example that may prompt display engineers to make greater use of basic knowledge of visual perception, and encourage those who study perception to track more closely leading edge display technology. Our real-life example deals with an ancient problem, the moon illusion: why does the horizon moon appear so large while the elevated moon looks so small? This was a puzzle for many centuries. Physical explanations, such as refraction by the atmosphere, are incorrect. The difference in apparent size may be classified as a misperception, so the answer must lie in the general principles of visual perception. The factors underlying the moon illusion must be the same factors as those that enable us to perceive the sizes of ordinary objects in visual space. Progress toward solving the problem has been irregular, since methods for actually measuring the illusion under a wide range of conditions were lacking. An advance in display technology made possible a serious and methodologically controlled study of the illusion. This technology was the first heads-up display. In this paper we will describe how the heads-up display concept made it possible to test several competing theories of the moon illusion, and how it led to an explanation that stood for nearly 40 years. We also consider the criticisms of that explanation and how the optics of the heads-up display also played a role in providing data for the critics. Finally, we will describe our own advance on the original methodology. This advance was motivated by previously unrelated principles of space perception. We used a stereoscopic heads-up display to test alternative hypotheses about the illusion and to discriminate between two classes of mutually contradictory theories. 
At its core, the explanation for the moon illusion has implications for the design of virtual reality displays. How do we scale disparity at great distances to reflect depth between points at those distances? We conjecture that one yardstick involved in that scaling is provided by oculomotor cues operating at near distances. Without the presence of such a yardstick it is not possible to account for depth at long distances. As we shall explain, size and depth constancy should both fail in virtual reality displays where all of the visual information is optically in one plane. We suggest ways to study this problem, and also means by which displays may be designed to present information at different optical distances.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE thus provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
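The wall geometry stated in this record can be checked in a few lines; the sketch assumes 3 rows by 4 columns (consistent with the wide 14' x 7' surface) and ignores the thin bezels:

```python
# Aggregate resolution of the SPoRT wall described above: a 3 x 4 array
# of 1920 x 1080 monitors (rows x columns assumed; bezels ignored).
rows, cols = 3, 4
panel_w, panel_h = 1920, 1080

total_w = cols * panel_w        # pixels across the wall
total_h = rows * panel_h        # pixels down the wall
megapixels = total_w * total_h / 1e6

print(total_w, total_h, round(megapixels, 1))   # 7680 3240 24.9
```

Roughly 25 megapixels is an order of magnitude beyond a single desktop monitor, which is why SAGE-style middleware is needed to drive the display from distributed sources.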
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.
2013-12-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE thus provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. 
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Embedded Data Representations.
Willett, Wesley; Jansen, Yvonne; Dragicevic, Pierre
2017-01-01
We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles are making it increasingly easier to display data in-context. While researchers and artists have already begun to create embedded data representations, the benefits, trade-offs, and even the language necessary to describe and compare these approaches remain unexplored. In this paper, we formalize the notion of physical data referents - the real-world entities and spaces to which data corresponds - and examine the relationship between referents and the visual and physical representations of their data. We differentiate situated representations, which display data in proximity to data referents, and embedded representations, which display data so that it spatially coincides with data referents. Drawing on examples from visualization, ubiquitous computing, and art, we explore the role of spatial indirection, scale, and interaction for embedded representations. We also examine the tradeoffs between non-situated, situated, and embedded data displays, including both visualizations and physicalizations. Based on our observations, we identify a variety of design challenges for embedded data representation, and suggest opportunities for future research and applications.
SimGraph: A Flight Simulation Data Visualization Workstation
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Kenney, Patrick S.
1997-01-01
Today's modern flight simulation research produces vast amounts of time sensitive data, making a qualitative analysis of the data difficult while it remains in a numerical representation. Therefore, a method of merging related data together and presenting it to the user in a more comprehensible format is necessary. Simulation Graphics (SimGraph) is an object-oriented data visualization software package that presents simulation data in animated graphical displays for easy interpretation. Data produced from a flight simulation is presented by SimGraph in several different formats, including: 3-Dimensional Views, Cockpit Control Views, Heads-Up Displays, Strip Charts, and Status Indicators. SimGraph can accommodate the addition of new graphical displays to allow the software to be customized to each user's particular environment. A new display can be developed and added to SimGraph without having to design a new application, allowing the graphics programmer to focus on the development of the graphical display. The SimGraph framework can be reused for a wide variety of visualization tasks. Although it was created for the flight simulation facilities at NASA Langley Research Center, SimGraph can be reconfigured to almost any data visualization environment. This paper describes the capabilities and operations of SimGraph.
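SimGraph's key design point, that a new graphical display can be added without building a new application, is the classic base-class-plus-registry pattern; the class and method names below are invented for illustration and are not SimGraph's actual API:

```python
from abc import ABC, abstractmethod

class Display(ABC):
    """Framework base class: a display consumes one frame of simulation
    data and renders it. Subclasses register themselves automatically,
    so the framework never needs to be modified to learn about them."""
    registry = {}

    def __init_subclass__(cls, **kw):
        super().__init_subclass__(**kw)
        Display.registry[cls.__name__] = cls

    @abstractmethod
    def render(self, frame: dict) -> str: ...

class StripChart(Display):
    def __init__(self, channel):
        self.channel, self.history = channel, []
    def render(self, frame):
        self.history.append(frame[self.channel])
        return f"{self.channel}: {self.history[-5:]}"   # last few samples

class StatusIndicator(Display):
    def __init__(self, channel, limit):
        self.channel, self.limit = channel, limit
    def render(self, frame):
        state = "OK" if frame[self.channel] <= self.limit else "ALERT"
        return f"{self.channel}: {state}"

# The framework drives every registered display with the same loop.
displays = [StripChart("altitude"), StatusIndicator("airspeed", limit=250)]
for frame in [{"altitude": 1200, "airspeed": 240},
              {"altitude": 1180, "airspeed": 260}]:
    for d in displays:
        print(d.render(frame))
```

Adding a new display type is then just defining another `Display` subclass, mirroring the abstract's claim that the graphics programmer can focus on the display alone.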
Final Report: Computer-aided Human Centric Cyber Situation Awareness
2016-03-20
logs, OS audit trails, vulnerability reports, and packet dumps), weeding out the false positives, grouping the related indicators so that different...short time duration of each visual stimulus in an fMRI study, we have designed "network security analysis cards" that require the subject to...determine whether alerts in the cards indicate malicious events. Two types of visual displays of alerts (i.e., tabular display and node-link display) are
Perceptual issues in scientific visualization
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Proffitt, Dennis R.
1989-01-01
In order to develop effective tools for scientific visualization, consideration must be given to the perceptual competencies, limitations, and biases of the human operator. Perceptual psychology has amassed a rich body of research on these issues and can lend insight to the development of visualization techniques. Within a perceptual psychological framework, the computer display screen can best be thought of as a special kind of impoverished visual environment. Guidelines can be gleaned from the psychological literature to help visualization tool designers avoid ambiguities and/or illusions in the resulting data displays.
NASA Astrophysics Data System (ADS)
Sudiartha, IKG; Catur Bawa, IGNB
2018-01-01
Information is inseparable from the social life of the community, especially in the world of education. Academic information includes the academic calendar, activity agendas, announcements, and campus news. In line with technological developments, purely text-based information is becoming obsolete, so creativity is needed to present information more quickly, accurately, and attractively by exploiting digital technology and the internet. This paper develops an application that presents such information as a visual display, implemented on a computer network system with multimedia support. The network-based application makes it easy to update data through internet services and offers attractive presentation with multimedia support. The application "Networking Visual Display Information Unit" can serve as a medium that provides information services to students and academic employees that is more engaging and easier to update than a bulletin board. The information is presented as Running Text, Latest Information, Agenda, Academic Calendar and Video, giving an attractive presentation in line with technological developments at the Politeknik Negeri Bali. This research aims to create the "Networking Visual Display Information Unit" software with optimal bandwidth usage by combining local data sources with data obtained over the network. The research produces a visual display design with optimal bandwidth usage and an application in the form of supporting software.
Visual search by chimpanzees (Pan): assessment of controlling relations.
Tomonaga, M
1995-01-01
Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449
Evaluation of tactual displays for flight control
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.; Triggs, T. J.
1973-01-01
Manual tracking experiments were conducted to determine the suitability of tactual displays for presenting flight-control information in multitask situations. Although tracking error scores are considerably greater than scores obtained with a continuous visual display, preliminary results indicate that inter-task interference effects are substantially less with the tactual display in situations that impose high visual scanning workloads. The single-task performance degradation found with the tactual display appears to be a result of the coding scheme rather than the use of the tactual sensory mode per se. Analysis with the state-variable pilot/vehicle model shows that reliable predictions of tracking errors can be obtained for wide-band tracking systems once the pilot-related model parameters have been adjusted to reflect the pilot-display interaction.
Time delays in flight simulator visual displays
NASA Technical Reports Server (NTRS)
Crane, D. F.
1980-01-01
The effects of delays of less than 100 msec in visual displays on pilot dynamic response and system performance are of particular interest at this time because improvements in the latest computer-generated imagery (CGI) systems are expected to reduce CGI display delays to this range. Attention is given to data that quantify the effects of display delays in the range of 0-100 msec on system stability and performance, and on pilot dynamic response, for a particular choice of aircraft dynamics, display, controller, and task. Conventional control system design methods, the pilot response data presented here, and data for long delays all suggest lead-filter compensation of display delay. Pilot-aircraft system crossover frequency information guides the compensation filter specification.
Information transfer rate with serial and simultaneous visual display formats
NASA Astrophysics Data System (ADS)
Matin, Ethel; Boff, Kenneth R.
1988-04-01
Information communication rate for a conventional display with three spatially separated windows was compared with rate for a serial display in which data frames were presented sequentially in one window. For both methods, each frame contained a randomly selected digit with various amounts of additional display 'clutter.' Subjects recalled the digits in a prescribed order. Large rate differences were found, with faster serial communication for all levels of the clutter factors. However, the rate difference was most pronounced for highly cluttered displays. An explanation for the latter effect in terms of visual masking in the retinal periphery was supported by the results of a second experiment. The working hypothesis that serial displays can speed information transfer for automatic but not for controlled processing is discussed.
Chasing the negawatt: visualization for sustainable living.
Bartram, Lyn; Rodgers, Johnny; Muise, Kevin
2010-01-01
Energy and resource management is an important and growing research area at the intersection of conservation, sustainable design, alternative energy production, and social behavior. Energy consumption can be significantly reduced by simply changing how occupants inhabit and use buildings, with little or no additional costs. Reflecting this fact, an emerging measure of grid energy capacity is the negawatt: a unit of power saved by increasing efficiency or reducing consumption. Visualization clearly has an important role in enabling residents to understand and manage their energy use. This role is tied to providing real-time feedback of energy use, which encourages people to conserve energy. The challenge is to understand not only what kinds of visualizations are most effective but also where and how they fit into a larger information system to help residents make informed decisions. In this article, we also examine the effective display of home energy-use data using a net-zero solar-powered home (North House) and the Adaptive Living Interface System (ALIS), North House's information backbone.
Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A
2015-01-01
Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources on an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident on later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole report paradigm that allows investigating visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lee, Wendy
The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extractions of information, with little focus on sound. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.
Subjective and objective evaluation of visual fatigue on viewing 3D display continuously
NASA Astrophysics Data System (ADS)
Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang
2015-03-01
In recent years, three-dimensional (3D) displays have become increasingly popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators and objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo contents on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate the visual fatigue whenever it changes, without breaking the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with a previous study. The results show that the subjective visual fatigue and PERCLOS increase with time and that they are greater in a continuous process than a discrete one. The BF increased with time during the continuous viewing process. Besides, the visual fatigue also induced significant changes in VRT, CFF and PMA.
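The two objective eye measures tracked during viewing, BF and PERCLOS, can both be computed from an eyelid-closure signal. A hedged sketch; the 80% closure threshold and 10 Hz sampling rate are illustrative assumptions, not values from the study:

```python
# Sketch of blink frequency (BF, blinks per minute) and PERCLOS (fraction of
# time the eye is at least 80% closed). Threshold and sampling rate are
# assumptions for illustration, not parameters from the paper.

def perclos(closure, threshold=0.8):
    """closure: per-sample eyelid closure fraction (0 = open .. 1 = closed)."""
    closed = sum(1 for c in closure if c >= threshold)
    return closed / len(closure)

def blink_frequency(closure, fs, threshold=0.8):
    """Count rising edges into the 'closed' state; return blinks per minute."""
    blinks, prev = 0, False
    for c in closure:
        cur = c >= threshold
        if cur and not prev:
            blinks += 1
        prev = cur
    minutes = len(closure) / fs / 60.0
    return blinks / minutes

fs = 10  # samples per second (assumed)
# 12 s of synthetic data containing two brief blinks:
closure = [0.1] * 55 + [0.9] * 3 + [0.1] * 55 + [0.95] * 4 + [0.2] * 3
print(round(perclos(closure), 3), round(blink_frequency(closure, fs), 1))
```

In a real recording the closure signal would come from an eye tracker or video-based eyelid detector rather than a synthetic list.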
Brown, Alan P; Drew, Philip; Knight, Brian; Marc, Philippe; Troth, Sean; Wuersch, Kuno; Zandee, Joyce
2016-12-01
Histopathology data comprise a critical component of pharmaceutical toxicology studies and are typically presented as finding incidence counts and severity scores per organ, and tabulated on multiple pages, which can be challenging for review and aggregation of results. However, the SEND (Standard for Exchange of Nonclinical Data) standard provides a means for collecting and managing histopathology data in a uniform fashion which can allow informatics systems to archive, display and analyze data in novel ways. Various software applications have become available to convert histopathology data into graphical displays for analyses. A subgroup of the FDA-PhUSE Nonclinical Working Group conducted intra-industry surveys regarding the use of graphical displays of histopathology data. Visual cues, use-cases, the value of cross-domain and cross-study visualizations, and limitations were topics for discussion in the context of the surveys. The subgroup came to the following conclusions. Graphical displays appear advantageous as a communication tool to both pathologists and non-pathologists, and provide an efficient means for communicating pathology findings to project teams. Graphics can support hypothesis generation, which could include cross-domain interactive visualizations and/or aggregating large datasets from multiple studies to observe and/or display patterns and trends. Incorporation of the SEND standard will provide a platform by which visualization tools will be able to aggregate, select and display information from complex and disparate datasets. Copyright © 2016 Elsevier Inc. All rights reserved.
2006-06-01
allowing substantial see-around capability. Regions of visual suppression due to binocular rivalry (luning) are shown along the shaded flanks of...that the visual suppression of binocular rivalry, luning (Velger, 1998, p. 56-58), associated with the partial overlap conditions did not materially...tags were displayed. Thus, the frequency of conflicting binocular contours was reduced. In any case, luning does not seem to introduce major
Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega
2015-01-01
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with the light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189
ERIC Educational Resources Information Center
Weisberg, Michael
Many of the findings from ergonomics research on visual display workstations are relevant to the design of interactive learning stations. This 1993 paper briefly reviews ergonomics research on visual display workstations; specifically, (1) potential health hazards from electromagnetic radiation; (2) musculoskeletal disorders; (3) vision complaints;…
Study of Man-Machine Communications Systems for the Handicapped. Interim Report.
ERIC Educational Resources Information Center
Kafafian, Haig
Newly developed communications systems for exceptional children include Cybercom; CYBERTYPE; Cyberplace, a keyless keyboard; Cyberphone, a telephonic communication system for deaf and speech impaired persons; Cyberlamp, a visual display; Cyberview, a fiber optic bundle remote visual display; Cybersem, an interface for the blind, fingerless, and…
Instrument Display Visual Angles for Conventional Aircraft and the MQ-9 Ground Control Station
NASA Technical Reports Server (NTRS)
Kamine, Tovy Haber; Bendrick, Gregg A.
2008-01-01
Aircraft instrument panels should be designed such that primary displays are in the optimal viewing location to minimize pilot perception and response time. Human factors engineers define three zones (i.e., cones) of visual location: 1) "Easy Eye Movement" (foveal vision); 2) "Maximum Eye Movement" (peripheral vision with saccades); and 3) "Head Movement" (head movement required). Instrument display visual angles were measured to determine how well conventional aircraft (T-34, T-38, F-15B, F-16XL, F/A-18A, U-2D, ER-2, King Air, G-III, B-52H, DC-10, B747-SCA) and the MQ-9 ground control station (GCS) complied with these standards, and how they compared with each other. Selected instrument parameters included: attitude, pitch, bank, power, airspeed, altitude, vertical speed, heading, turn rate, slip/skid, AOA, flight path, latitude, longitude, course, bearing, range, and time. Vertical and horizontal visual angles for each component were measured from the pilot's eye position in each system. The vertical visual angles of displays in conventional aircraft lay within the cone of "Easy Eye Movement" for all but three of the parameters measured, and almost all of the horizontal visual angles fell within this range. All conventional vertical and horizontal visual angles lay within the cone of "Maximum Eye Movement". However, most instrument vertical visual angles of the MQ-9 GCS lay outside the cone of "Easy Eye Movement", though all were within the cone of "Maximum Eye Movement". All the horizontal visual angles for the MQ-9 GCS were within the cone of "Easy Eye Movement". Most instrument displays in conventional aircraft lay within the cone of "Easy Eye Movement", though mission-critical instruments sometimes displaced less important instruments outside this area. Many of the MQ-9 GCS systems lay outside this area. Specific training for MQ-9 pilots may be needed to avoid increased response time and potential error during flight.
The learning objectives include: 1) Know the three physiologic cones of eye/head movement; 2) Understand how instrument displays comply with these design principles in conventional aircraft and in an uninhabited aerial vehicle system. Which of the following is NOT a recognized physiologic principle of instrument display design? 1) Cone of Easy Eye Movement; 2) Cone of Binocular Eye Movement; 3) Cone of Maximum Eye Movement; 4) Cone of Head Movement; 5) None of the above. Answer: 2) Cone of Binocular Eye Movement
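The visual-angle measurement behind the three cones is simple trigonometry. A sketch; the 15° and 35° half-angle limits used to classify the cones are commonly cited human-factors values assumed here for illustration, not numbers taken from the paper:

```python
import math

# Sketch of the visual-angle computation behind the three "cones".
# Cone half-angle limits (15 deg easy eye movement, 35 deg maximum eye
# movement) are assumed illustrative values, not the paper's thresholds.

def visual_angle_deg(offset, distance):
    """Angle subtended at the eye by a display element offset `offset`
    from the line of sight, at viewing distance `distance` (same units)."""
    return math.degrees(math.atan2(offset, distance))

def classify_cone(angle_deg, easy=15.0, max_eye=35.0):
    a = abs(angle_deg)
    if a <= easy:
        return "easy eye movement"
    if a <= max_eye:
        return "maximum eye movement"
    return "head movement"

# An instrument 20 cm below the line of sight at a 70 cm eye distance:
angle = visual_angle_deg(20.0, 70.0)
print(round(angle, 1), classify_cone(angle))
```

Measuring each instrument's vertical and horizontal offset from the design eye position and classifying the resulting angles reproduces the kind of panel audit the paper describes.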
A tactual pilot aid for the approach-and-landing task: Inflight studies
NASA Technical Reports Server (NTRS)
Gilson, R. D.; Fenton, R. E.
1973-01-01
A pilot aid -- a kinesthetic-tactual compensatory display -- for assisting novice pilots in various inflight situations has undergone preliminary inflight testing. The efficacy of this display, as compared with two types of visual displays, was evaluated in both a highly structured approach-and-landing task and a less structured test involving tight turns about a point. In both situations, the displayed quantity was the deviation (α₀ - α) in angle of attack from a desired value α₀. In the former, the performance with the tactual display was comparable with that obtained using a visual display of (α₀ - α), while in the latter, substantial improvements (reduced tracking error (55%), decreased maximum altitude variations (67%), and decreased speed variations (43%)) were obtained using the tactual display. It appears that such a display offers considerable potential for inflight use.
Display characterization by eye: contrast ratio and discrimination throughout the grayscale
NASA Astrophysics Data System (ADS)
Gille, Jennifer; Arend, Larry; Larimer, James O.
2004-06-01
We have measured the ability of observers to estimate the contrast ratio (maximum white luminance / minimum black or gray) of various displays and to assess luminous discrimination over the tonescale of the display. This was done using only the computer itself and easily-distributed devices such as neutral density filters. The ultimate goal of this work is to see how much of the characterization of a display can be performed by the ordinary user in situ, in a manner that takes advantage of the unique abilities of the human visual system and measures visually important aspects of the display. We discuss the relationship among contrast ratio, tone scale, display transfer function and room lighting. These results may contribute to the development of applications that allow optimization of displays for the situated viewer / display system without instrumentation and without indirect inferences from laboratory to workplace.
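The matching logic behind by-eye contrast-ratio estimation with neutral density (ND) filters can be sketched numerically. The luminance values below are invented for illustration; only the filter physics (a density-D filter transmits 10^-D of the light) is standard:

```python
# Sketch of ND-filter-based contrast-ratio estimation: if white viewed
# through a filter of density D visually matches the display's black, the
# contrast ratio is roughly 10**D. Luminances here are made-up numbers.

def transmitted(luminance, density):
    """Luminance after passing through an ND filter of the given density."""
    return luminance * 10 ** (-density)

def estimated_contrast_ratio(matching_density):
    """Contrast ratio implied by the ND density at which white matches black."""
    return 10 ** matching_density

white, black = 200.0, 0.2   # cd/m^2 (illustrative)
true_cr = white / black     # 1000:1
# Suppose the observer finds that white seen through a 3.0 ND filter
# matches the display's black:
print(transmitted(white, 3.0), estimated_contrast_ratio(3.0), true_cr)
```

This is why such characterization can be done in situ with "easily-distributed devices": the filter, not a photometer, supplies the calibrated attenuation.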
Signal enhancement, not active suppression, follows the contingent capture of visual attention.
Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J
2017-02-01
Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Ceiling art in a radiation therapy department: its effect on patient treatment experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonett, Jotham
A new initiative has been implemented at the Sunshine Hospital Radiation Therapy Centre to provide a calming and comforting environment for patients attending radiation therapy treatment. As part of this initiative, the department's computed tomography (CT) room and radiation therapy bunkers were designed to incorporate ceiling art that replicates a number of different visual scenes. The study was undertaken to determine if ceiling art in the radiation therapy CT room and treatment bunkers had an effect on a patient's experience during treatment at the department. Additionally, the study aimed to identify which of the visuals in the ceiling art were most preferred by patients. Patients were requested to complete a 12-question survey. The survey solicited a patient's opinion/perception of the unit's unique ceiling display, with emphasis on aesthetic appeal, patient treatment experience and the patient's engagement due to the ceiling display. The responses were dichotomised to 'positive' or 'negative'. Every sixth patient who completed the survey was invited to have a general face-to-face discussion to provide further information about their thoughts on the displays. The results demonstrate that the ceiling artwork solicited a positive reaction in 89.8% of patients surveyed. This score indicates that ceiling artwork contributed positively to patients' experiences during radiation therapy treatment. The study suggests that ceiling artwork in the department has a positive effect on patient experience during radiation therapy treatment at the department.
Augmented reality 3D display based on integral imaging
NASA Astrophysics Data System (ADS)
Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua
2017-02-01
Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light, and as a transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, which presents a double-sided 3D display feature.
Evaluating Middle School Students' Spatial-scientific Performance in Earth-space Science
NASA Astrophysics Data System (ADS)
Wilhelm, Jennifer; Jackson, C.; Toland, M. D.; Cole, M.; Wilhelm, R. J.
2013-06-01
Many astronomical concepts cannot be understood without a developed understanding of four spatial-mathematics domains defined as follows: a) Geometric Spatial Visualization (GSV) - Visualizing the geometric features of a system as it appears above, below, and within the system’s plane; b) Spatial Projection (SP) - Projecting to a different location and visualizing from that global perspective; c) Cardinal Directions (CD) - Distinguishing directions (N, S, E, W) in order to document an object’s vector position in space; and d) Periodic Patterns (PP) - Recognizing occurrences at regular intervals of time and/or space. For this study, differences were examined between groups of sixth grade students’ spatial-scientific development pre/post implementation of an Earth/Space unit. Treatment teachers employed a NASA-based curriculum (Realistic Explorations in Astronomical Learning), while control teachers implemented their regular Earth/Space units. A 2-level hierarchical linear model was used to evaluate student performance on the Lunar Phases Concept Inventory (LPCI) and four spatial-mathematics domains, while controlling for two variables (gender and ethnicity) at the student level and one variable (teaching experience) at the teacher level. Overall LPCI results show pre-test scores predicted post-test scores, boys performed better than girls, and Whites performed better than non-Whites. We also compared experimental and control groups’ outcomes by spatial-mathematics domain. For GSV, it was found that boys, in general, tended to have higher GSV post-scores. For domains CD and SP, no statistically significant differences were observed. PP results show Whites performed better than non-Whites. Also for PP, a significant cross-level interaction term (gender-treatment) was observed, which means differences in control and experimental groups are dependent on students’ gender.
These findings can be interpreted as: (a) the experimental girls scored higher than the control girls and/or (b) the control group displayed a gender gap in favor of boys while no gender gap was displayed within the experimental group.
The effects of task difficulty on visual search strategy in virtual 3D displays.
Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa
2013-08-28
Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.
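One of the new measures the paper proposes, saccadic step size, is simply the distance between successive fixation positions along the scanpath. A minimal sketch with made-up fixation coordinates:

```python
import math

# Sketch of saccadic step size: the distance between successive fixation
# positions. The scanpath coordinates below (in degrees of visual angle)
# are invented for illustration.

def saccadic_step_sizes(fixations):
    """fixations: list of (x, y) fixation positions."""
    return [math.dist(a, b) for a, b in zip(fixations, fixations[1:])]

def mean(xs):
    return sum(xs) / len(xs)

scanpath = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0), (9.0, 2.0)]
steps = saccadic_step_sizes(scanpath)
print([round(s, 1) for s in steps], round(mean(steps), 2))
```

Larger mean step sizes would indicate a coarser, more exploratory scan; smaller steps, the kind of systematic left-right, top-down sweep reported for the difficult task.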
Optimum viewing distance for target acquisition
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
2015-05-01
Human visual system (HVS) "resolution" (a.k.a. visual acuity) varies with illumination level, target characteristics, and target contrast. For signage, computer displays, cell phones, and TVs, a viewing distance and display size are selected. Then the number of display pixels is chosen such that each pixel subtends 1 minute of arc. Resolution of low contrast targets is quite different. It is best described by Barten's contrast sensitivity function. Target acquisition models predict maximum range when the display pixel subtends 3.3 minutes of arc. The optimum viewing distance is nearly independent of magnification. Noise increases the optimum viewing distance.
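The design rule in the abstract, choosing pixel count or viewing distance so each pixel subtends a target visual angle (e.g. one minute of arc), is simple trigonometry. A sketch; the 0.25 mm pixel pitch is an illustrative assumption, roughly a 100 ppi desktop display:

```python
import math

# Sketch of the pixel-subtense geometry: at what distance does one pixel
# subtend a given visual angle? Pixel pitch is an illustrative assumption.

def pixel_subtense_arcmin(pixel_pitch_mm, distance_mm):
    """Visual angle subtended by one pixel, in minutes of arc."""
    return math.degrees(math.atan2(pixel_pitch_mm, distance_mm)) * 60.0

def distance_for_subtense(pixel_pitch_mm, arcmin):
    """Viewing distance at which one pixel subtends `arcmin` minutes of arc."""
    return pixel_pitch_mm / math.tan(math.radians(arcmin / 60.0))

pitch = 0.25                            # mm (assumed, ~100 ppi)
d1 = distance_for_subtense(pitch, 1.0)  # the 1-arcmin design point
print(round(d1), round(pixel_subtense_arcmin(pitch, d1), 3))
```

Repeating the calculation for a larger subtense, as target acquisition models prescribe for low-contrast targets, yields a proportionally shorter optimum viewing distance.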
Pixels, people, perception, pet peeves, and possibilities: a look at displays
NASA Astrophysics Data System (ADS)
Task, H. Lee
2007-04-01
This year marks the 35th anniversary of the Visually Coupled Systems symposium held at Brooks Air Force Base, San Antonio, Texas, in November of 1972. This paper uses the proceedings of the 1972 VCS symposium as a guide to address several topics associated primarily with helmet-mounted displays, systems integration, and the human-machine interface. Specific topics addressed include monocular and binocular helmet-mounted displays (HMDs), visor-projection HMDs, color HMDs, system integration with aircraft windscreens, visual interface issues, and others. In addition, this paper addresses a few mysteries and irritations (pet peeves) collected over the past 35+ years of experience in the display and display-related areas.
NASA Technical Reports Server (NTRS)
Aretz, Anthony J.
1990-01-01
This paper presents a cognitive model of a pilot's navigation task and describes an experiment comparing a visual momentum map display to the traditional track-up and north-up approaches. The data show that the advantage of a track-up map is its congruence with the ego-centered forward view; however, the development of survey knowledge is hindered by the inconsistency of the rotating display. The stable alignment of a north-up map aids the acquisition of survey knowledge, but there is a cost associated with the mental rotation of the display to a track-up alignment for ego-centered tasks. The results also show that visual momentum can be used to reduce the mental rotation costs of a north-up display.
A tactual display aid for primary flight training
NASA Technical Reports Server (NTRS)
Gilson, R. D.
1979-01-01
A means of flight instruction is discussed. In addition to verbal assistance, control feedback was continuously presented via a nonvisual means utilizing touch. A kinesthetic-tactile (KT) display was used as a readout and tracking device for a computer-generated signal of desired angle of attack during the approach and landing. Airspeed and glide path information was presented via KT or visual head-up display techniques. Performance with the head-up display of pitch information was shown to be significantly better than performance with the KT pitch display. Testing without the displays showed that novice pilots who had received tactile pitch-error information performed both pitch and throttle control tasks significantly better than those who had received the same information from the visual head-up display of pitch during the test series of approaches to landing.
Strength of visual interpolation depends on the ratio of physically specified to total edge length.
Shipley, T F; Kellman, P J
1992-07-01
We report four experiments in which the strength of edge interpolation in illusory figure displays was tested. In Experiment 1, we investigated the relative contributions of the lengths of luminance-specified edges and the gaps between them to perceived boundary clarity as measured by using a magnitude estimation procedure. The contributions of these variables were found to be best characterized by a ratio of the length of luminance-specified contour to the length of the entire edge (specified plus interpolated edge). Experiment 2 showed that this ratio predicts boundary clarity for a wide range of ratio values and display sizes. There was no evidence that illusory figure boundaries are clearer in displays with small gaps than they are in displays with larger gaps and equivalent ratios. In Experiment 3, using a more sensitive pairwise comparison paradigm, we again found no such effect. Implications for boundary interpolation in general, including perception of partially occluded objects, are discussed. The dependence of interpolation on the ratio of physically specified edges to total edge length has the desirable ecological consequence that unit formation will not change with variations in viewing distance.
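The ratio described above, often called the support ratio, is trivial to compute; the sketch below (with invented edge lengths) also illustrates the ecological point in the final sentence, that uniform scaling of the display, as with a change in viewing distance, leaves the ratio unchanged:

```python
def support_ratio(specified_length: float, interpolated_length: float) -> float:
    """Ratio of physically specified contour length to total edge length
    (specified plus interpolated)."""
    total = specified_length + interpolated_length
    return specified_length / total

# Doubling all lengths (e.g., halving viewing distance) leaves the ratio unchanged:
print(support_ratio(3.0, 1.0), support_ratio(6.0, 2.0))
```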
NASA Astrophysics Data System (ADS)
Irisawa, Kaku; Murakoshi, Dai; Hashimoto, Atsushi; Yamamoto, Katsuya; Hayakawa, Toshiro
2017-03-01
Visualization of the tip of medical devices such as needles or catheters under ultrasound imaging has been a continuing topic since the early 1980s. In this study, a needle-tip visualization system utilizing the photoacoustic effect is proposed. In order to visualize the needle tip, an optical fiber was inserted into a needle. The optical fiber tip is placed on the needle bevel and affixed with black glue. The pulsed laser light from a laser diode was transferred to the optical fiber and converted to ultrasound through laser light absorption by the black glue and the subsequent photoacoustic effect. The ultrasound is detected by the transducer array and reconstructed into photoacoustic images in the ultrasound unit. The photoacoustic image is displayed superposed on an ultrasound B-mode image. As a system evaluation, the needle was punctured into bovine meat and the needle tip was observed with commercial conventional linear or convex transducers. The needle tip is visualized clearly at 7 and 12 cm depths with linear and convex probes, respectively, even with a steep needle puncture angle of around 90 degrees. Laser and acoustic outputs, and the thermal rise at the needle tip, were measured and were well below the limits of the safety standards. Compared with existing needle-tip visualization technologies, the photoacoustic system offers potentially distinguishing features for clinical procedures involving needle puncture and injection.
Stereoscopic display of 3D models for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2006-02-01
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations create more immersive and spatially realistic renderings of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will also discuss several architectural and engineering design visualizations we have produced.
An ethogram for Benthic Octopods (Cephalopoda: Octopodidae).
Mather, Jennifer A; Alupay, Jean S
2016-05-01
The present paper constructs a general ethogram for the actions of the flexible body as well as the skin displays of octopuses in the family Octopodidae. The actions of 6 sets of structures (mantle-funnel, arms, sucker-stalk, skin-web, head, and mouth) combine to produce behavioral units that involve positioning of parts leading to postures such as the flamboyant, movements of parts of the animal with relation to itself including head bob and grooming, and movements of the whole animal by both jetting in the water and crawling along the substrate. Muscular actions result in 4 key changes in skin display: (a) chromatophore expansion, (b) chromatophore contraction resulting in the appearance of reflective colors from iridophores and leucophores, (c) erection of papillae on the skin, and (d) overall postures of arms and mantle controlled by actions of the octopus muscular hydrostat. They produce appearances, including excellent camouflage, the moving "passing cloud," and iridescent blue rings, with only a few known species-specific male visual sexual displays. Commonalities across the family suggest that, despite having flexible muscular-hydrostat movement systems producing several behavioral units, simplicity of production may underlie the complexity of movement and appearance. This systematic framework allows researchers to take the next step in modeling how such diversity can be a combination of just a few variables.
The pedagogical toolbox: computer-generated visual displays, classroom demonstration, and lecture.
Bockoven, Jerry
2004-06-01
This analogue study compared the effectiveness of computer-generated visual displays, classroom demonstration, and traditional lecture as methods of instruction used to teach neuronal structure and processes. A total of 116 randomly assigned undergraduate students participated in 1 of 3 classrooms in which they experienced the same content but different teaching approaches presented by 3 different student-instructors. Participants then completed a survey of their subjective reactions and a measure of factual information designed to evaluate objective learning outcomes. Participants repeated this factual measure 5 weeks later. Results call into question the use of classroom demonstration methods as well as the trend toward devaluing traditional lecture in favor of computer-generated visual displays.
Displays. [three dimensional analog visual system for aiding pilot space perception
NASA Technical Reports Server (NTRS)
1974-01-01
An experimental investigation made to determine the depth cue of head-movement perspective and image intensity as a function of depth is summarized. The experiment was based on the use of a hybrid-computer-generated contact-analog visual display in which various perceptual depth cues are included on a two-dimensional CRT screen. The system's purpose was to impart information, in an integrated and visually compelling fashion, about the vehicle's position and orientation in space. Results show that head movement gives a 40% improvement in depth discrimination when the display is between 40 and 100 cm from the subject; intensity variation resulted in as much improvement as head movement.
1978-10-01
Garner, W. R. and C. G. Creelman, "Effect of Redundancy and Duration on Absolute Judgments of Visual Stimuli," Journal of Experimental Psychology, 67... "Laws of Visual Choice Reaction Time," Psychological Review, 81, 1, 1974, pp. 75-98. Hitt, W. D., "An Evaluation of Five Different Abstract Coding... for Visual Displays," Office of Naval Research Contract No. N00014-68-C-02711, Office of Naval Research, Engineering Psychology Branch.
Display technology - Human factors concepts
NASA Astrophysics Data System (ADS)
Stokes, Alan; Wickens, Christopher; Kite, Kirsten
1990-03-01
Recent advances in the design of aircraft cockpit displays are reviewed, with an emphasis on their applicability to automobiles. The fundamental principles of display technology are introduced, and individual chapters are devoted to selective visual attention, command and status displays, foveal and peripheral displays, navigational displays, auditory displays, color and pictorial displays, head-up displays, automated systems, and dual-task performance and pilot workload. Diagrams, drawings, and photographs of typical displays are provided.
Numerosity underestimation with item similarity in dynamic visual display.
Au, Ricky K C; Watanabe, Katsumi
2013-01-01
The estimation of numerosity of a large number of objects in a static visual display is possible even at short durations. Such coarse approximations of numerosity are distinct from subitizing, in which the number of objects can be reported with high precision when a small number of objects are presented simultaneously. The present study examined numerosity estimation of visual objects in dynamic displays and the effect of object similarity on numerosity estimation. In the basic paradigm (Experiment 1), two streams of dots were presented and observers were asked to indicate which of the two streams contained more dots. Streams consisting of dots that were identical in color were judged as containing fewer dots than streams where the dots were different colors. This underestimation effect for identical visual items disappeared when the presentation rate was slower (Experiment 1) or the visual display was static (Experiment 2). In Experiments 3 and 4, in addition to the numerosity judgment task, observers performed an attention-demanding task at fixation. Task difficulty influenced observers' precision in the numerosity judgment task, but the underestimation effect remained evident irrespective of task difficulty. These results suggest that identical or similar visual objects presented in succession might induce substitution among themselves, leading to an illusion that there are fewer items overall, and that exploiting attentional resources does not eliminate the underestimation effect.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2006-01-01
The visual requirements for augmented reality or virtual environment displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14°, 28°, and 47°) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47° are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.
Optimizing Cognitive Load for Learning from Computer-Based Science Simulations
ERIC Educational Resources Information Center
Lee, Hyunjeong; Plass, Jan L.; Homer, Bruce D.
2006-01-01
How can cognitive load in visual displays of computer simulations be optimized? Middle-school chemistry students (N = 257) learned with a simulation of the ideal gas law. Visual complexity was manipulated by separating the display of the simulations in two screens (low complexity) or presenting all information on one screen (high complexity). The…
Designing a Visual Factors-Based Screen Display Interface: The New Role of the Graphic Technologist.
ERIC Educational Resources Information Center
Faiola, Tony; DeBloois, Michael L.
1988-01-01
Discusses the role of the graphic technologist in preparing computer screen displays for interactive videodisc systems, and suggests screen design guidelines. Topics discussed include the grid system; typography; visual factors research; color; course mobility through branching and software menus; and a model of course integration. (22 references)…
ERIC Educational Resources Information Center
Huettig, Falk; McQueen, James M.
2007-01-01
Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with "beker," "beaker," for example, the display contained phonological (a beaver, "bever"), shape (a…
Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search
ERIC Educational Resources Information Center
Geringswald, Franziska; Pollmann, Stefan
2015-01-01
Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…
Use of Linear Perspective Scene Cues in a Simulated Height Regulation Task
NASA Technical Reports Server (NTRS)
Levison, W. H.; Warren, R.
1984-01-01
As part of a long-term effort to quantify the effects of visual scene cuing and non-visual motion cuing in flight simulators, an experimental study of the pilot's use of linear perspective cues in a simulated height-regulation task was conducted. Six test subjects performed a fixed-base tracking task with a visual display consisting of a simulated horizon and a perspective view of a straight, infinitely-long roadway of constant width. Experimental parameters were (1) the central angle formed by the roadway perspective and (2) the display gain. The subject controlled only the pitch/height axis; airspeed, bank angle, and lateral track were fixed in the simulation. The average RMS height error score for the least effective display configuration was about 25% greater than the score for the most effective configuration. Overall, larger and more highly significant effects were observed for the pitch and control scores. Model analysis was performed with the optimal control pilot model to characterize the pilot's use of visual scene cues, with the goal of obtaining a consistent set of independent model parameters to account for display effects.
Braun, J
1994-02-01
In more than one respect, visual searches for the most salient and for the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.
Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua
2016-01-01
Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.
A versatile stereoscopic visual display system for vestibular and oculomotor research.
Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S
1998-01-01
Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
Visual-conformal display format for helicopter guidance
NASA Astrophysics Data System (ADS)
Doehler, H.-U.; Schmerwitz, Sven; Lueken, Thomas
2014-06-01
Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides newly available sensors, which are able to "see" through darkness, fog, and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. "Situational awareness" of humans is mainly powered by their visual channel. Therefore, display systems which are able to cross-fade seamlessly from natural vision to artificial computer vision and vice versa are of greatest interest within this context. Helmet-mounted displays (HMDs) have this property when they apply a head tracker for measuring the pilot's head orientation relative to the aircraft reference frame. Together with the aircraft's position and orientation relative to the world's reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world "visual-conformal." Published display formats for helicopter guidance in degraded visual environments apply mostly 2D symbologies, which fall far short of what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal within our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots give some first evaluation results for our proposal.
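The transform chain behind a visual-conformal symbol (world frame to aircraft frame to head frame, then projection onto the display) can be sketched in a yaw-only toy model. All function names, the pinhole projection, and the numbers below are our own illustration, not the authors' implementation:

```python
import math

def world_to_screen_az(target_az_deg, aircraft_heading_deg, head_yaw_deg,
                       fov_deg=40.0, screen_px=1280):
    """Map a world azimuth to a horizontal HMD pixel via a pinhole model.
    Returns None if the symbol falls outside the display's field of view."""
    # Azimuth relative to the line of sight: world -> aircraft -> head.
    rel = target_az_deg - aircraft_heading_deg - head_yaw_deg
    rel = (rel + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    half_fov = fov_deg / 2.0
    if abs(rel) > half_fov:
        return None
    # Tangent-plane projection onto the virtual image plane.
    x = math.tan(math.radians(rel)) / math.tan(math.radians(half_fov))
    return round((x + 1.0) / 2.0 * screen_px)

# A target dead ahead of the pilot's combined heading lands at screen center.
print(world_to_screen_az(95.0, 90.0, 5.0))
```

A real implementation uses full 3-DOF head and aircraft attitudes plus position, but the principle is the same: the head-tracker term is what keeps the symbol glued to the outside world as the pilot looks around.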
NASA Astrophysics Data System (ADS)
Bianchi, R. M.; Boudreau, J.; Konstantinidis, N.; Martyniuk, A. C.; Moyse, E.; Thomas, J.; Waugh, B. M.; Yallup, D. P.; ATLAS Collaboration
2017-10-01
In their early days, HEP experiments made use of photographic images both to record and store experimental data and to illustrate their findings. As the experiments evolved, they needed new ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here, an overview of the usage of event display tools in HEP is presented. The case of the ATLAS experiment is then considered in more detail, and two widely used event display packages are presented, Atlantis and VP1, focusing on the software technologies they employ, as well as their strengths, differences, and their usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. The other ATLAS visualization tools are also briefly presented, and future development plans and improvements in the ATLAS event display packages are discussed.
Wireless, relative-motion computer input device
Holzrichter, John F.; Rosenbury, Erwin T.
2004-05-18
The present invention provides a system for controlling a computer display in a workspace using an input unit/output unit. A train of EM waves is sent out to flood the workspace. EM waves are reflected from the input unit/output unit. A relative-distance-moved information signal is created using the EM waves that are reflected from the input unit/output unit. Algorithms are used to convert the relative-distance-moved information signal to a display signal. The computer display is controlled in response to the display signal.
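The final conversion step, turning relative-distance-moved reports into a display signal, can be sketched as a simple clamped integrator. This is a generic illustration; the patent's actual algorithms are not described in this abstract:

```python
def integrate_motion(deltas, width=1920, height=1080, start=(960, 540)):
    """Accumulate relative (dx, dy) motion reports into an absolute cursor
    position, clamped to the display bounds."""
    x, y = start
    for dx, dy in deltas:
        x = min(max(x + dx, 0), width - 1)
        y = min(max(y + dy, 0), height - 1)
    return x, y

# A large leftward swipe saturates at the screen edge rather than wrapping.
print(integrate_motion([(10, -5), (-3000, 0), (4, 4)]))
```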
A method for real-time visual stimulus selection in the study of cortical object perception.
Leeds, Daniel D; Tarr, Michael J
2016-06-01
The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new fMRI protocol in which visual stimuli are selected in real time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm³ brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; and 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond.
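A real-time search of this kind can be caricatured as greedy hill-climbing in the stimulus feature space. The toy selector below is our own simplification, not the authors' protocol: it picks the unseen candidate nearest (in feature space) to the stimulus that evoked the strongest response so far:

```python
def next_stimulus(candidates, shown, responses):
    """Greedy real-time selection: choose the unseen candidate feature
    vector closest to the shown stimulus with the largest response."""
    best_shown = shown[max(range(len(responses)), key=responses.__getitem__)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    unseen = [c for c in candidates if c not in shown]
    return min(unseen, key=lambda c: dist2(c, best_shown))

shown = [(0.0, 0.0), (1.0, 1.0)]
responses = [0.2, 0.9]  # hypothetical per-stimulus BOLD estimates
candidates = [(0.1, 0.1), (0.9, 1.2), (3.0, 3.0)]
print(next_stimulus(candidates, shown, responses))
```

Real adaptive-fMRI searches use noise-tolerant optimizers rather than pure greed, but the closed loop (measure, update, select, display) has this shape.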
An Interactive Visual Analytics Framework for Multi-Field Data in a Geo-Spatial Context
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhiyuan; Tong, Xiaonan; McDonnell, Kevin T.
2013-04-01
Climate research produces a wealth of multivariate data. These data often have a geospatial reference and so it is of interest to show them within their geospatial context. One can consider this configuration as a multi-field visualization problem, where the geospace provides the expanse of the field. However, there is a limit on the amount of multivariate information that can be fit within a certain spatial location, and the use of linked multivariate information displays has previously been devised to bridge this gap. In this paper we focus on the interactions in the geographical display, present an implementation that uses Google Earth, and demonstrate it within a tightly linked parallel-coordinates display. Several other visual representations, such as pie and bar charts, are integrated into the Google Earth display and can be interactively manipulated. Further, we also demonstrate new brushing and visualization techniques for parallel coordinates, such as fixed-window brushing and correlation-enhanced display. We conceived our system with a team of climate researchers, who have already made a few important discoveries using it. This demonstrates our system's great potential to enable scientific discoveries, possibly also in other domains where data have a geospatial reference.
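Fixed-window brushing on a parallel-coordinates axis amounts to selecting the records whose value on that axis lies inside a window of constant width. A minimal sketch, with invented field names and data:

```python
def fixed_window_brush(records, axis, center, half_width):
    """Select records whose value on `axis` lies in [center - hw, center + hw].
    Dragging the brush changes `center` while the window width stays fixed."""
    lo, hi = center - half_width, center + half_width
    return [r for r in records if lo <= r[axis] <= hi]

data = [{"temp": 14.2}, {"temp": 19.8}, {"temp": 25.1}]
print(fixed_window_brush(data, "temp", 20.0, 2.0))
```

In a linked-view system, the selected subset would then be highlighted simultaneously in the geographical display and all other coordinated views.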
Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter
2018-01-01
Background Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. Methods We assessed three-dimensional reaching movements in five experimental groups each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those. The third screen group received the cues of the first screen group and absolute depth cues enabled by retinal image size of a known object, which realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second or the third screen group respectively. Additionally, they could rely on stereopsis and motion parallax due to head-movements. Results and conclusion All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display. 
Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512
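The kinematic measures named in this abstract (completion time normalized by straight-line distance, number of speed peaks, hand path ratio) are straightforward to compute from a sampled hand trajectory. The sketch below is an illustrative reconstruction, not the study's analysis code; the function and variable names are ours.

```python
import numpy as np

def reach_metrics(path):
    """Compute simple reaching-quality metrics from a sampled 3D hand path.

    path: (N, 3) array of hand positions sampled at a fixed rate.
    Returns (hand_path_ratio, n_speed_peaks); a ratio of 1.0 and a single
    speed peak indicate one straight, smooth reach.
    """
    path = np.asarray(path, dtype=float)
    straight = np.linalg.norm(path[-1] - path[0])        # straight-line distance
    step = np.linalg.norm(np.diff(path, axis=0), axis=1) # per-sample displacement
    hand_path_ratio = step.sum() / straight              # travelled / straight
    # strict local maxima of the speed profile (displacement per fixed step)
    peaks = np.sum((step[1:-1] > step[:-2]) & (step[1:-1] > step[2:]))
    return hand_path_ratio, int(peaks)
```

Normalized completion time would simply be the movement duration divided by the same straight-line distance.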
Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J A
2017-01-01
External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigated the usability of 3D augmented reality visual cues delivered by smart glasses, in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome, in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and the percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in the number of FOG episodes or the percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG.
This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability.
Hemispheric differences in recognizing upper and lower facial displays of emotion.
Prodan, C I; Orbelo, D M; Testa, J A; Ross, E D
2001-01-01
To determine if there are hemispheric differences in processing upper versus lower facial displays of emotion. Recent evidence suggests that there are two broad classes of emotions with differential hemispheric lateralization. Primary emotions (e.g. anger, fear) and associated displays are innate, are recognized across all cultures, and are thought to be modulated by the right hemisphere. Social emotions (e.g., guilt, jealousy) and associated "display rules" are learned during early child development, vary across cultures, and are thought to be modulated by the left hemisphere. Display rules are used by persons to alter, suppress or enhance primary emotional displays for social purposes. During deceitful behaviors, a subject's true emotional state is often leaked through upper rather than lower facial displays, giving rise to facial blends of emotion. We hypothesized that upper facial displays are processed preferentially by the right hemisphere, as part of the primary emotional system, while lower facial displays are processed preferentially by the left hemisphere, as part of the social emotional system. Thirty strongly right-handed adult volunteers were tested tachistoscopically by randomly flashing facial displays of emotion to the right and left visual fields. The stimuli were line drawings of facial blends with different emotions displayed on the upper versus lower face. The subjects were tested under two conditions: 1) without instructions and 2) with instructions to attend to the upper face. Without instructions, the subjects robustly identified the emotion displayed on the lower face, regardless of visual field presentation. With instructions to attend to the upper face, for the left visual field they robustly identified the emotion displayed on the upper face. For the right visual field, they continued to identify the emotion displayed on the lower face, but to a lesser degree.
Our results support the hypothesis that hemispheric differences exist in the ability to process upper versus lower facial displays of emotion. Attention appears to enhance the ability to explore these hemispheric differences under experimental conditions. Our data also support the recent observation that the right hemisphere has a greater ability to recognize deceitful behaviors compared with the left hemisphere. This may be attributable to the different roles the hemispheres play in modulating social versus primary emotions and related behaviors.
Tactile cueing effects on performance in simulated aerial combat with high acceleration.
van Erp, Jan B F; Eriksson, Lars; Levin, Britta; Carlander, Otto; Veltman, J A; Vos, Wouter K
2007-12-01
Recent evidence indicates that vibrotactile displays can potentially reduce the risk of sensory and cognitive overload. Before these displays can be introduced in super agile aircraft, it must be ascertained that vibratory stimuli can be sensed and interpreted by pilots subjected to high G loads. Each of 9 pilots intercepted 32 targets in the Swedish Dynamic Flight Simulator. Targets were indicated on simulated standard Gripen visual displays. In addition, in half of the trials target direction was also displayed on a 60-element tactile torso display. Performance measures and subjective ratings were recorded. Each pilot pulled G peaks above +8 Gz. With tactile cueing present, mean reaction time was reduced from 1458 ms (SE = 54) to 1245 ms (SE = 88). Mean total chase time for targets that popped up behind the pilot's aircraft was reduced from 13 s (SE = 0.45) to 12 s (SE = 0.41). Pilots rated the tactile display favorably over the visual displays at target pop-up on the ease of detecting a threat presence and on the clarity of the initial position of the threats. This study is the first to show that tactile display information is perceivable and useful in hypergravity (up to +9 Gz). The results show that the tactile display can capture attention at threat pop-up and improve threat awareness for threats in the back, even in the presence of high-end visual displays. It is expected that the added value of tactile displays may further increase after formal training and in situations of unexpected target pop-up.
NASA Astrophysics Data System (ADS)
Lahti, Paul M.; Motyka, Eric J.; Lancashire, Robert J.
2000-05-01
A straightforward procedure is described to combine computation of molecular vibrational modes using commonly available molecular modeling programs with visualization of the modes using advanced features of the MDL Information Systems Inc. Chime World Wide Web browser plug-in. Minor editing of experimental spectra that are stored in the JCAMP-DX format allows linkage of IR spectral frequency ranges to Chime molecular display windows. The spectra and animation files can be combined by Hypertext Markup Language programming to allow interactive linkage between experimental spectra and computationally generated vibrational displays. Both the spectra and the molecular displays can be interactively manipulated to allow the user maximum control of the objects being viewed. This procedure should be very valuable not only for aiding students through visual linkage of spectra and various vibrational animations, but also by assisting them in learning the advantages and limitations of computational chemistry by comparison to experiment.
Comparison of two head-up displays in simulated standard and noise abatement night visual approaches
NASA Technical Reports Server (NTRS)
Cronn, F.; Palmer, E. A., III
1975-01-01
Situation and command head-up displays were evaluated for both standard and two segment noise abatement night visual approaches in a fixed base simulation of a DC-8 transport aircraft. The situation display provided glide slope and pitch attitude information. The command display provided glide slope information and flight path commands to capture a 3 deg glide slope. Landing approaches were flown in both zero wind and wind shear conditions. For both standard and noise abatement approaches, the situation display provided greater glidepath accuracy in the initial phase of the landing approaches, whereas the command display was more effective in the final approach phase. Glidepath accuracy was greater for the standard approaches than for the noise abatement approaches in all phases of the landing approach. Most of the pilots preferred the command display and the standard approach. Substantial agreement was found between each pilot's judgment of his performance and his actual performance.
Working memory dependence of spatial contextual cueing for visual search.
Pollmann, Stefan
2018-05-10
When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match-to-sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence also has consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing. © 2018 The British Psychological Society.
Influence of visual path information on human heading perception during rotation.
Li, Li; Chen, Jing; Peng, Xiaozhe
2009-03-31
How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.
Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru
2015-01-01
Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend.
NASA Astrophysics Data System (ADS)
Morozov, Alexander; Dubinin, German; Dubynin, Sergey; Yanusik, Igor; Kim, Sun Il; Choi, Chil-Sung; Song, Hoon; Lee, Hong-Seok; Putilin, Andrey; Kopenkin, Sergey; Borodin, Yuriy
2017-06-01
Future commercialization of glasses-free holographic real 3D displays requires not only appropriate image quality but also a slim design of the backlight unit and the whole display device to match market needs. While much research has aimed at solving the computational issues of forming computer-generated holograms for 3D holographic displays, less attention has been paid to developing backlight units suitable for 3D holographic display applications with the form factor of conventional 2D display systems. We therefore report a coherent backlight unit for a 3D holographic display with a thickness comparable to commercially available 2D displays (cell phones, tablets, laptops, etc.). The coherent backlight unit provides uniform, highly collimated, and efficient illumination of the spatial light modulator. Such a backlight unit is made possible by holographic optical elements based on volume gratings, which construct a coherent collimated beam to illuminate the display plane. The design, recording, and measurement of a 5.5-inch coherent backlight unit based on two holographic optical elements are presented in this paper.
Search time critically depends on irrelevant subset size in visual search.
Benjamins, Jeroen S; Hooge, Ignace T C; van Elst, Jacco C; Wertheim, Alexander H; Verstraten, Frans A J
2009-02-01
In order for our visual system to deal with the massive amount of sensory input, some of this input is discarded, while other parts are processed [Wolfe, J. M. (1994). Guided search 2.0: a revised model of visual search. Psychonomic Bulletin and Review, 1, 202-238]. From the visual search literature it is unclear how well one set of items that differs in only one feature from the target (a 1F set) can be selected, while another set of items that differs in two features from the target (a 2F set) is ignored. We systematically varied the percentage of 2F non-targets to determine the contribution of these non-targets to search behaviour. Increasing the percentage of 2F non-targets, which have to be ignored, was expected to result in increasingly faster search, since it decreases the size of the 1F set that has to be searched. Observers searched large displays for a target in the 1F set with a variable percentage of 2F non-targets. Interestingly, when the search displays contained 5% 2F non-targets, the search time was longer than in the other conditions. This effect of 2F non-targets on performance was independent of set size. An inspection of the saccades revealed that saccade target selection did not contribute to the longer search times in displays with 5% 2F non-targets. The longer search times in displays containing 5% 2F non-targets might be attributed to covert processes related to visual analysis of the fixated part of the display. Apparently, visual search performance critically depends on the percentage of irrelevant 2F non-targets.
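A search display with a controlled percentage of 2F non-targets, as manipulated in this study, can be mocked up in a few lines. The item labels, seeding, and rounding rule below are illustrative assumptions, not the authors' stimulus code.

```python
import random

def make_display(n_items, pct_2f, seed=0):
    """Build a mock search display: one target, the rest non-targets.

    pct_2f is the percentage of non-targets that differ from the target in
    two features (the '2F' set); the remainder differ in one feature ('1F').
    The string labels merely stand in for real stimulus attributes.
    """
    rng = random.Random(seed)
    n_nontargets = n_items - 1
    n_2f = round(n_nontargets * pct_2f / 100)
    items = ["target"] + ["2F"] * n_2f + ["1F"] * (n_nontargets - n_2f)
    rng.shuffle(items)  # randomize item order/positions
    return items
```

For a 21-item display at 5% 2F non-targets, this yields one target, one 2F item, and nineteen 1F items.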
MONGKIE: an integrated tool for network analysis and visualization for multi-omics data.
Jang, Yeongjun; Yu, Namhee; Seo, Jihae; Kim, Sun; Lee, Sanghyuk
2016-03-18
Network-based integrative analysis is a powerful technique for extracting biological insights from multilayered omics data such as somatic mutations, copy number variations, and gene expression data. However, integrated analysis of multi-omics data is quite complicated and can hardly be done in an automated way. Thus, a powerful interactive visual mining tool supporting diverse analysis algorithms for identification of driver genes and regulatory modules is much needed. Here, we present a software platform that seamlessly integrates network visualization with omics data analysis tools. The visualization unit supports various options for displaying multi-omics data as well as unique network models for describing sophisticated biological networks such as complex biomolecular reactions. In addition, we implemented diverse in-house algorithms for network analysis including network clustering and over-representation analysis. Novel functions include facile definition and optimized visualization of subgroups, comparison of a series of data sets in an identical network by data-to-visual mapping and a subsequent overlaying function, and management of custom interaction networks. The utility of MONGKIE for network-based visual data mining of multi-omics data was demonstrated by analysis of the TCGA glioblastoma data. MONGKIE was developed in Java based on the NetBeans plugin architecture, thus being OS-independent with intrinsic support of module extension by third-party developers. We believe that MONGKIE is a valuable addition to network analysis software, supporting many unique features and visualization options, especially for analysing multi-omics data sets in cancer and other diseases.
How colorful! A feature it is, isn't it?
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz
2015-01-01
A display's color subpixel geometry provides an intriguing opportunity for improving the readability of text. TrueType fonts can be positioned at the precision of subpixel resolution. With such a constraint in mind, how does one need to design font characteristics? On the other hand, display manufacturers are working hard to address the color display's dilemma: smaller pixel pitch and larger display diagonals strongly increase the total number of pixels. Consequently, the cost of column and row drivers as well as power consumption increase. Perceptual color subpixel rendering using color component subsampling may save about 1/3 of color subpixels (and reduce power dissipation). This talk elaborates on the following questions, based on simulation of several different layouts of subpixel matrices: Up to what level are display device constraints compatible with software-specific ideas of rendering text? How much color contrast will remain? How best to consider preferred viewing distance for readability of text? How much does visual acuity vary at 20/20 vision? Can simplified models of human visual color perception be easily applied to text rendering on displays? How linear is human visual contrast perception around the band limit of a display's spatial resolution? How colorful does the rendered text appear on the screen? How much does viewing angle influence the performance of subpixel layouts and color subpixel rendering?
Perceived change in orientation from optic flow in the central visual field
NASA Technical Reports Server (NTRS)
Dyre, Brian P.; Andersen, George J.
1988-01-01
The effects of internal depth within a simulation display on perceived changes in orientation have been studied. Subjects monocularly viewed displays simulating observer motion within a volume of randomly positioned points through a window which limited the field of view to 15 deg. Changes in perceived spatial orientation were measured by changes in posture. The extent of internal depth within the display, the presence or absence of visual information specifying change in orientation, and the frequency of motion supplied by the display were examined. It was found that increased sway occurred at frequencies equal to or below 0.375 Hz when motion at these frequencies was displayed. The extent of internal depth had no effect on the perception of changing orientation.
Real-Time Visualization of Tissue Ischemia
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)
2000-01-01
A real-time display of tissue ischemia which comprises three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
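The per-channel intensity adjustment and RGB combination described above can be sketched as follows. The gain values, the normalization to [0, 1], and the function name are illustrative assumptions rather than the instrument's actual implementation.

```python
import numpy as np

def combine_bands(band_r, band_g, band_b, gains=(1.0, 1.0, 1.0)):
    """Combine three narrow-band camera frames into one RGB display frame.

    Each band is a 2D array of intensities from one filtered camera,
    normalized to [0, 1]; the per-channel gains stand in for the
    signal-intensity adjustment described in the abstract.
    """
    rgb = np.stack(
        [g * b for g, b in zip(gains, (band_r, band_g, band_b))], axis=-1
    )
    # clip to the displayable range before handing the frame to video output
    return np.clip(rgb, 0.0, 1.0)
```

A DSP-based enhancement step would operate on each band (or the combined frame) before this final mapping to the display.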
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1992-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
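The CRT nonlinearity discussed here is commonly modeled as a power law, L = L_max * v^gamma. A minimal sketch of this model and its inverse (gamma pre-correction) follows; the gamma of 2.2 and peak luminance are assumed illustrative defaults, since, as the abstract notes, a real display's transfer function must be measured.

```python
import numpy as np

def displayed_luminance(v, gamma=2.2, l_max=100.0):
    """Luminance (cd/m^2) emitted for a normalized video signal v in [0, 1],
    under an assumed power-law (gamma) model of the CRT transfer function."""
    return l_max * np.power(v, gamma)

def precorrect(v, gamma=2.2):
    """Pre-correct a linear image value so the display's power-law response
    reproduces the intended relative luminance (inverse gamma)."""
    return np.power(v, 1.0 / gamma)
```

Storing linear pixel values without this pre-correction is exactly the mismatch the paper analyzes: the stored digital image is then nonlinearly related to what is displayed.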
[Microcomputer control of a LED stimulus display device].
Ohmoto, S; Kikuchi, T; Kumada, T
1987-02-01
A visual stimulus display system controlled by a microcomputer was constructed at low cost. The system consists of a LED stimulus display device, a microcomputer, two interface boards, a pointing device (a "mouse") and two kinds of software. The first software package is written in BASIC. Its functions are: to construct stimulus patterns using the mouse, to construct letter patterns (alphabet, digit, symbols and Japanese letters--kanji, hiragana, katakana), to modify the patterns, to store the patterns on a floppy disc, to translate the patterns into integer data which are used to display the patterns in the second software. The second software package, written in BASIC and machine language, controls display of a sequence of stimulus patterns in predetermined time schedules in visual experiments.
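Packing an on/off LED pattern into integer data, as the first software package does before handing patterns to the display routine, can be sketched as follows. The bit order and encoding are assumptions for illustration; the original BASIC code is not described in detail.

```python
def pattern_to_ints(pattern):
    """Encode a 2D on/off LED pattern as one integer per row.

    Each row (a list of 0/1 values, leftmost LED taken as the most
    significant bit, an assumed convention) is packed into a single
    integer of the kind the display routine could consume.
    """
    ints = []
    for row in pattern:
        value = 0
        for bit in row:
            value = (value << 1) | (1 if bit else 0)
        ints.append(value)
    return ints
```

For example, the two rows `[1, 0, 1]` and `[0, 1, 1]` pack to the integers 5 and 3.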
Immersive Visual Data Analysis For Geoscience Using Commodity VR Hardware
NASA Astrophysics Data System (ADS)
Kreylos, O.; Kellogg, L. H.
2017-12-01
Immersive visualization using virtual reality (VR) display technology offers tremendous benefits for the visual analysis of complex three-dimensional data like those commonly obtained from geophysical and geological observations and models. Unlike "traditional" visualization, which has to project 3D data onto a 2D screen for display, VR can side-step this projection and display 3D data directly, in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection. As a result, researchers can apply their spatial reasoning skills to virtual data in the same way they can to real objects or environments. The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://keckcaves.org) has been developing VR methods for data analysis since 2005, but the high cost of VR displays has been preventing large-scale deployment and adoption of KeckCAVES technology. The recent emergence of high-quality commodity VR, spearheaded by the Oculus Rift and HTC Vive, has fundamentally changed the field. With KeckCAVES' foundational VR operating system, Vrui, now running natively on the HTC Vive, all KeckCAVES visualization software, including 3D Visualizer, LiDAR Viewer, Crusta, Nanotech Construction Kit, and ProtoShop, are now available to small labs, single researchers, and even home users. LiDAR Viewer and Crusta have been used for rapid response to geologic events including earthquakes and landslides, to visualize the impacts of sea-level rise, to investigate reconstructed paleo-oceanographic masses, and for exploration of the surface of Mars. The Nanotech Construction Kit is being used to explore the phases of carbon in Earth's deep interior, while ProtoShop can be used to construct and investigate protein structures.
Grubert, Anna; Eimer, Martin
2015-11-11
During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Lapolla, M. V.; Horblit, B.
1995-01-01
A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct measurement sensors in orbit. We explored the implementation of a rule-based environment for semi-automatic generation of visualizations that assist the domain scientist in exploring their data. The goal has been to enable rapid generation of visualizations which enhance the scientist's ability to thoroughly mine their data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment has been tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations which provided new insight into the data.
Toward the establishment of design guidelines for effective 3D perspective interfaces
NASA Astrophysics Data System (ADS)
Fitzhugh, Elisabeth; Dixon, Sharon; Aleva, Denise; Smith, Eric; Ghrayeb, Joseph; Douglas, Lisa
2009-05-01
The propagation of information operation technologies, with correspondingly vast amounts of complex network information to be conveyed, significantly impacts operator workload. Information management research is rife with efforts to develop schemes to aid operators to identify, review, organize, and retrieve the wealth of available data. Data may take on such distinct forms as intelligence libraries, logistics databases, operational environment models, or network topologies. Increased use of taxonomies and semantic technologies opens opportunities to employ network visualization as a display mechanism for diverse information aggregations. The broad applicability of network visualizations is still being tested, but in current usage, the complexity of densely populated abstract networks suggests the potential utility of 3D. Employment of 2.5D in network visualization, using classic perceptual cues, creates a 3D experience within a 2D medium. It is anticipated that use of 3D perspective (2.5D) will enhance user ability to visually inspect large, complex, multidimensional networks. Current research for 2.5D visualizations demonstrates that display attributes, including color, shape, size, lighting, atmospheric effects, and shadows, significantly impact operator experience. However, guidelines for utilization of attributes in display design are limited. This paper discusses pilot experimentation intended to identify potential problem areas arising from these cues and determine how best to optimize perceptual cue settings. Development of optimized design guidelines will ensure that future experiments, comparing network displays with other visualizations, are not confounded or impeded by suboptimal attribute characterization. Current experimentation is anticipated to support development of cost-effective, visually effective methods to implement 3D in military applications.
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
This presentation outlines in viewgraph format a general approach to the evaluation of display system quality for aviation applications. This approach is based on the assumption that it is possible to develop a model of the display which captures most of the significant properties of the display. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. Thus, for example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics. The opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation. The range of evaluation approaches includes visual inspection, rapid evaluation, part-task simulation, and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first approach is based on a model of the human visual system. A model of the human visual system is used to predict the performance of the selected tasks. The model-based evaluation approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks. These are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
ERIC Educational Resources Information Center
Thomson, Hilary J.; Thomas, Sian
2013-01-01
Visual display of reported impacts is a valuable aid to both reviewers and readers of systematic reviews. Forest plots are routinely prepared to report standardised effect sizes, but where standardised effect sizes are not available for all included studies a forest plot may misrepresent the available evidence. Tabulated data summaries to…
ERIC Educational Resources Information Center
Hout, Michael C.; Goldinger, Stephen D.
2012-01-01
When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…
Designing for Persuasion: Toward Ambient Eco-Visualization for Awareness
NASA Astrophysics Data System (ADS)
Kim, Tanyoung; Hong, Hwajung; Magerko, Brian
When people are aware of their lifestyle's ecological consequences, they are more likely to adjust their behavior to reduce their impact. Persuasive design that provides feedback to users without interfering with their primary tasks can increase awareness of neighboring problems. As a case study of design for persuasion, we designed two ambient displays as desktop widgets. Both represent a user's computer usage time, but in different visual styles. In this paper, we present the results of a comparative study of the two ambient displays. We discuss the gradual progress of persuasion supported by the ambient displays and the differences in users' perception affected by the different visualization styles. Finally, our empirical findings lead to a series of design implications for persuasive media.
General Aviation Flight Test of Advanced Operations Enabled by Synthetic Vision
NASA Technical Reports Server (NTRS)
Glaab, Louis J.; Hughes, Monica F.; Parrish, Russell V.; Takallu, Mohammad A.
2014-01-01
A flight test was performed to compare the use of three advanced primary flight and navigation display concepts to a baseline, round-dial concept to assess the potential for advanced operations. The displays were evaluated during visual and instrument approach procedures, including an advanced instrument approach resembling a visual airport traffic pattern. Nineteen pilots from three pilot groups, reflecting the diverse piloting skills of the General Aviation pilot population, served as evaluation subjects. The experiment had two thrusts: 1) an examination of the capabilities of low-time (i.e., <400 hours), non-instrument-rated pilots to perform nominal instrument approaches, and 2) an exploration of potential advanced Visual Meteorological Conditions (VMC)-like approaches in Instrument Meteorological Conditions (IMC). Within this context, advanced display concepts are considered to include integrated navigation and primary flight displays with either aircraft attitude flight directors or Highway In The Sky (HITS) guidance, with and without a synthetic depiction of the external visuals (i.e., synthetic vision). Relative to the first thrust, the results indicate that, using an advanced display concept as tested herein, low-time, non-instrument-rated pilots can exhibit flight-technical performance, subjective workload, and situation awareness ratings as good as or better than those of high-time Instrument Flight Rules (IFR)-rated pilots using Baseline Round Dials for a nominal IMC approach. For the second thrust, the results indicate that advanced VMC-like approaches are feasible in IMC for all pilot groups tested, but only with the Synthetic Vision System (SVS) advanced display concept.
The effects of task difficulty on visual search strategy in virtual 3D displays
Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa
2013-01-01
Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539
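The saccade-based measures described above (saccadic step size, x−y target distance) reduce to Euclidean computations over a sequence of fixation coordinates. A minimal sketch, assuming fixations are given as (x, y) tuples in the display plane; the function names and toy scanpath are illustrative, not taken from the study:

```python
import math

def saccadic_step_size(fixations):
    """Mean Euclidean distance between consecutive fixations."""
    steps = [math.dist(a, b) for a, b in zip(fixations, fixations[1:])]
    return sum(steps) / len(steps)

def xy_target_distance(fixation, target):
    """Distance from a fixation to the target in the x-y plane."""
    return math.dist(fixation, target)

# Toy scanpath: two saccades of length 5 each
scanpath = [(0, 0), (3, 4), (6, 8)]
print(saccadic_step_size(scanpath))        # 5.0
print(xy_target_distance((6, 8), (6, 8)))  # 0.0
```

Averaging such step sizes per trial, as a function of time from stimulus onset, is one way to quantify how systematic a search path is.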
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2006-01-01
The visual requirements for augmented reality or virtual environment displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14 deg, 28 deg, and 47 deg) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47 deg are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.
Vergence-accommodation conflicts hinder visual performance and cause visual fatigue.
Hoffman, David M; Girshick, Ahna R; Akeley, Kurt; Banks, Martin S
2008-03-28
Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues-accommodation and blur in the retinal image-specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one's ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays.
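The vergence-accommodation conflict described above is commonly quantified in diopters: accommodation is driven by the physical screen distance, vergence by the simulated depth, and the conflict is the difference of their reciprocals. A minimal sketch under that convention (the function name is illustrative):

```python
def va_conflict_diopters(screen_distance_m, simulated_distance_m):
    """Vergence-accommodation conflict in diopters: accommodation is
    cued by the physical display surface, vergence by the depicted
    depth; the mismatch is the difference of reciprocal distances."""
    return abs(1.0 / screen_distance_m - 1.0 / simulated_distance_m)

# Screen at 0.5 m (2 D) depicting an object at 1.0 m (1 D) -> 1 D conflict
print(va_conflict_diopters(0.5, 1.0))  # 1.0
```

A conventional stereoscopic display fixes the first term while the depicted scene varies the second; the multi-focal display described in the abstract aims to keep the two terms matched.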
A Cu²⁺-selective fluorescent chemosensor based on BODIPY with two pyridine ligands and logic gate.
Huang, Liuqian; Zhang, Jing; Yu, Xiaoxiu; Ma, Yifan; Huang, Tianjiao; Shen, Xi; Qiu, Huayu; He, Xingxing; Yin, Shouchun
2015-06-15
A novel near-infrared fluorescent chemosensor based on BODIPY (Py-1) has been synthesized and characterized. Py-1 displays high selectivity and sensitivity for sensing Cu(2+) over other metal ions in acetonitrile. Upon addition of Cu(2+) ions, the maximum absorption band of Py-1 in CH3CN displays a red shift from 603 to 608 nm, which results in a visual color change from pink to blue. When Py-1 is excited at 600 nm in the presence of Cu(2+), the fluorescent emission intensity of Py-1 at 617 nm is quenched over 86%. Notably, the complex of Py-1-Cu(2+) can be restored with the introduction of EDTA or S(2-). Consequently, an IMPLICATION logic gate at molecular level operating in fluorescence mode with Cu(2+) and S(2-) as chemical inputs can be constructed. Finally, based on the reversible and reproducible system, a nanoscale sequential memory unit displaying "Writing-Reading-Erasing-Reading" functions can be integrated. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Demir, I.
2014-12-01
Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g., flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system and demonstrates its capabilities for various visualization and interaction modes.
Vision improvement in pilots with presbyopia following perceptual learning.
Sterkin, Anna; Levy, Yuval; Pokroy, Russell; Lev, Maria; Levian, Liora; Doron, Ravid; Yehezkel, Oren; Fried, Moshe; Frenkel-Nir, Yael; Gordon, Barak; Polat, Uri
2017-11-24
Israeli Air Force (IAF) pilots continue flying combat missions after the symptoms of natural near-vision deterioration, termed presbyopia, begin to be noticeable. Because modern pilots rely on the displays of the aircraft control and performance instruments, near visual acuity (VA) is essential in the cockpit. We aimed to apply a method previously shown to improve the visual performance of presbyopes, and to test whether presbyopic IAF pilots can overcome the limitation imposed by presbyopia. Participants were selected by the IAF aeromedical unit as having at least initial presbyopia and trained using a structured personalized perceptual learning method (GlassesOff application), based on detecting briefly presented low-contrast Gabor stimuli under conditions of spatial and temporal constraints, from a distance of 40 cm. Our results show that despite their initial visual advantage over age-matched peers, training resulted in robust improvements in various basic visual functions, including static and temporal VA, stereoacuity, spatial crowding, contrast sensitivity, and contrast discrimination. Moreover, improvements generalized to higher-level tasks, such as sentence reading and aerial photography interpretation (specifically designed to reflect IAF pilots' expertise in analyzing noisy low-contrast input). In concert with earlier suggestions, gains in visual processing speed may account, at least partially, for the observed training-induced improvements. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Review of Visual Representations of Physiologic Data
2016-01-01
Background Physiological data are derived from electrodes attached directly to patients. Modern patient monitors are capable of sampling data at rates amounting to several million bits every hour. Hence the potential for cognitive threat arising from information overload and diminished situational awareness becomes increasingly relevant. A systematic review was conducted to identify novel visual representations of physiologic data that address cognitive, analytic, and monitoring requirements in critical care environments. Objective The aims of this review were to identify knowledge pertaining to (1) support for conveying event information via tri-event parameters; (2) identification of the use of visual variables across all physiologic representations; (3) aspects of effective design principles and methodology; (4) frequency of expert consultations; and (5) support for user engagement and identifying heuristics for future developments. Methods A review was completed of papers published as of August 2016. Titles were first collected and analyzed against inclusion criteria. Abstracts resulting from the first pass were then analyzed to produce a final set of full papers. Each full paper was passed through a data extraction form eliciting data for comparative analysis. Results In total, 39 full papers met all criteria and were selected for full review. Results revealed great diversity in visual representations of physiological data. Visual representations spanned 4 groups: tabular, graph-based, object-based, and metaphoric displays. The metaphoric display was the most popular (n=19), followed by waveform displays typical of the single-sensor-single-indicator paradigm (n=18), and finally object displays (n=9) that utilized spatiotemporal elements to highlight changes in physiologic status.
Results obtained from experiments and evaluations suggest that specifics related to the optimal use of visual variables, such as color, shape, size, and texture, have not been fully understood. Relationships between outcomes and the users' involvement in the design process also require further investigation. A very limited subset of visual representations (n=3) support interactive functionality for basic analysis, while only one display allows the user to perform analysis involving more than one patient. Conclusions Results from the review suggest positive outcomes when visual representations extend beyond typical waveform displays; however, numerous challenges remain. In particular, the challenge of extensibility limits applicability to certain subsets or locations, the challenge of interoperability limits expressiveness beyond physiologic data, and the challenge of instantaneity limits the extent of interactive user engagement. PMID:27872033
Real-world spatial regularities affect visual working memory for objects.
Kaiser, Daniel; Stein, Timo; Peelen, Marius V
2015-12-01
Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.
The effects of mental representation on performance in a navigation task
NASA Astrophysics Data System (ADS)
Barshi, Immanuel
Most aviation accidents and incidents are attributed to human error. Among the various kinds of human errors found in aviation, problems in communication constitute a large majority. The purpose of this study is to understand some of the cognitive factors influencing these misunderstandings so they can be prevented. Five experiments tested individuals' ability to follow verbal instructions pertaining to navigating in space. The experiments simulated the kinds of instructions pilots receive from air traffic controllers. All five experiments show the importance of the mental representation of the task over and above the short-term memory demands. The results of Experiment 1 show that the number of instructional units is a critical factor, rather than the number of words per unit. The results of Experiment 2 show that when moving in a three dimensional space, it does not matter whether movement is required along all three dimensions or along only two of the three dimensions. The results of Experiment 3 show that individuals perform much better when they have to maintain a two-dimensional mental representation than when they have to maintain a three-dimensional mental representation. What is more, it shows that even immediate verbatim recall is affected by the representation of the situation to which the language input applies. The results of Experiments 4 and 5 show that the two-dimensional advantage found in Experiment 3 is indeed an aspect of the mental representation, rather than a result of translating a visual display into a mental representation. These results also suggest that three units is the capacity limit of short-term memory. Thus, to minimize misunderstandings due to message length, air traffic controllers are advised to limit their messages to no more than three instructions at a time. In addition to ATC procedures, this research has practical implications for computer/visual displays, and for training environments.
Visual memory performance for color depends on spatiotemporal context.
Olivers, Christian N L; Schreij, Daniel
2014-10-01
Performance on visual short-term memory for features has been known to depend on stimulus complexity, spatial layout, and feature context. However, with few exceptions, memory capacity has been measured for abruptly appearing, single-instance displays. In everyday life, objects often have a spatiotemporal history as they or the observer move around. In three experiments, we investigated the effect of spatiotemporal history on explicit memory for color. Observers saw a memory display emerge from behind a wall, after which it disappeared again. The test display then emerged from either the same side as the memory display or the opposite side. In the first two experiments, memory improved for intermediate set sizes when the test display emerged in the same way as the memory display. A third experiment then showed that the benefit was tied to the original motion trajectory and not to the display object per se. The results indicate that memory for color is embedded in a richer episodic context that includes the spatiotemporal history of the display.
Is eye damage caused by stereoscopic displays?
NASA Astrophysics Data System (ADS)
Mayer, Udo; Neumann, Markus D.; Kubbat, Wolfgang; Landau, Kurt
2000-05-01
A normally developing child will achieve emmetropia in youth and maintain it; the cornea, lens, and axial length of the eye grow in an astonishingly coordinated way. In recent years, research has shown that this coordinated growth process is a visually controlled closed loop. The mechanism has been studied particularly in animals. It was found that the growth of the axial length of the eyeball is controlled by image focus information from the retina, and that maladjustment of this visually guided growth control mechanism can occur and result in ametropia. It has thereby been shown that short-sightedness, for example, is not caused only by heredity but can be acquired under certain visual conditions. These conditions are shown to be similar to those of viewing stereoscopic displays, where the normal accommodation-convergence coupling is disjoint. An evaluation is given of the potential for eye damage from viewing stereoscopic displays, and different viewing methods for stereoscopic displays are assessed in this regard. Moreover, guidance is given on how the environment and display conditions should be set, and which users should be chosen, to minimize the risk of eye damage.
Visual/motion cue mismatch in a coordinated roll maneuver
NASA Technical Reports Server (NTRS)
Shirachi, D. K.; Shirley, R. S.
1981-01-01
The effects of bandwidth differences between visual and motion cueing systems on pilot performance in a coordinated roll task were investigated. Visual and motion cue configurations that were acceptable, and the effects of reduced motion cue scaling on pilot performance, were studied to determine the scale reduction threshold at which pilot performance differed significantly from full-scale pilot performance. It is concluded that (1) the presence or absence of high-frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) the attenuation of motion scaling, while other display dynamic characteristics are held constant, affects pilot performance.
FPV: fast protein visualization using Java 3D.
Can, Tolga; Wang, Yujun; Wang, Yuan-Fang; Su, Jianwen
2003-05-22
Many tools have been developed to visualize protein structures. Tools based on Java 3D(TM) are compatible across different systems and can be run remotely through web browsers. However, using Java 3D for visualization has some performance issues. The primary concerns about molecular visualization tools based on Java 3D are that they are slow in terms of interaction speed and unable to load large molecules. This behavior is especially apparent when the number of atoms to be displayed is huge, or when several proteins are to be displayed simultaneously for comparison. In this paper we present techniques for organizing a Java 3D scene graph to tackle these problems. We have developed a protein visualization system based on Java 3D and these techniques. We demonstrate the effectiveness of the proposed method by comparing the visualization component of our system with two other Java 3D based molecular visualization tools. In particular, for the van der Waals display mode, with the efficient organization of the scene graph, we could achieve up to an eight-fold improvement in rendering speed and could load molecules three times as large as the previous systems could. FPV is freely available with source code at the following URL: http://www.cs.ucsb.edu/~tcan/fpv/
Lifting business process diagrams to 2.5 dimensions
NASA Astrophysics Data System (ADS)
Effinger, Philip; Spielmann, Johannes
2010-01-01
In this work, we describe our visualization approach for business processes using 2.5 dimensional techniques (2.5D). The idea of 2.5D is to add the concept of layering to a two dimensional (2D) visualization. The layers are arranged in a three-dimensional display space. For the modeling of the business processes, we use the Business Process Modeling Notation (BPMN). The benefit of connecting BPMN with a 2.5D visualization is not only to obtain a more abstract view on the business process models but also to develop layering criteria that eventually increase readability of the BPMN model compared to 2D. We present a 2.5D Navigator for BPMN models that offers different perspectives for visualization. Therefore we also develop BPMN specific perspectives. The 2.5D Navigator combines the 2.5D approach with perspectives and allows free navigation in the three dimensional display space. We also demonstrate our tool and libraries used for implementation of the visualizations. The underlying general framework for 2.5D visualizations is explored and presented in a fashion that it can easily be used for different applications. Finally, an evaluation of our navigation tool demonstrates that we can achieve satisfying and aesthetic displays of diagrams stating BPMN models in 2.5D-visualizations.
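The layering idea above can be made concrete: one plausible 2.5D criterion is to place each BPMN element on a z-plane determined by the pool (participant) it belongs to, so that each participant's portion of the process occupies its own layer in the 3D display space. A minimal sketch; the node and pool names are invented for illustration and are not from the paper:

```python
# Map BPMN-like nodes to 2.5D layers: the layering criterion here is the
# process pool each node belongs to, giving each pool its own z-plane.
LAYER_SPACING = 50.0  # distance between adjacent layers in display units

nodes = {
    "receive_order": "customer",
    "check_stock":   "warehouse",
    "ship_goods":    "warehouse",
    "send_invoice":  "accounting",
}

def assign_layers(node_to_pool):
    """Return a z-coordinate for each node, one layer per pool."""
    pools = sorted(set(node_to_pool.values()))
    z_of_pool = {pool: i * LAYER_SPACING for i, pool in enumerate(pools)}
    return {node: z_of_pool[pool] for node, pool in node_to_pool.items()}

print(assign_layers(nodes))
```

Other perspectives in the navigator could be sketched the same way by swapping the layering criterion (e.g., swimlane, subprocess depth) while keeping the 2D layout within each layer unchanged.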
Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.
Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas
2017-01-01
In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.
Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo
2018-07-01
Objective: To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Design: Prospective observational feasibility study. Setting: General wards of a London Community Hospital, United Kingdom. Patients: Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for the upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for the lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment the risk prediction of current scoring systems.
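The concordance statistic (C-index) reported for the regression model is, for a binary outcome such as ICU admission, the fraction of (admitted, not-admitted) patient pairs in which the admitted patient received the higher predicted risk, with ties counted as half. A minimal sketch of the computation, using toy risk scores rather than the study's data:

```python
def c_index(y_true, y_score):
    """Concordance statistic for a binary outcome (equivalent to ROC AUC):
    the fraction of (event, non-event) pairs where the event case gets
    the higher predicted risk; tied scores count as half-concordant."""
    pairs = concordant = 0.0
    for yi, si in zip(y_true, y_score):
        for yj, sj in zip(y_true, y_score):
            if yi == 1 and yj == 0:
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    concordant += 0.5
    return concordant / pairs

# Toy predicted risks: admissions (1) mostly scored above non-admissions (0)
print(c_index([1, 1, 0, 0], [0.9, 0.4, 0.4, 0.1]))  # 0.875
```

A C-index of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, so the reported 0.71 indicates moderate discriminative ability.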
View-Dependent Streamline Deformation and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Xin; Edwards, John; Chen, Chun-Ming
Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few risks losing important features. We propose a new streamline exploration approach that visually manipulates the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual cluttering when visualizing 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely.
A Selected Bibliography of On-Line Visual Displays and Their Applications.
ERIC Educational Resources Information Center
Braidwood, J.
Contained in this bibliography are 312 references relating to general principles and problems of information display, man-computer interaction, present and possible future display equipment, ergonomic aspects of display design, and current and potential applications, especially to information processing. (Author/MM)
Monkey pulvinar neurons fire differentially to snake postures.
Le, Quan Van; Isbell, Lynne A; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching to sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems.
Psycho-physiological effects of visual artifacts by stereoscopic display systems
NASA Astrophysics Data System (ADS)
Kim, Sanghyun; Yoshitake, Junki; Morikawa, Hiroyuki; Kawai, Takashi; Yamada, Osamu; Iguchi, Akihiko
2011-03-01
The methods available for delivering stereoscopic (3D) display using glasses can be classified as time-multiplexing and spatial-multiplexing. With both methods, intrinsic visual artifacts result from the generation of the 3D image pair on a flat panel display device. In the case of the time-multiplexing method, an observer perceives three artifacts: flicker, the Mach-Dvorak effect, and a phantom array. Flicker appears under all conditions, whereas the Mach-Dvorak effect occurs only during smooth pursuit eye movements (SPM) and a phantom array only during saccadic eye movements (saccades). With spatial-multiplexing, the artifacts are temporal-parallax (due to the interlaced video signal), binocular rivalry, and reduced spatial resolution. These artifacts are considered among the major impediments to the safety and comfort of 3D display users. In this study, the implications of the artifacts for safety and comfort are evaluated by examining the psychological changes they cause, through subjective symptoms of fatigue and the depth sensation. Physiological changes are also measured as objective responses, based on analysis of heart and brain activation elicited by the visual artifacts. Further, to understand the characteristics of each artifact and the combined effects of the artifacts, four experimental conditions are developed and tested. The results show that perception of artifacts differs according to the visual environment and the display method. Furthermore, visual fatigue and the depth sensation are influenced by the individual characteristics of each artifact. Similarly, heart rate variability and regional cerebral oxygenation changed with the perception of artifacts across conditions.
AWE: Aviation Weather Data Visualization Environment
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Lodha, Suresh K.; Norvig, Peter (Technical Monitor)
2000-01-01
Weather is one of the major causes of aviation accidents. General aviation (GA) flights account for 92% of all aviation accidents. In spite of all the official and unofficial sources of weather visualization tools available to pilots, there is an urgent need for visualizing several kinds of weather-related data tailored for general aviation pilots. Our system, the Aviation Weather Data Visualization Environment (AWE), presents graphical displays of meteorological observations, terminal area forecasts, and winds-aloft forecasts on a cartographic grid specific to the pilot's area of interest. Decisions regarding the graphical display and design are made based on careful consideration of user needs. The integrated visual display of these elements of weather reports is designed for use by GA pilots as a weather briefing and route selection tool. AWE links the weather information to the flight's path and schedule. The pilot can interact with the system to obtain aviation-specific weather for the entire area or for a specific route, explore what-if scenarios, and make "go/no-go" decisions. The system, as evaluated by pilots at NASA Ames Research Center, was found to be useful.
Pilot-Configurable Information on a Display Unit
NASA Technical Reports Server (NTRS)
Bell, Charles Frederick (Inventor); Ametsitsi, Julian (Inventor); Che, Tan Nhat (Inventor); Shafaat, Syed Tahir (Inventor)
2017-01-01
A small, thin display unit that can be installed in the flight deck for displaying only flight crew-selected tactical information needed for the task at hand. The flight crew can select the tactical information to be displayed by means of any conventional user interface. Whenever the flight crew selects tactical information for display, the system processes the request, periodically retrieving measured current values or computing current values for the requested tactical parameters and returning those values to the display unit for display.
2014-01-01
Background To validate the association between accommodation and visual asthenopia by measuring objective accommodative amplitude with the Optical Quality Analysis System (OQAS®, Visiometrics, Terrassa, Spain), and to investigate associations among accommodation, ocular surface instability, and visual asthenopia while viewing 3D displays. Methods Fifteen normal adults without any ocular disease or surgical history watched the same 3D and 2D displays for 30 minutes. Accommodative ability, ocular protection index (OPI), and total ocular symptom scores were evaluated before and after viewing the 3D and 2D displays. Accommodative ability was evaluated by the near point of accommodation (NPA) and OQAS to ensure reliability. The OPI was calculated by dividing the tear breakup time (TBUT) by the interblink interval (IBI). The changes in accommodative ability, OPI, and total ocular symptom scores after viewing 3D and 2D displays were evaluated. Results Accommodative ability evaluated by NPA and OQAS, OPI, and total ocular symptom scores changed significantly after 3D viewing (p = 0.005, 0.003, 0.006, and 0.003, respectively), but yielded no difference after 2D viewing. The objective measurement by OQAS verified the decrease of accommodative ability while viewing 3D displays. The change of NPA, OPI, and total ocular symptom scores after 3D viewing had a significant correlation (p < 0.05), implying direct associations among these factors. Conclusions The decrease of accommodative ability after 3D viewing was validated by both subjective and objective methods in our study. Further, the deterioration of accommodative ability and ocular surface stability may be causative factors of visual asthenopia in individuals viewing 3D displays. PMID:24612686
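The ocular protection index used in this study is a simple ratio of tear breakup time to interblink interval. A minimal sketch of the calculation (function and parameter names are ours, for illustration only):

```python
def ocular_protection_index(tbut_s, interblink_interval_s):
    """Ocular protection index: tear breakup time (TBUT) divided by the
    interblink interval (IBI), both in seconds. An OPI below 1 means the
    tear film breaks up before the next blink arrives."""
    if interblink_interval_s <= 0:
        raise ValueError("interblink interval must be positive")
    return tbut_s / interblink_interval_s

# Tear film breaks up after 4 s, but blinks arrive only every 5 s,
# leaving the ocular surface briefly unprotected between blinks.
opi = ocular_protection_index(4.0, 5.0)  # -> 0.8
```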
Keenan, Kevin G; Huddleston, Wendy E; Ernest, Bradley E
2017-11-01
The purpose of the study was to determine the visual strategies used by older adults during a pinch grip task and to assess the relations between visual strategy, deficits in attention, and increased force fluctuations in older adults. Eye movements of 23 older adults (>65 yr) were monitored during a low-force pinch grip task while subjects viewed three common visual feedback displays. Performance on the Grooved Pegboard test and an attention task (which required no concurrent hand movements) was also measured. Visual strategies varied across subjects and depended on the type of visual feedback provided to the subjects. First, while viewing a high-gain compensatory feedback display (horizontal bar moving up and down with force), 9 of 23 older subjects adopted a strategy of performing saccades during the task, which resulted in 2.5 times greater force fluctuations in those that exhibited saccades compared with those who maintained fixation near the target line. Second, during pursuit feedback displays (force trace moving left to right across the screen and up and down with force), all subjects exhibited multiple saccades, and increased force fluctuations were associated (r_s = 0.6; P = 0.002) with fewer saccades during the pursuit task. Also, decreased low-frequency (<4 Hz) force fluctuations and Grooved Pegboard times were significantly related (P = 0.033 and P = 0.005, respectively) to higher (i.e., better) attention z scores. Comparison of these results with our previously published results in young subjects indicates that saccadic eye movements and attention are related to force control in older adults. NEW & NOTEWORTHY The significant contributions of the study are the addition of eye movement data and an attention task to explain differences in hand motor control across different visual displays in older adults. Older participants used different visual strategies across varying feedback displays, and saccadic eye movements were related with motor performance.
In addition, those older individuals with deficits in attention had impaired motor performance on two different hand motor control tasks, including the Grooved Pegboard test. Copyright © 2017 the American Physiological Society.
A Probabilistic Clustering Theory of the Organization of Visual Short-Term Memory
ERIC Educational Resources Information Center
Orhan, A. Emin; Jacobs, Robert A.
2013-01-01
Experimental evidence suggests that the content of a memory for even a simple display encoded in visual short-term memory (VSTM) can be very complex. VSTM uses organizational processes that make the representation of an item dependent on the feature values of all displayed items as well as on these items' representations. Here, we develop a…
ERIC Educational Resources Information Center
Demeyere, Nele; Humphreys, Glyn W.
2007-01-01
Evidence is presented for 2 modes of attention operating in simultanagnosia. The authors examined visual enumeration in a patient, GK, who has severe impairments in serially scanning across a scene and is unable to count the numbers of items in visual displays. However, GK's ability to judge the relative magnitude of 2 displays was consistently…
Response Grids: Practical Ways to Display Large Data Sets with High Visual Impact
ERIC Educational Resources Information Center
Gates, Simon
2013-01-01
Spreadsheets are useful for large data sets but they may be too wide or too long to print as conventional tables. Response grids offer solutions to the challenges posed by any large data set. They have wide application throughout science and for every subject and context where visual data displays are designed, within education and elsewhere.…
Acquisition of L2 Japanese Geminates: Training with Waveform Displays
ERIC Educational Resources Information Center
Motohashi-Saigo, Miki; Hardison, Debra M.
2009-01-01
The value of waveform displays as visual feedback was explored in a training study involving perception and production of L2 Japanese by beginning-level L1 English learners. A pretest-posttest design compared auditory-visual (AV) and auditory-only (A-only) Web-based training. Stimuli were singleton and geminate /t,k,s/ followed by /a,u/ in two…
Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping
ERIC Educational Resources Information Center
McDougall, Sine; Tyrer, Victoria; Folkard, Simon
2006-01-01
Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Kohn, Silvia
1993-01-01
The pilot's ability to derive Control-Oriented Visual Field Information from teleoperated Helmet-Mounted Displays in Nap-of-the-Earth flight is investigated. The visual field with these types of displays, commonly used in Apache and Cobra helicopter night operations, originates from a relatively narrow field-of-view Forward-Looking Infrared (FLIR) camera, gimbal-mounted at the nose of the aircraft and slaved to the pilot's line of sight in order to obtain a wide-angle field of regard. Pilots have encountered considerable difficulties in controlling the aircraft by these devices. Experimental simulator results presented here indicate that part of these difficulties can be attributed to head/camera slaving system phase lags and errors. In the presence of voluntary head rotation, these slaving system imperfections are shown to impair the Control-Oriented Visual Field Information vital in vehicular control, such as the perception of the anticipated flight path or the vehicle yaw rate. Since, in the presence of slaving system imperfections, the pilot will tend to minimize head rotation, the full wide-angle field of regard of the line-of-sight-slaved Helmet-Mounted Display is not always fully utilized.
Cartographic symbol library considering symbol relations based on anti-aliasing graphic library
NASA Astrophysics Data System (ADS)
Mei, Yang; Li, Lin
2007-06-01
Cartographic visualization represents geographic information in map form, enabling the retrieval of useful geospatial information. In the digital environment, a cartographic symbol library is the basis of cartographic visualization and an essential component of a Geographic Information System as well. Existing cartographic symbol libraries have two flaws: one concerns display quality, the other the adjustment of symbol relations. Statistical data presented in this paper indicate that aliasing is a major factor degrading symbol display quality on graphic display devices. Effective graphic anti-aliasing methods based on a new anti-aliasing algorithm are therefore presented and encapsulated in an anti-aliasing graphic library in the form of a Component Object Model. Furthermore, cartographic visualization should represent feature relations by correctly adjusting symbol relations, in addition to displaying individual features, but current cartographic symbol libraries lack this capability. This paper creates a cartographic symbol design model to implement symbol-relation adjustment. Consequently, a cartographic symbol library based on this design model can provide cartographic visualization with relation-adjusting capability. The anti-aliasing graphic library and the cartographic symbol library were sampled, and the results prove that both libraries offer better efficiency and visual quality.
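The paper's specific anti-aliasing algorithm is not spelled out in the abstract, but the general idea behind anti-aliased symbol rendering can be sketched with distance-based edge coverage: a pixel's alpha falls off with its distance from the stroke being drawn, softening the jagged edges that cause aliasing. A hypothetical illustration, not the paper's method:

```python
import math

def line_alpha(px, py, x0, y0, x1, y1, half_width=0.5):
    """Approximate coverage of pixel (px, py) by a stroke of the given
    half-width: fully opaque inside the half-width, fading linearly to
    zero one pixel beyond the edge."""
    dx, dy = x1 - x0, y1 - y0
    length_sq = dx * dx + dy * dy
    if length_sq == 0.0:
        t = 0.0  # degenerate segment: measure distance to its single point
    else:
        # Parameter of the closest point on the segment to the pixel center.
        t = max(0.0, min(1.0, ((px - x0) * dx + (py - y0) * dy) / length_sq))
    dist = math.hypot(px - (x0 + t * dx), py - (y0 + t * dy))
    return max(0.0, min(1.0, 1.0 - (dist - half_width)))

# Pixels on the stroke are opaque; pixels one unit away are half-covered.
on_line = line_alpha(0.5, 0.0, 0.0, 0.0, 1.0, 0.0)  # -> 1.0
near = line_alpha(0.5, 1.0, 0.0, 0.0, 1.0, 0.0)     # -> 0.5
```

Blending each symbol fragment into the framebuffer with such an alpha, rather than writing hard on/off pixels, is what removes the staircase artifacts on curved cartographic symbols.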
Exploring virtual worlds with head-mounted displays
NASA Astrophysics Data System (ADS)
Chung, James C.; Harris, Mark R.; Brooks, F. P.; Fuchs, Henry; Kelley, Michael T.
1989-02-01
Research has been conducted in the use of simple head-mounted displays in real-world applications. Such units provide the user with non-holographic true 3-D information, since the kinetic depth effect, stereoscopy, and other visual cues combine to immerse the user in a virtual world which behaves like the real world in some respects. UNC's head-mounted display was built inexpensively from commercially available off-the-shelf components. Tracking of the user's head position and orientation is performed by a Polhemus Navigation Sciences 3SPACE tracker. The host computer uses the tracking information to generate updated images corresponding to the user's new left-eye and right-eye views. The images are broadcast to two liquid crystal television screens (220x320 pixels) mounted on a horizontal shelf at the user's forehead. The user views these color screens through half-silvered mirrors, enabling the computer-generated image to be superimposed upon the user's real physical environment. The head-mounted display was incorporated into existing molecular and architectural applications being developed at UNC. In molecular structure studies, chemists are presented with a room-sized molecule with which they can interact in a manner more intuitive than that provided by conventional 2-D displays and dial boxes. Walking around and through the large molecule may provide quicker understanding of its structure, and such problems as drug-enzyme docking may be approached with greater insight.
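Generating left-eye and right-eye views from head tracking reduces, at its core, to offsetting the tracked head position by half the interpupillary distance along the head's right axis, then rendering the scene once from each viewpoint. A simplified top-down (2D) sketch with assumed conventions (yaw measured from the +x axis, default IPD of 64 mm); the real system uses full 6-DOF pose data:

```python
import math

def eye_positions(head_x, head_y, yaw_rad, ipd=0.064):
    """Left/right eye viewpoints from a tracked head position and yaw
    (top-down 2D view; a full 3D version adds the unchanged height)."""
    # Head's right axis: the forward vector (cos, sin) rotated -90 degrees.
    rx, ry = math.sin(yaw_rad), -math.cos(yaw_rad)
    half = ipd / 2.0
    left = (head_x - rx * half, head_y - ry * half)
    right = (head_x + rx * half, head_y + ry * half)
    return left, right

# Head at the origin facing along +x: eyes sit 32 mm to either side.
left_eye, right_eye = eye_positions(0.0, 0.0, 0.0)
```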
Gestural Communication With Accelerometer-Based Input Devices and Tactile Displays
2008-12-01
and natural terrain obstructions, or concealment often impede visual communication attempts. To overcome some of these issues, “daisy-chaining” or...the intended recipients. Moreover, visual communication demands a focus on the visual modality possibly distracting a receiving soldier’s visual
Neural Mechanisms Underlying Visual Short-Term Memory Gain for Temporally Distinct Objects.
Ihssen, Niklas; Linden, David E J; Miller, Claire E; Shapiro, Kimron L
2015-08-01
Recent research has shown that visual short-term memory (VSTM) can be substantially improved when the to-be-remembered objects are split into 2 half-arrays (i.e., sequenced) or the entire array is shown twice (i.e., repeated), rather than presented simultaneously. Here we investigate the hypothesis that sequencing and repeating displays overcomes attentional "bottlenecks" during simultaneous encoding. Using functional magnetic resonance imaging, we show that sequencing and repeating displays increased brain activation in extrastriate and primary visual areas, relative to simultaneous displays (Study 1). Passively viewing identical stimuli did not increase visual activation (Study 2), ruling out a physical confound. Importantly, areas of the frontoparietal attention network showed increased activation in repetition but not in sequential trials. This dissociation suggests that repeating a display increases attentional control by allowing attention to be reallocated in a second encoding episode. In contrast, sequencing the array poses fewer demands on control, with competition from nonattended objects being reduced by the half-arrays. This idea was corroborated by a third study in which we found optimal VSTM for sequential displays minimizing attentional demands. Importantly, these results provide support within the same experimental paradigm for the role of stimulus-driven and top-down attentional control aspects of biased competition theory in setting constraints on VSTM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
ATLAS event display: Virtual Point-1 visualization software
NASA Astrophysics Data System (ADS)
Seeley, Kaelyn; Dimond, David; Bianchi, R. M.; Boudreau, Joseph; Hong, Tae Min; Atlas Collaboration
2017-01-01
Virtual Point-1 (VP1) is an event display visualization software for the ATLAS Experiment. VP1 is a software framework that makes use of ATHENA, the ATLAS software infrastructure, to access the complete detector geometry. This information is used to draw graphics representing the components of the detector at any scale. Two new features are added to VP1. The first is a traditional "lego" plot, displaying the calorimeter energy deposits in eta-phi space. The second is another lego plot focusing on the forward endcap region, displaying the energy deposits in r-phi space. Currently, these new additions display the energy deposits based on the granularity of the middle layer of the liquid-argon electromagnetic calorimeter. Since VP1 accesses the complete detector geometry and all experimental data, future developments are outlined for a more detailed display involving multiple layers of the calorimeter along with their distinct granularities.
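A lego plot of this kind is essentially a weighted 2D histogram of energy deposits over eta-phi cells, with each cell's summed energy drawn as a tower height. A minimal sketch of the binning step (the cell counts, eta range, and data layout are illustrative assumptions, not VP1's actual calorimeter geometry):

```python
import numpy as np

def lego_bins(eta, phi, energy, n_eta=50, n_phi=64):
    """Sum energy deposits into an eta-phi grid; each cell's total is the
    height of one lego-plot tower."""
    hist, eta_edges, phi_edges = np.histogram2d(
        eta, phi, bins=[n_eta, n_phi],
        range=[[-2.5, 2.5], [-np.pi, np.pi]],  # assumed eta coverage, full phi
        weights=energy)
    return hist, eta_edges, phi_edges

# Three deposits; the first two land in the same cell and are summed.
eta = np.array([0.0, 0.01, 1.2])
phi = np.array([0.5, 0.5, -2.0])
energy = np.array([10.0, 5.0, 3.0])
grid, _, _ = lego_bins(eta, phi, energy)  # tallest tower sums to 15.0
```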
Software Aids In Graphical Depiction Of Flow Data
NASA Technical Reports Server (NTRS)
Stegeman, J. D.
1995-01-01
Interactive Data Display System (IDDS) computer program is graphical-display program designed to assist in visualization of three-dimensional flow in turbomachinery. Grid and simulation data files in PLOT3D format required for input. Able to unwrap volumetric data cone associated with centrifugal compressor and display results in easy-to-understand two- or three-dimensional plots. IDDS provides majority of visualization and analysis capability for Integrated Computational Fluid Dynamics and Experiment (ICE) system. IDDS invoked from any subsystem, or used as stand-alone package of display software. Generates contour, vector, shaded, x-y, and carpet plots. Written in C language. Input file format used by IDDS is that of PLOT3D (COSMIC item ARC-12782).
Guedry, F E; Benson, A J; Moore, H J
1982-06-01
Visual search within a head-fixed display consisting of a 12 x 12 digit matrix is degraded by whole-body angular oscillation at 0.02 Hz (+/- 155 degrees/s peak velocity), and signs and symptoms of motion sickness are prominent in a number of individuals within a 5-min exposure. Exposure to 2.5 Hz (+/- 20 degrees/s peak velocity) produces equivalent degradation of the visual search task, but does not produce signs and symptoms of motion sickness within a 5-min exposure.
A distributed analysis and visualization system for model and observational data
NASA Technical Reports Server (NTRS)
Wilhelmson, Robert B.
1994-01-01
Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules, for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK-related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer, including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html.
Splatterplots: overcoming overdraw in scatter plots.
Mayorga, Adrian; Gleicher, Michael
2013-09-01
We introduce Splatterplots, a novel presentation of scattered data that enables visualizations that scale beyond standard scatter plots. Traditional scatter plots suffer from overdraw (overlapping glyphs) as the number of points per unit area increases. Overdraw obscures outliers, hides data distributions, and makes the relationship among subgroups of the data difficult to discern. To address these issues, Splatterplots abstract away information such that the density of data shown in any unit of screen space is bounded, while allowing continuous zoom to reveal abstracted details. Abstraction automatically groups dense data points into contours and samples remaining points. We combine techniques for abstraction with perceptually based color blending to reveal the relationship between data subgroups. The resulting visualizations represent the dense regions of each subgroup of the data set as smooth closed shapes and show representative outliers explicitly. We present techniques that leverage the GPU for Splatterplot computation and rendering, enabling interaction with massive data sets. We show how Splatterplots can be an effective alternative to traditional methods of displaying scatter data, communicating data trends, outliers, and data set relationships much like traditional scatter plots, but scaling to data sets of higher density and up to millions of points on the screen.
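The central idea, bounding the density of data shown per unit of screen space, can be sketched with a 2D histogram: cells above a density cap form the dense region (rendered as smooth filled contours in Splatterplots), while the remaining points are subsampled as explicit, representative outliers. An illustrative CPU sketch under our own parameter choices, not the paper's GPU implementation:

```python
import numpy as np

def split_dense_and_outliers(x, y, bins=64, density_cap=10,
                             max_outliers=100, seed=0):
    """Partition scatter points into a boolean dense-cell mask and a
    bounded sample of outlier point indices."""
    hist, xe, ye = np.histogram2d(x, y, bins=bins)
    dense = hist > density_cap                     # cells exceeding the cap
    # Locate each point's cell and test membership in the dense region.
    xi = np.clip(np.digitize(x, xe) - 1, 0, bins - 1)
    yi = np.clip(np.digitize(y, ye) - 1, 0, bins - 1)
    outliers = np.flatnonzero(~dense[xi, yi])
    # Subsample so the number of explicitly drawn glyphs stays bounded.
    if outliers.size > max_outliers:
        rng = np.random.default_rng(seed)
        outliers = rng.choice(outliers, size=max_outliers, replace=False)
    return dense, outliers

# A tight cluster (drawn as a contour) plus scattered outlier points.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.1, 5000), rng.uniform(-3, 3, 20)])
y = np.concatenate([rng.normal(0, 0.1, 5000), rng.uniform(-3, 3, 20)])
dense, outliers = split_dense_and_outliers(x, y)
```

The paper's actual pipeline additionally smooths the density field and blends per-subgroup colors perceptually; this sketch only shows the bounding-and-sampling idea.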
Volumetric 3D display using a DLP projection engine
NASA Astrophysics Data System (ADS)
Geng, Jason
2012-03-01
In this article, we describe a volumetric 3D display system based on a high-speed DLP (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system, enabling it to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.
ComVisMD - compact visualization of multidimensional data: experimenting with cricket players data
NASA Astrophysics Data System (ADS)
Dandin, Shridhar B.; Ducassé, Mireille
2018-03-01
Database information is multidimensional and often displayed in tabular format (row/column display). Presented in aggregated form, multidimensional data can be used to analyze records or objects. Online Analytical Processing (OLAP) proposes mechanisms to display multidimensional data in aggregated forms. A choropleth map is a thematic map in which areas are colored in proportion to the measurement of the statistical variable being displayed, such as population density; such maps are used mostly for compact graphical representation of geographical information. We propose a system, ComVisMD, inspired by the choropleth map and the OLAP cube, to visualize multidimensional data in a compact way. ComVisMD displays multidimensional data like an OLAP cube, mapping an attribute a (first dimension, e.g. year started playing cricket) to vertical position, coloring objects based on attribute b (second dimension, e.g. batting average), drawing varying-size circles based on attribute c (third dimension, e.g. highest score), and printing numbers based on attribute d (fourth dimension, e.g. matches played). We illustrate our approach on cricket players' data, namely on two tables, Country and Player, which have a large number of rows and columns: 246 rows and 17 columns for the players of one country. ComVisMD's visualization reduces the size of the tabular display by a factor of about 4, allowing users to grasp more information at a time than the bare table display.
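The four attribute mappings described above (vertical position, color, circle size, printed number) amount to normalizing raw values into display parameters. A small sketch of that mapping step; the attribute names, value ranges, and 10-step color ramp are our own illustrative assumptions, not ComVisMD's actual encoding:

```python
def compact_cell(year_started, batting_avg, highest_score, matches,
                 avg_range=(0.0, 60.0), score_range=(0, 400)):
    """Map one player's four attributes to display parameters for a
    compact cell: row, color-ramp index, circle radius, and label."""
    def norm(value, lo, hi):
        # Clamp to [0, 1] so out-of-range values still render sensibly.
        return max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return {
        "row": year_started,                                  # dimension a
        "color": round(norm(batting_avg, *avg_range) * 9),    # b on a 10-step ramp
        "radius": 2 + 8 * norm(highest_score, *score_range),  # c as 2-10 px circle
        "label": str(matches),                                # d printed inside
    }

# A hypothetical player record rendered as one compact cell.
cell = compact_cell(year_started=1989, batting_avg=44.8,
                    highest_score=248, matches=200)
```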
Computer vision syndrome: A review.
Gowrisankaran, Sowjanya; Sheedy, James E
2015-01-01
Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors, and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision), and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are environmental (improper lighting, display position, and viewing distance) and/or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders, and tear film abnormalities). Although the factors associated with CVS have been identified, the physiological mechanisms that underlie it are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and the symptoms associated with the use of hand-held and stereoscopic displays.
Event Display for the Visualization of CMS Events
NASA Astrophysics Data System (ADS)
Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.
2011-12-01
During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program, which was designed to be directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user feedback, and user interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation, simple plugins are registered into the system to perform conversion from EDM collections into their visual representations, which are then managed by the application. Full event navigation and filtering, as well as collection-level filtering, are supported. The same data-extraction principle can also be applied when Fireworks eventually operates as a service within the full software framework.
Heading perception in patients with advanced retinitis pigmentosa
NASA Technical Reports Server (NTRS)
Li, Li; Peli, Eli; Warren, William H.
2002-01-01
PURPOSE: We investigated whether retinitis pigmentosa (RP) patients with a residual visual field of < 100 degrees could perceive heading from optic flow. METHODS: Four RP patients and four age-matched normally sighted control subjects viewed displays simulating an observer walking over a ground. In experiment 1, subjects viewed either the entire display with free fixation (full-field condition) or through an aperture with a fixation point at the center (aperture condition). In experiment 2, patients viewed displays of different durations. RESULTS: RP patients' performance was comparable to that of the age-matched control subjects: heading judgment was better in the full-field condition than in the aperture condition. Increasing display duration from 0.5 s to 1 s improved patients' heading performance, but giving them more time (3 s) to gather more visual information did not consistently further improve their performance. CONCLUSIONS: RP patients use active scanning eye movements to compensate for their visual field loss in heading perception; they might be able to gather sufficient optic flow information for heading perception in about 1 s.
Visualizing Sound: Demonstrations to Teach Acoustic Concepts
NASA Astrophysics Data System (ADS)
Rennoll, Valerie
Interference, a phenomenon in which two sound waves superpose to form a resultant wave of greater or lower amplitude, is a key concept when learning about the physics of sound waves. Typical interference demonstrations involve students listening for changes in sound level as they move throughout a room. Here, new tools that provide a visual component are developed to teach this concept, allowing individuals to see changes in sound level on a light display. This is accomplished using a microcontroller that analyzes sound levels collected by a microphone and displays the sound level in real time on an LED strip. The light display is placed on a sliding rail between two speakers to show the interference occurring between two sound waves. When a long-exposure photograph is taken of the light display being slid from one end of the rail to the other, a wave of the interference pattern can be captured. By providing a visual component, these tools will help students and the general public to better understand interference, a key concept in acoustics.
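A minimal sketch of the microcontroller's processing chain, assuming a block of normalized audio samples, an invented -60 dB display floor, and a 30-LED strip:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_db(samples, ref=1.0):
    """Sound level in dB relative to a full-scale reference amplitude."""
    r = max(rms(samples), 1e-12)   # avoid log(0) during silence
    return 20.0 * math.log10(r / ref)

def leds_lit(samples, n_leds=30, floor_db=-60.0):
    """Map the level onto an LED bar: floor_db -> 0 LEDs, 0 dB -> all LEDs."""
    db = max(level_db(samples), floor_db)
    return round((db - floor_db) / -floor_db * n_leds)
```

Sliding the strip between the speakers then traces `leds_lit` along the standing-wave pattern, which is what the long-exposure photograph records.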
Vergence–accommodation conflicts hinder visual performance and cause visual fatigue
Hoffman, David M.; Girshick, Ahna R.; Akeley, Kurt; Banks, Martin S.
2010-01-01
Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues—accommodation and blur in the retinal image—specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one’s ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays. PMID:18484839
Randolph, Susan A
2017-07-01
With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.
Ivanova, Maria V.; Hallowell, Brooke
2017-01-01
Purpose Language comprehension in people with aphasia (PWA) is frequently evaluated using multiple-choice displays: PWA are asked to choose the image that best corresponds to the verbal stimulus in a display. When a nontarget image is selected, comprehension failure is assumed. However, stimulus-driven factors unrelated to linguistic comprehension may influence performance. In this study we explore the influence of physical image characteristics of multiple-choice image displays on visual attention allocation by PWA. Method Eye fixations of 41 PWA were recorded while they viewed 40 multiple-choice image sets presented with and without verbal stimuli. Within each display, 3 images (majority images) were the same and 1 (singleton image) differed in terms of 1 image characteristic. The mean proportion of fixation duration (PFD) allocated across majority images was compared against the PFD allocated to singleton images. Results PWA allocated significantly greater PFD to the singleton than to the majority images in both nonverbal and verbal conditions. Those with greater severity of comprehension deficits allocated greater PFD to nontarget singleton images in the verbal condition. Conclusion When using tasks that rely on multiple-choice displays and verbal stimuli, one cannot assume that verbal stimuli will override the effect of visual-stimulus characteristics. PMID:28520866
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
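The core idea, splitting a magnitude into scientific notation so that the mantissa and exponent can be drawn as two separate glyphs, can be sketched as follows (the glyph mapping itself is omitted; this is not the authors' code):

```python
import math

def split_magnitude(m):
    """Split a positive magnitude into (digit, exponent) so that
    m == digit * 10**exponent with 1 <= digit < 10, as in scientific
    notation. A SplitVectors-style display would then encode digit and
    exponent as two separate visual glyphs."""
    if m <= 0:
        raise ValueError("magnitude must be positive")
    exponent = math.floor(math.log10(m))
    digit = m / 10 ** exponent
    return digit, exponent

d, e = split_magnitude(340000.0)   # 3.4 * 10**5
```

Because each glyph only ever spans one decade, legibility no longer degrades across a large magnitude range, which is the effect the study measures against linear and logarithmic mappings.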
NASA Astrophysics Data System (ADS)
Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo
2012-02-01
As more and more CT/MR studies scan larger volumes of data, more and more radiologists and clinicians would like to use a PACS workstation to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy to develop a 3D image display component not only with normal 3D display functions but also with multi-modal medical image fusion as well as computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. Users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low hardware requirements, easy integration, reliable performance, and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work at a PACS display workstation at any time.
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
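A simplified sketch of the resolution-dependent quantization idea. The step sizes and base step below are invented for illustration; the paper derives visually lossless thresholds psychophysically and embeds them via codestream layering rather than this naive bitplane count:

```python
import math

# Invented step sizes for illustration: the visually lossless quantization
# step for each wavelet level grows as the display resolution shrinks.
VISUALLY_LOSSLESS_STEP = {
    # (wavelet_level, display_reduction) -> tolerable step size
    (0, 0): 0.5, (0, 1): 1.0,
    (1, 0): 0.8, (1, 1): 1.6,
    (2, 0): 1.2, (2, 1): 2.4,
}

def bitplanes_dropped(level, reduction, base_step=0.01):
    """Coarse count of least-significant bitplanes that can be omitted
    while the effective step stays at or below the visually lossless
    threshold; each dropped bitplane doubles the effective step."""
    target = VISUALLY_LOSSLESS_STEP[(level, reduction)]
    return max(int(math.floor(math.log2(target / base_step))), 0)
```

The point the example makes is directional: at a reduced display resolution the tolerable step doubles, so an extra bitplane can be withheld, which is where the JPIP bandwidth saving comes from.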
Toward Head-Up and Head-Worn Displays for Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Arthur, Jarvis J.; Bailey, Randall E.; Shelton, Kevin J.; Kramer, Lynda J.; Jones, Denise R.; Williams, Steven P.; Harrison, Stephanie J.; Ellis, Kyle K.
2015-01-01
A key capability envisioned for the future air transportation system is the concept of equivalent visual operations (EVO). EVO is the capability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. Enhanced Flight Vision Systems (EFVS) offer a path to achieve EVO. NASA has successfully tested EFVS for commercial flight operations, helping to establish the technical merits of EFVS, without reliance on natural vision, at runways lacking Category II/III ground-based navigation and lighting. The research has tested EFVS for operations with both Head-Up Displays (HUDs) and "HUD equivalent" Head-Worn Displays (HWDs). The paper describes the EVO concept and representative NASA EFVS research demonstrating the potential of these technologies to safely conduct operations in visibilities as low as 1000 feet Runway Visual Range (RVR). Future directions are described, including efforts to enable low-visibility approach, landing, and roll-out using EFVS under conditions as low as 300 feet RVR.
NASA Astrophysics Data System (ADS)
Garcia-Belmonte, Germà
2017-06-01
Spatial visualization is a well-established topic of education research that has allowed improving science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations, as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments and the image must be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking, and scientific instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific, culturally favored view of time. It is suggested that difficulties in interpreting time-varying signals are linked to the existing dual perception of conflicting time metaphors. Activating a specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I identify two ways of understanding time that students encounter along different learning trajectories. Interestingly, specific displaying instruments belonging to different cultural traditions incorporate contrasting time views.
One of them sees time in terms of a dynamic metaphor: a static observer watching events pass by. This general and widespread practice, common in contemporary mass culture, lies behind the process of making sense of moving images, usually visualized by means of movie shots. In contrast, scientific culture has favored another conceptualization of time (the static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, represented on the Cartesian plane, into displaying instruments. Both types of culture, scientific and mass, are highly technological in the sense that complex instruments, apparatus, or machines participate in their visual practices.
The social computing room: a multi-purpose collaborative visualization environment
NASA Astrophysics Data System (ADS)
Borland, David; Conway, Michael; Coposky, Jason; Ginn, Warren; Idaszak, Ray
2010-01-01
The Social Computing Room (SCR) is a novel collaborative visualization environment for viewing and interacting with large amounts of visual data. The SCR consists of a square room with 12 projectors (3 per wall) used to display a single 360-degree desktop environment that provides a large physical real estate for arranging visual information. The SCR was designed to be cost-effective, collaborative, configurable, widely applicable, and approachable for naive users. Because the SCR displays a single desktop, a wide range of applications is easily supported, making it possible for a variety of disciplines to take advantage of the room. We provide a technical overview of the room and highlight its application to scientific visualization, arts and humanities projects, research group meetings, and virtual worlds, among other uses.
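Routing a single wide desktop across the room's 12 projectors can be sketched as a simple index computation; the equal-width, left-to-right layout and the 1024-pixel projector width below are assumptions for illustration, not the SCR's actual configuration:

```python
# The SCR tiles one 360-degree desktop across 4 walls with 3 projectors
# each (12 total). Locate which wall/projector shows a desktop pixel,
# assuming equal projector widths and 0 <= x < desktop_width.

def locate(x, desktop_width, walls=4, projectors_per_wall=3):
    """Return (wall, projector) indices for desktop x coordinate."""
    per_projector = desktop_width / (walls * projectors_per_wall)
    index = int(x // per_projector)
    return index // projectors_per_wall, index % projectors_per_wall

# e.g., with a 12288-pixel-wide desktop (12 x 1024):
wall, proj = locate(5000, 12288)
```

Because the mapping is just a partition of one desktop, any unmodified application window can land on any wall, which is what makes the room approachable for naive users.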
Quality metrics for sensor images
NASA Technical Reports Server (NTRS)
Ahumada, AL
1993-01-01
Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. 
Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
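A toy stand-in for such a discrimination metric, using an RMS luminance difference in place of a full visual-processing front end and an assumed Weibull psychometric function (the 2AFC chance level of 0.5 and the JND scaling are illustrative, not the FLM group's model):

```python
import math

def rms_difference(a, b):
    """Root-mean-square difference between two equal-size images
    (flat lists of luminance values) -- a crude stand-in for the output
    of a full visual discrimination model."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def p_discriminate(a, b, jnd=1.0):
    """Probability a viewer reports A != B, via a Weibull psychometric
    function: 0.5 (chance) for identical images, rising toward 1 as the
    difference grows; jnd sets the scale (hypothetical)."""
    d = rms_difference(a, b) / jnd
    return 1.0 - 0.5 * math.exp(-(d ** 2))
```

With A the intended image and B the displayed one, `p_discriminate(a, b)` plays the role of the image quality metric described above; the signal-uncertainty and latency extensions the passage calls for are not captured here.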
Morais, Maurício; Campello, Maria P C; Xavier, Catarina; Heemskerk, Johannes; Correia, João D G; Lahoutte, Tony; Caveliers, Vicky; Hernot, Sophie; Santos, Isabel
2014-11-19
Current methods for sentinel lymph node (SLN) mapping involve the use of radioactivity detection with technetium-99m sulfur colloid and/or visually guided identification using a blue dye. To overcome the kinetic variations of two individual imaging agents through the lymphatic system, we report herein on two multifunctional macromolecules, 5a and 6a, that contain a radionuclide ((99m)Tc or (68)Ga) and a near-infrared (NIR) reporter for pre- and/or intraoperative SLN mapping by nuclear and NIR optical imaging techniques. Both bimodal probes are dextran-based polymers (10 kDa) functionalized with pyrazole-diamine (Pz) or 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelating units for labeling with fac-[(99m)Tc(CO)3](+) or (68)Ga(III), respectively, mannose units for receptor targeting, and NIR fluorophore units for optical imaging. The probes allowed a clear visualization of the popliteal node by single-photon emission computed tomography (SPECT/CT) or positron emission tomography (PET/CT), as well as real-time optically guided excision. Biodistribution studies confirmed that both macromolecules present a significant accumulation in the popliteal node (5a: 3.87 ± 0.63% IA/organ; 6a: 1.04 ± 0.26% IA/organ), with minimal spread to other organs. The multifunctional nanoplatforms display a popliteal extraction efficiency >90%, highlighting their potential to be further explored as dual imaging agents.
2003-02-01
Ververs and Wickens, 1998; Wickens and Long, 1995; Yeh, Wickens, and Seagull, 1999) showed that some tasks do not allow for the near domain (symbology…Wickens, C. D., and Seagull, F. J. (1999). Target cueing in visual search: The effects of conformal and display location on the allocation of visual attention. Human Factors, 41(4), 524-542.
ERIC Educational Resources Information Center
Wilkinson, Krista M.; O'Neill, Tara; McIlvane, William J.
2014-01-01
Purpose: Many individuals with communication impairments use aided augmentative and alternative communication (AAC) systems involving letters, words, or line drawings that rely on the visual modality. It seems reasonable to suggest that display design should incorporate information about how users attend to and process visual information. The…
Encoding strategies in self-initiated visual working memory.
Magen, Hagit; Berger-Mandelbaum, Anat
2018-06-11
During a typical day, visual working memory (VWM) is recruited to temporarily maintain visual information. Although individuals often memorize external visual information provided to them, on many other occasions they memorize information they have constructed themselves. The latter aspect of memory, which we term self-initiated WM, is prevalent in everyday behavior but has largely been overlooked in the research literature. In the present study we employed a modified change detection task in which participants constructed the displays they memorized, by selecting three or four abstract shapes or real-world objects and placing them at three or four locations in a circular display of eight locations. Half of the trials included identical targets that participants could select. The results demonstrated consistent strategies across participants. To enhance memory performance, participants reported selecting abstract shapes they could verbalize, but they preferred real-world objects with distinct visual features. Furthermore, participants constructed structured memory displays, most frequently based on the Gestalt organization cue of symmetry, and to a lesser extent on cues of proximity and similarity. When identical items were selected, participants mostly placed them in close proximity, demonstrating the construction of configurations based on the interaction between several Gestalt cues. The present results are consistent with recent findings in VWM, showing that memory for visual displays based on Gestalt organization cues can benefit VWM, suggesting that individuals have access to metacognitive knowledge on the benefit of structure in VWM. More generally, this study demonstrates how individuals interact with the world by actively structuring their surroundings to enhance performance.
NASA Astrophysics Data System (ADS)
Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz
1998-04-01
An improved generation of our 'FELIX 3D Display' is presented. The system is compact, light, modular, and easy to transport. The created volumetric images consist of many voxels, generated in a half-sphere display volume; in this way a spatial object can be displayed occupying a physical space with height, width, and depth. The new FELIX generation uses a screen rotating at 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism, making it possible to use appropriate screens for the specific purpose of the display. An acousto-optic deflection unit with an integrated small diode-pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are being investigated. The first is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook; the developed software provides a graphical user interface enabling several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high-speed hardware interface suitable for state-of-the-art fast, high-resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include imaging in the fields of air traffic control, medical imaging, computer-aided design, and science, as well as entertainment.
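Scheduling a voxel on a rotating-screen display of this kind reduces, in simplified form, to converting Cartesian coordinates to the screen's cylindrical frame and to a firing time within the 1/20 s revolution. This is a sketch under idealized geometry, not the FELIX deflection electronics:

```python
import math

REV_PER_S = 20.0   # screen rotation rate, from the system description

def voxel_schedule(x, y, z):
    """Convert a Cartesian voxel position to (angle, radius, height) and
    the time within one revolution when the screen sweeps through the
    voxel's angle, i.e. when the laser must fire."""
    theta = math.atan2(y, x) % (2 * math.pi)   # screen angle for this voxel
    r = math.hypot(x, y)                       # radial deflection on the screen
    t = theta / (2 * math.pi) / REV_PER_S      # seconds after the zero-angle mark
    return theta, r, z, t

theta, r, z, t = voxel_schedule(0.0, 1.0, 0.5)
```

At 10,000 voxels per 50 ms revolution, the deflection unit has on the order of 5 microseconds per voxel, which is why the text emphasizes high-speed interfaces and fast scanning devices.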
NASA Technical Reports Server (NTRS)
Post, R. B.; Welch, R. B.
1996-01-01
Visually perceived eye level (VPEL) was measured while subjects viewed two vertical lines which were either upright or pitched about the horizontal axis. In separate conditions, the display consisted of a relatively large pair of lines viewed at a distance of 1 m, or a display scaled to one third the dimensions and viewed at a distance of either 1 m or 33.3 cm. The small display viewed at 33.3 cm produced a retinal image the same size as that of the large display at 1 m. Pitching all three displays top-toward and top-away from the observer caused upward and downward VPEL shifts, respectively. These effects were highly similar for the large display and the small display viewed at 33.3 cm (i.e., equal retinal size), but were significantly smaller for the small display viewed at 1 m. In a second experiment, perceived size of the three displays was measured and found to be highly accurate. The results of the two experiments indicate that the effect of optical pitch on VPEL depends on the retinal image size of stimuli rather than on perceived size.
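The equal-retinal-size manipulation follows directly from the visual angle formula: a one-third-size display at one third the distance subtends the same angle as the full-size display at 1 m. The 30-unit display width below is illustrative, not a value from the study:

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle subtended by an object of a given size at a given
    viewing distance (both in the same units)."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

large = visual_angle_deg(30.0, 100.0)           # large display at 1 m
small_near = visual_angle_deg(10.0, 100.0 / 3)  # 1/3-size display at 33.3 cm
small_far = visual_angle_deg(10.0, 100.0)       # 1/3-size display at 1 m
```

The first two conditions match retinal image size exactly, while the third subtends a third of the angle, which is the contrast the VPEL results turn on.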
An Electronic Pressure Profile Display system for aeronautic test facilities
NASA Technical Reports Server (NTRS)
Woike, Mark R.
1990-01-01
The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high-resolution graphics monitors and will replace the manometer banks currently used in aeronautic test facilities. The system consists of an industrial-type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high-resolution graphics monitor in bar graph format. Software was developed to accomplish these tasks and also to draw facility diagrams as background information on the displays. Data transfer between the host computer and the DPT unit is done with serial communications. Up to 64 channels are displayed with a one-second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
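The counts-to-engineering-units conversion and bar graph rendering can be sketched as below. The 16-bit count range and 50 psi full scale are assumed calibration values for illustration, not taken from the paper:

```python
# Sketch of the host computer's per-channel processing path.

def to_engineering_units(counts, counts_full_scale=65535, psi_full_scale=50.0):
    """Convert a raw transmitter reading to pounds per square inch,
    assuming a linear calibration."""
    return counts / counts_full_scale * psi_full_scale

def bar(psi, psi_full_scale=50.0, width=40):
    """Render one channel as a text bar graph line."""
    filled = round(psi / psi_full_scale * width)
    return "#" * filled + "." * (width - filled)

psi = to_engineering_units(32768)   # mid-scale reading
line = bar(psi)
```

Repeating this for each of the 64 channels once per second reproduces, in outline, the bar graph screen that replaces a manometer bank.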
NASA Astrophysics Data System (ADS)
Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.
2017-10-01
ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView achieves its speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. We describe connecting ParaView to the Fermilab art framework and discuss the capabilities this brings.
View-Dependent Streamline Deformation and Exploration
Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R.; Wong, Pak Chung
2016-01-01
Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely. PMID:26600061
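The lens-style deformation described above can be sketched as a screen-space displacement: points of occluding streamlines inside the lens are pushed radially away from its center, with the displacement fading to zero at the lens boundary (the "transition zone"). The linear falloff and the radii below are illustrative assumptions, not the paper's actual deformation algorithm.

```python
import math

# Sketch: push a 2D screen-space point away from a lens center,
# with displacement fading linearly to zero at the lens boundary.
# Profile and parameters are illustrative assumptions.

def deform_point(p, lens_center, lens_radius, strength=0.5):
    """Push a 2D screen-space point radially away from the lens center."""
    dx, dy = p[0] - lens_center[0], p[1] - lens_center[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d >= lens_radius:
        return p                      # outside the transition zone
    falloff = 1.0 - d / lens_radius   # linear fade toward the boundary
    scale = 1.0 + strength * falloff
    return (lens_center[0] + dx * scale, lens_center[1] + dy * scale)

# A point halfway inside the lens is displaced outward;
# a point outside the lens is untouched.
print(deform_point((1.0, 0.0), (0.0, 0.0), 2.0))
print(deform_point((3.0, 0.0), (0.0, 0.0), 2.0))
```

Applying this to every visible streamline vertex pulls apart the occluding layers while leaving geometry outside the lens intact, which is the core of the view-dependent exploration idea.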
Computation and visualization of uncertainty in surgical navigation.
Simpson, Amber L; Ma, Burton; Vasarhelyi, Edward M; Borschneck, Dan P; Ellis, Randy E; James Stewart, A
2014-09-01
Surgical displays do not show uncertainty information with respect to the position and orientation of instruments. Data is presented as though it were perfect; surgeons unaware of this uncertainty could make critical navigational mistakes. The propagation of uncertainty to the tip of a surgical instrument is described and a novel uncertainty visualization method is proposed. An extensive study with surgeons has examined the effect of uncertainty visualization on surgical performance with pedicle screw insertion, a procedure highly sensitive to uncertain data. It is shown that surgical performance (time to insert screw, degree of breach of pedicle, and rotation error) is not impeded by the additional cognitive burden imposed by uncertainty visualization. Uncertainty can be computed in real time and visualized without adversely affecting surgical performance, and the best method of uncertainty visualization may depend upon the type of navigation display. Copyright © 2013 John Wiley & Sons, Ltd.
Change blindness and visual memory: visual representations get rich and act poor.
Varakin, D Alexander; Levin, Daniel T
2006-02-01
Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L; Hanrahan, Patrick
2015-03-03
A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.
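The shelf-to-pane mapping in the abstract can be sketched as follows: each combination of a columns-shelf value and a rows-shelf value yields one pane, whose axes are defined by those values. The dictionary representation and field values are illustrative, not the patent's actual data structures.

```python
# Sketch of the shelf-to-pane mapping: one pane per combination of a
# columns-shelf value and a rows-shelf value, with the pane's x- and
# y-axes defined by those values. Representation is illustrative.

def build_panes(column_values, row_values):
    """Return a pane grid keyed by (column value, row value)."""
    return {
        (c, r): {"x_axis": c, "y_axis": r}
        for c in column_values
        for r in row_values
    }

panes = build_panes(["2014", "2015"], ["East", "West"])
print(len(panes))   # one pane per column/row combination
```

A hierarchical operand (e.g. Year → Quarter) would simply expand into more column values before this cross product is taken.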
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick
2015-11-10
A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.
Chen, Yi-Ching; Lin, Yen-Ting; Chang, Gwo-Ching; Hwang, Ing-Shiou
2017-01-01
The detection of error information is an essential prerequisite of feedback-based movement. This study investigated the differential behavior and neurophysiological mechanisms of a cyclic force-tracking task using error-reducing and error-enhancing feedback. The discharge patterns of a relatively large number of motor units (MUs) were assessed with custom-designed multi-channel surface electromyography following mathematical decomposition of the experimentally measured signals. Force characteristics, force-discharge relation, and phase-locked cortical activities in the contralateral motor cortex to individual MUs were contrasted among the low (LSF), normal (NSF), and high scaling factor (HSF) conditions, in which the sizes of online execution errors were displayed with various amplification ratios. Along with a spectral shift of the force output toward a lower band, force output with more phase lead became less irregular, and tracking accuracy was worse in the LSF condition than in the HSF condition. The coherent discharge of high-phasic (HP) MUs with the target signal was greater, and inter-spike intervals were larger, in the LSF condition than in the HSF condition. Force-tracking in the LSF condition was manifested by stronger phase-locked EEG activity in the contralateral motor cortex to the discharge of HP MUs (LSF > NSF, HSF). The coherent discharge of HP MUs during cyclic force-tracking predominated in the force-discharge relation, which increased inversely with the error scaling factor. In conclusion, the size of the visualized error gates motor unit discharge, the force-discharge relation, and the relative influences of the feedback and feedforward processes on force control. A smaller visualized error size favors voluntary force control using a feedforward process, in relation to a selective central modulation that enhances the coherent discharge of HP MUs. PMID:28348530
Visual Merchandising through Display: Advertising Services Occupations.
ERIC Educational Resources Information Center
Maurer, Nelson S.
The increasing use of displays by businessmen is creating a demand for display workers. This demand may be met by preparing high school students to enter the field of display. Additional workers might be recruited by offering adult training programs for individuals working within the stores. For this purpose a curriculum guide has been developed…
Predicted Weather Display and Decision Support Interface for Flight Deck
NASA Technical Reports Server (NTRS)
Johnson, Walter W. (Inventor); Wong, Dominic G. (Inventor); Koteskey, Robert W. (Inventor); Wu, Shu-Chieh (Inventor)
2017-01-01
A system and method for providing visual depictions of a predictive weather forecast for en-route vehicle trajectory planning. The method includes displaying weather information on a graphical display, displaying vehicle position information on the graphical display, selecting a predictive interval, displaying predictive weather information for the predictive interval on the graphical display, and displaying predictive vehicle position information for the predictive interval on the graphical display, such that the predictive vehicle position information is displayed relative to the predictive weather information, for en-route trajectory planning.
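The "predictive vehicle position" in the abstract can be sketched as simple dead reckoning over the selected interval, so that the extrapolated position can be drawn against the weather forecast for the same time. Constant ground speed, constant track, and the flat-earth degree-per-minute units below are simplifying assumptions, not the patent's method.

```python
import math

# Sketch of the predictive-interval idea: extrapolate the vehicle's
# position over the selected interval so it can be overlaid on the
# forecast weather for that same time. Constant speed and heading,
# and flat-earth degree units, are simplifying assumptions.

def predicted_position(lat, lon, speed_deg_per_min, heading_deg, interval_min):
    """Dead-reckon a future position over the predictive interval."""
    rad = math.radians(heading_deg)
    return (lat + speed_deg_per_min * interval_min * math.cos(rad),
            lon + speed_deg_per_min * interval_min * math.sin(rad))

# Heading due north for 10 minutes moves only the latitude.
print(predicted_position(40.0, -75.0, 0.05, 0.0, 10.0))
```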
Janosik, Elzbieta; Grzesik, Jan
2003-01-01
The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal lighting levels at VDT workstations. For two kinds of job (entry of figures from a typescript and editing of text displayed on the screen), work capacity, the degree of visual strain, and the operators' subjective symptoms were determined for four lighting levels (200, 300, 500 and 750 lx). It was found that work at VDT workstations may overload the visual system and cause eye complaints as well as reduced accommodation or convergence strength. It was also noted that editing text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the examination results showed that lighting at VDT workstations should be higher than 200 lx, and that 300 lx makes work conditions most comfortable during the entry of figures from a typescript, and 500 lx during the editing of text displayed on the screen.
Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.
Vicente, Natalin S; Halloy, Monique
2017-12-01
Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.
Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R
2018-05-16
A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird (Molothrus ater) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).
Monkey Pulvinar Neurons Fire Differentially to Snake Postures
Le, Quan Van; Isbell, Lynne A.; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S.; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, by responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching to sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all of the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems. PMID:25479158
Visual task performance using a monocular see-through head-mounted display (HMD) while walking.
Mustonen, Terhi; Berg, Mikko; Kaistinen, Jyrki; Kawai, Takashi; Häkkinen, Jukka
2013-12-01
A monocular see-through head-mounted display (HMD) allows the user to view displayed information while simultaneously interacting with the surrounding environment. This configuration lets people use HMDs while they are moving, such as while walking. However, sharing attention between the display and environment can compromise a person's performance in any ongoing task, and controlling one's gait may add further challenges. In this study, the authors investigated how the requirements of HMD-administered visual tasks altered users' performance while they were walking. Twenty-four university students completed 3 cognitive tasks (high- and low-working memory load, visual vigilance) on an HMD while seated and while simultaneously performing a paced walking task in a controlled environment. The results show that paced walking worsened performance (d', reaction time) in all HMD-administered tasks, but visual vigilance deteriorated more than memory performance. The HMD-administered tasks also worsened walking performance (speed, path overruns) in a manner that varied according to the overall demands of the task. These results suggest that people's ability to process information displayed on an HMD may worsen while they are in motion. Furthermore, the use of an HMD can critically alter a person's natural performance, such as their ability to guide and control their gait. In particular, visual tasks that involve constant monitoring of the HMD should be avoided. These findings highlight the need for careful consideration of the type and difficulty of information that can be presented through HMDs while still letting the user achieve an acceptable overall level of performance in various contexts of use. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Toward Head-Worn Displays for Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence (Lance) J., III; Arthur, Jarvis J. (Trey); Bailey, Randall E.; Jones, Denise R.; Williams, Steven P.; Harrison, Stephanie J.
2015-01-01
The Next Generation Air Transportation System represents an envisioned transformation to the U.S. air transportation system that includes an "equivalent visual operations" (EVO) concept, intended to achieve the safety and operational tempos of Visual Flight Rules (VFR) operations independent of visibility conditions. Today, Federal Aviation Administration regulations provide for the use of an Enhanced Flight Visual System (EFVS) as "operational credit" to conduct approach operations below traditional minima otherwise prohibited. An essential element of an EFVS is the Head-Up Display (HUD). NASA has conducted a substantial amount of research investigating the use of HUDs for operational landing "credit", and current efforts are underway to enable manually flown operations as low as 1000 feet Runway Visual Range (RVR). Title 14 CFR 91.175 describes the use of EFVS and the operational credit that may be obtained with airplane equipage of a HUD combined with Enhanced Vision (EV) while also offering the potential use of an "equivalent" display in lieu of the HUD. A Head-Worn Display (HWD) is postulated to provide the same, or better, safety and operational benefits as current HUD-equipped aircraft but for potentially more aircraft and for lower cost. A high-fidelity simulation was conducted that examined the efficacy of HWDs as "equivalent" displays. Twelve airline flight crews conducted 1000 feet RVR approach and 300 feet RVR departure operations using either a HUD or HWD, both with simulated Forward Looking Infra-Red cameras. The paper shall describe (a) quantitative and qualitative results, (b) a comparative evaluation of these findings with prior NASA HUD studies, and (c) current research efforts for EFVS to provide for a comprehensive EVO capability.
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.
1994-01-01
The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.
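The precision ordering on data objects can be sketched with a small approximation test: a missing value approximates everything, a wider interval approximates any value it contains, and an exact value approximates only itself. This is an illustrative reduction of the VIS-AD lattice idea, not its full data model.

```python
# Sketch of the precision ordering on data objects: MISSING is the
# least precise element, an interval approximates anything it
# contains, and an exact value approximates only itself. This is an
# illustrative reduction of the lattice idea, not the VIS-AD model.

MISSING = None

def approximates(a, b):
    """True if data object a is below b in the precision ordering."""
    if a is MISSING:
        return True                            # missing data: least precise
    if isinstance(a, tuple) and isinstance(b, tuple):
        return a[0] <= b[0] and b[1] <= a[1]   # wider interval is below
    if isinstance(a, tuple):
        return a[0] <= b <= a[1]               # interval approximates a point
    return a == b                              # exact values: equal only

print(approximates((0.0, 10.0), 3.5))   # an interval containing the point
print(approximates(3.5, 4.0))           # distinct exact values
```

In this framing, a visualization process is a function from such data objects to display objects that respects the ordering: less precise data must not produce a more committal picture.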
A spatio-temporal model of the human observer for use in display design
NASA Astrophysics Data System (ADS)
Bosman, Dick
1989-08-01
A "quick look" visual model, a kind of standard observer in software, is being developed to estimate the appearance of new display designs before prototypes are built. It operates on images also stored in software. It is assumed that the majority of display design flaws and technology artefacts can be identified in representations of early visual processing, and insight obtained into very local to global (supra-threshold) brightness distributions. Cognitive aspects are not considered because it seems that poor acceptance of technology and design is only weakly coupled to image content.
The Role of Color in Search Templates for Real-world Target Objects.
Nako, Rebecca; Smith, Tim J; Eimer, Martin
2016-11-01
During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.
Wade, Nicholas J
2008-01-01
The art of visual communication is not restricted to the fine arts. Scientists also apply art in communicating their ideas graphically. Diagrams of anatomical structures, like the eye and visual pathways, and figures displaying specific visual phenomena have assisted in the communication of visual ideas for centuries. It is often the case that the development of a discipline can be traced through graphical representations and this is explored here in the context of concepts of visual science. As with any science, vision can be subdivided in a variety of ways. The classification adopted is in terms of optics, anatomy, and visual phenomena; each of these can in turn be further subdivided. Optics can be considered in terms of the nature of light and its transmission through the eye. Understanding of the gross anatomy of the eye and visual pathways was initially dependent upon the skills of the anatomist whereas microanatomy relied to a large extent on the instruments that could resolve cellular detail, allied to the observational skills of the microscopist. Visual phenomena could often be displayed on the printed page, although novel instruments expanded the scope of seeing, particularly in the nineteenth century.
A multi-mode manipulator display system for controlling remote robotic systems
NASA Technical Reports Server (NTRS)
Massimino, Michael J.; Meschler, Michael F.; Rodriguez, Alberto A.
1994-01-01
The objective and contribution of the research presented in this paper is to provide a Multi-Mode Manipulator Display System (MMDS) to assist a human operator with the control of remote manipulator systems. Such systems include space-based manipulators such as the space shuttle remote manipulator system (SRMS) and future ground-controlled teleoperated and telescience space systems. The MMDS contains a number of display modes and submodes which display position control cues and position data in graphical formats, based primarily on manipulator position and joint angle data. The MMDS is therefore not dependent on visual information for input and can assist the operator especially when visual feedback is inadequate. This paper provides descriptions of the new modes and experimental results to date.
Comparative evaluation of monocular augmented-reality display for surgical microscopes.
Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N
2012-01-01
Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.
ERIC Educational Resources Information Center
Olivers, Christian N. L.
2009-01-01
An important question is whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. Some past research has indicated that they do: Singleton distractors interfered more strongly with a visual search task when they…
ERIC Educational Resources Information Center
Washington County Public Schools, Washington, PA.
Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…
NASA Technical Reports Server (NTRS)
1977-01-01
A preliminary design for a helicopter/VSTOL wide angle simulator image generation display system is studied. The visual system is to become part of a simulator capability to support Army aviation systems research and development within the near term. As required for the Army to simulate a wide range of aircraft characteristics, versatility and ease of changing cockpit configurations were primary considerations of the study. Due to the Army's interest in low altitude flight and descents into and landing in constrained areas, particular emphasis is given to wide field of view, resolution, brightness, contrast, and color. The visual display study includes a preliminary design, demonstrated feasibility of advanced concepts, and a plan for subsequent detail design and development. Analysis and tradeoff considerations for various visual system elements are outlined and discussed.
Similarities in human visual and declared measures of preference for opposite-sex faces.
Griffey, Jack A F; Little, Anthony C
2014-01-01
Facial appearance in humans is associated with attraction and mate choice. Numerous studies have identified that adults display directional preferences for certain facial traits including symmetry, averageness, and sexually dimorphic traits. Typically, studies measuring human preference for these traits examine declared (e.g., choice or ratings of attractiveness) or visual preferences (e.g., looking time) of participants. However, the extent to which visual and declared preferences correspond remains relatively untested. In order to evaluate the relationship between these measures we examined visual and declared preferences displayed by men and women for opposite-sex faces manipulated across three dimensions (symmetry, averageness, and masculinity) and compared preferences from each method. Results indicated that participants displayed significant visual and declared preferences for symmetrical, average, and appropriately sexually dimorphic faces. We also found that declared and visual preferences correlated weakly but significantly. These data indicate that visual and declared preferences for manipulated facial stimuli produce similar directional preferences across participants and are also correlated with one another within participants. Both methods therefore may be considered appropriate to measure human preferences. However, while both methods appear likely to generate similar patterns of preference at the sample level, the weak nature of the correlation between visual and declared preferences in our data suggests some caution in assuming visual preferences are the same as declared preferences at the individual level. Because there are positive and negative factors in both methods for measuring preference, we suggest that a combined approach is most useful in outlining population level preferences for traits.
Visual acuity and visual skills in Malaysian children with learning disabilities
Muzaliha, Mohd-Nor; Nurhamiza, Buang; Hussein, Adil; Norabibas, Abdul-Rani; Mohd-Hisham-Basrun, Jaafar; Sarimah, Abdullah; Leo, Seo-Wei; Shatriah, Ismail
2012-01-01
Background: There is limited data in the literature concerning the visual status and skills of children with learning disabilities, particularly within the Asian population. This study aimed to determine visual acuity and visual skills in children with learning disabilities in primary schools within the suburban Kota Bharu district in Malaysia. Methods: We examined 1010 children with learning disabilities aged 8–12 years from 40 primary schools in the Kota Bharu district, Malaysia, from January 2009 to March 2010. These children were identified based on their performance in a screening test known as the Early Intervention Class for Reading and Writing Screening Test conducted by the Ministry of Education, Malaysia. All subjects underwent complete ocular examinations and visual skills assessment, including near point of convergence, amplitude of accommodation, accommodative facility, convergence break and recovery, divergence break and recovery, and developmental eye movement tests. Results: A total of 4.8% of students had visual acuity worse than 6/12 (20/40), 14.0% had convergence insufficiency, 28.3% displayed poor accommodative amplitude, and 26.0% showed signs of accommodative infacility. A total of 12.1% of the students had poor convergence break, 45.7% displayed poor convergence recovery, 37.4% showed poor divergence break, and 66.3% were noted to have poor divergence recovery. The mean horizontal developmental eye movement time was significantly prolonged. Conclusion: Although their visual acuity was satisfactory, nearly 30% of the children displayed accommodative and vergence problems, including convergence insufficiency, poor accommodative amplitude, and accommodative infacility. Convergence and divergence recovery were the most affected visual skills in children with learning disabilities in Malaysia. PMID:23055674
Kesavachandran, C; Rastogi, S K; Das, Mohan; Khan, Asif M
2006-07-01
Workers in information technology (IT)-enabled services such as business process outsourcing and call centers, who work with visual display units, are reported to have various health and psycho-social disorders. Evidence from previously published studies in peer-reviewed journals and internet sources was examined to explore health disorders and psycho-social problems among personnel employed in IT-based services, for a systematic review on the topic. In addition, the authors conducted a questionnaire-based pilot study. Both the available literature and the pilot study suggest health disorders and psychosocial problems among workers in business process outsourcing. The details are discussed in the review.
Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E
2011-04-29
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.
Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.
Laha, Bireswar; Bowman, Doug A; Socha, John J
2014-04-01
Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.
Website Designs for Communicating About Chemicals in Cigarette Smoke.
Lazard, Allison J; Byron, M Justin; Vu, Huyen; Peters, Ellen; Schmidt, Annie; Brewer, Noel T
2017-12-13
The Family Smoking Prevention and Tobacco Control Act requires the US government to inform the public about the quantities of toxic chemicals in cigarette smoke. A website can accomplish this task efficiently, but the site's user interface must be usable to benefit the general public. We conducted online experiments with national convenience samples of 1,451 US adult smokers and nonsmokers to examine the impact of four interface display elements: the chemicals, their associated health effects, quantity information, and a visual risk indicator. Outcomes were perceptions of user experience (perceived clarity and usability), motivation (willingness to use), and potential impact (elaboration about the harms of smoking). We found displaying health effects as text with icons, providing quantity information for chemicals (e.g., ranges), and showing a visual risk indicator all improved the user experience of a webpage about chemicals in cigarette smoke (all p < .05). Displaying a combination of familiar and unfamiliar chemicals, providing quantity information for chemicals, and showing a visual risk indicator all improved motivation to use the webpage (all p < .05). Displaying health effects or quantity information increased the potential impact of the webpage (all p < .05). Overall, interface designs displaying health effects of chemicals in cigarette smoke as text with icons and with a visual risk indicator had the greatest impact on the user experience, motivation, and potential impact of the website. Our findings provide guidance for accessible website designs that can inform consumers about the toxic chemicals in cigarette smoke.
Virtual reality: a reality for future military pilotage?
NASA Astrophysics Data System (ADS)
McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.
2009-05-01
Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
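The megapixel figures above follow from simple angular arithmetic: 20/20 acuity corresponds to roughly 1 arcminute per pixel, i.e. 60 pixels per degree. The sketch below reproduces the calculation under illustrative assumptions (the per-eye field of view used here is an assumption, not a figure from the paper):

```python
import math

ARCMIN = math.radians(1 / 60)  # one arcminute, in radians

def hmd_pixels(h_fov_deg, v_fov_deg, px_per_deg=60):
    """Planar estimate of pixels needed to cover an HMD field of view
    at 20/20 acuity (about 1 arcmin per pixel, i.e. 60 px/deg)."""
    return (h_fov_deg * px_per_deg) * (v_fov_deg * px_per_deg)

def full_sphere_pixels():
    """Pixels needed to tile the entire 4*pi-steradian visual surround
    at 1 arcmin per pixel (each pixel subtends ARCMIN**2 steradians)."""
    return 4 * math.pi / ARCMIN ** 2
```

With an assumed 120 x 100 degree per-eye field this gives about 43 MP, and tiling the full sphere at the same density works out to roughly 149 MP, in line with the orders of magnitude cited for an HMD and an immersive CAVE system.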
Immersive Visualization of the Solid Earth
NASA Astrophysics Data System (ADS)
Kreylos, O.; Kellogg, L. H.
2017-12-01
Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and does therefore not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain, and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis. 
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.
Guidelines for the Use of Color in ATC Displays
DOT National Transportation Integrated Search
1999-06-01
Color is probably the most effective, compelling, and attractive method available for coding visual information on a display. However, caution must be used in the application of color to displays for air traffic control (ATC), because it is easy to d...
The role of lightness, hue and saturation in feature-based visual attention.
Stuart, Geoffrey W; Barsdell, Wendy N; Day, Ross H
2014-03-01
Visual attention is used to select part of the visual array for higher-level processing. Visual selection can be based on spatial location, but it has also been demonstrated that multiple locations can be selected simultaneously on the basis of a visual feature such as color. One task that has been used to demonstrate feature-based attention is the judgement of the symmetry of simple four-color displays. In a typical task, when symmetry is violated, four squares on either side of the display do not match. When four colors are involved, symmetry judgements are made more quickly than when only two of the four colors are involved. This indicates that symmetry judgements are made one color at a time. Previous studies have confounded lightness, hue, and saturation when defining the colors used in such displays. In three experiments, symmetry was defined by lightness alone, lightness plus hue, or by hue or saturation alone, with lightness levels randomised. The difference between judgements of two- and four-color asymmetry was maintained, showing that hue and saturation can provide the sole basis for feature-based attentional selection.
Sirota, Miroslav; Kostovičová, Lenka; Juanchich, Marie
2014-08-01
Knowing which properties of visual displays facilitate statistical reasoning bears practical and theoretical implications. Therefore, we studied the effect of one property of visual displays - iconicity (i.e., the resemblance of a visual sign to its referent) - on Bayesian reasoning. The two main accounts of statistical reasoning predict different effects of iconicity on Bayesian reasoning. The ecological-rationality account predicts a positive iconicity effect, because more highly iconic signs resemble more individuated objects, which tap better into an evolutionarily designed frequency-coding mechanism that, in turn, facilitates Bayesian reasoning. The nested-sets account predicts a null iconicity effect, because iconicity does not affect the salience of a nested-sets structure - the factor that facilitates Bayesian reasoning, processed by a general reasoning mechanism. In two well-powered experiments (N = 577), we found no support for a positive iconicity effect across different iconicity levels that were manipulated in different visual displays (meta-analytical overall effect: log OR = -0.13, 95% CI [-0.53, 0.28]). A Bayes factor analysis provided strong evidence in favor of the null hypothesis - the null iconicity effect. Thus, these findings corroborate the nested-sets rather than the ecological-rationality account of statistical reasoning.
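The Bayesian reasoning tasks studied in such experiments typically ask participants for a posterior probability of the following kind (a generic textbook example with the classic mammography-problem numbers, not the stimuli from this study):

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive evidence) via Bayes' rule.
    Example numbers below follow the classic mammography problem:
    1% base rate, 80% sensitivity, 9.6% false-positive rate."""
    p_evidence = prior * sensitivity + (1 - prior) * false_positive_rate
    return prior * sensitivity / p_evidence
```

Here `bayes_posterior(0.01, 0.80, 0.096)` is about 0.078: fewer than 8% of positive tests indicate the condition, the counterintuitive result that such visual displays are meant to make transparent.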
Robertson, Kayela; Schmitter-Edgecombe, Maureen
2017-01-01
Impairments in attention following traumatic brain injury (TBI) can significantly impact recovery and rehabilitation effectiveness. This study investigated the multi-faceted construct of selective attention following TBI, highlighting the differences on visual nonsearch (focused attention) and search (divided attention) tasks. Participants were 30 individuals with moderate to severe TBI who were tested acutely (i.e., following emergence from post-traumatic amnesia) and 30 age- and education-matched controls. Participants were presented with visual displays that contained either two or eight items. In the focused attention (nonsearch) condition, the location of the target (if present) was cued with a peripheral arrow prior to presentation of the visual displays. In the divided attention (search) condition, no spatial cue was provided prior to presentation of the visual displays. The results revealed intact focused (nonsearch) attention abilities in the acute phase of TBI recovery. In contrast, when no spatial cue was provided (divided attention condition), participants with TBI demonstrated slower visual search compared to the control group. The results of this study suggest that capitalizing on intact focused attention abilities by allocating attention during cognitively demanding tasks may help to reduce mental workload and improve rehabilitation effectiveness.
Assessing GPS Constellation Resiliency in an Urban Canyon Environment
2015-03-26
Taipei, Taiwan as his area of interest. His GPS constellation is modeled in the Satellite Toolkit (STK), where augmentation satellites can be added and...interaction. SEAS also provides a visual display of the simulation which is useful for verification and debugging portions of the analysis. Furthermore...entire system. Interpreting the model is aided by the visual display of the agents moving in the region of interest. Furthermore, SEAS collects
Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses
NASA Astrophysics Data System (ADS)
Murphy, Christian E.
2018-05-01
Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy for displaying the nature of uncertainty, as an effective and efficient visualization depends not only on the spatial data feature type but, above all, on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point-based uncertainty symbolization. The user can intuitively perceive the centers of gravity and the major orientation of the point arrays, as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore, it is shown how applicable an adapted design of the error ellipse is for displaying the uncertainty of point features originating from incomplete data. The suitability of the error ellipse for displaying the uncertainty of point information is demonstrated in two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
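For a covariance-based error ellipse of the kind adapted here, the semi-axes and orientation come from the eigendecomposition of the 2 x 2 covariance matrix. The following is a minimal sketch of that standard construction (the paper's redesigned symbolizations go beyond it):

```python
import math

def error_ellipse(sxx, sxy, syy, k=2.4477):
    """Semi-axes and orientation of the error ellipse for the
    2x2 covariance matrix [[sxx, sxy], [sxy, syy]].
    k = 2.4477 is sqrt of the chi-square 0.95 quantile with 2 degrees
    of freedom, giving a 95% confidence region."""
    mean = (sxx + syy) / 2
    radius = math.hypot((sxx - syy) / 2, sxy)
    lam1, lam2 = mean + radius, mean - radius      # eigenvalues
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # major-axis angle (rad)
    return k * math.sqrt(lam1), k * math.sqrt(lam2), theta
```

For isotropic scatter (zero covariance, equal variances) the ellipse degenerates to a circle of radius k times the standard deviation; correlated scatter tilts the major axis toward the direction of correlation.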
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of medical image corresponding to observer's viewing direction is updated automatically using mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38±0.92mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
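The ICP step at the heart of such image fusion can be sketched in two dimensions (a toy illustration of the generic algorithm; the paper's method operates on 3D point clouds with a selection-based warm start):

```python
import math

def icp_2d(source, target, iterations=20):
    """Toy 2D iterative closest point. Each pass matches every source
    point to its nearest target point, then applies the closed-form
    least-squares rigid transform (2D Kabsch) to the source cloud."""
    src = list(source)
    for _ in range(iterations):
        # 1. Correspondences: nearest target point for each source point.
        matched = [min(target, key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2)
                   for p in src]
        # 2. Centroids of the source cloud and its matched points.
        csx = sum(p[0] for p in src) / len(src)
        csy = sum(p[1] for p in src) / len(src)
        ctx = sum(q[0] for q in matched) / len(matched)
        cty = sum(q[1] for q in matched) / len(matched)
        # 3. Optimal rotation: atan2 of the summed cross and dot products.
        dot = sum((p[0] - csx) * (q[0] - ctx) + (p[1] - csy) * (q[1] - cty)
                  for p, q in zip(src, matched))
        cross = sum((p[0] - csx) * (q[1] - cty) - (p[1] - csy) * (q[0] - ctx)
                    for p, q in zip(src, matched))
        theta = math.atan2(cross, dot)
        c, s = math.cos(theta), math.sin(theta)
        # 4. Rotate about the source centroid, translate onto the target centroid.
        src = [(c * (p[0] - csx) - s * (p[1] - csy) + ctx,
                s * (p[0] - csx) + c * (p[1] - csy) + cty) for p in src]
    return src
```

A warm start (a good initial alignment, here obtained from the user's point-cloud selection) matters because nearest-neighbor matching only finds correct correspondences when the two clouds are already roughly aligned.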
Training Performance of Laparoscopic Surgery in Two- and Three-Dimensional Displays.
Lin, Chiuhsiang Joe; Cheng, Chih-Feng; Chen, Hung-Jen; Wu, Kuan-Ying
2017-04-01
This research investigated differences in the effects of a state-of-the-art stereoscopic 3-dimensional (3D) display and a traditional 2-dimensional (2D) display in simulated laparoscopic surgery over a longer duration than in previous publications and studied the learning effects of the 2 display systems on novices. A randomized experiment with 2 factors, image dimensions and image sequence, was conducted to investigate differences in the mean movement time, the mean error frequency, NASA-TLX cognitive workload, and visual fatigue in pegboard and circle-tracing tasks. The stereoscopic 3D display had advantages in mean movement time (P < .001 and P = .002) and mean error frequency (P = .010 and P = .008) in both tasks. There were no significant differences in objective visual fatigue (P = .729 and P = .422) or in NASA-TLX (P = .605 and P = .937) cognitive workload between the 3D and the 2D displays on either task. For the learning effect, participants who used the stereoscopic 3D display first had shorter mean movement times in the 2D display environment on both the pegboard (P = .011) and the circle-tracing (P = .017) tasks. The results of this research suggest that a stereoscopic system would not result in higher objective visual fatigue and cognitive workload than a 2D system, and it might reduce the performance time and increase the precision of surgical operations. In addition, the learning efficiency of the stereoscopic system for the novices in this study demonstrated its value for training and education in laparoscopic surgery.
Experiments using electronic display information in the NASA terminal configured vehicle
NASA Technical Reports Server (NTRS)
Morello, S. A.
1980-01-01
The results of research experiments concerning pilot display information requirements and visualization techniques for electronic display systems are presented. Topics deal with display related piloting tasks in flight controls for approach-to-landing, flight management for the descent from cruise, and flight operational procedures considering the display of surrounding air traffic. Planned research of advanced integrated display formats for primary flight control throughout the various phases of flight is also discussed.
Dynamic lens and monovision 3D displays to improve viewer comfort.
Johnson, Paul V; Parnell, Jared Aq; Kim, Joohwan; Saunter, Christopher D; Love, Gordon D; Banks, Martin S
2016-05-30
Stereoscopic 3D (S3D) displays provide an additional sense of depth compared to non-stereoscopic displays by sending slightly different images to the two eyes. But conventional S3D displays do not reproduce all natural depth cues. In particular, focus cues are incorrect causing mismatches between accommodation and vergence: The eyes must accommodate to the display screen to create sharp retinal images even when binocular disparity drives the eyes to converge to other distances. This mismatch causes visual discomfort and reduces visual performance. We propose and assess two new techniques that are designed to reduce the vergence-accommodation conflict and thereby decrease discomfort and increase visual performance. These techniques are much simpler to implement than previous conflict-reducing techniques. The first proposed technique uses variable-focus lenses between the display and the viewer's eyes. The power of the lenses is yoked to the expected vergence distance thereby reducing the mismatch between vergence and accommodation. The second proposed technique uses a fixed lens in front of one eye and relies on the binocularly fused percept being determined by one eye and then the other, depending on simulated distance. We conducted performance tests and discomfort assessments with both techniques and compared the results to those of a conventional S3D display. The first proposed technique, but not the second, yielded clear improvements in performance and reductions in discomfort. This dynamic-lens technique therefore offers an easily implemented technique for reducing the vergence-accommodation conflict and thereby improving viewer experience.
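In a thin-lens approximation, the first technique's lens update reduces to shifting the eye's focal demand from the screen distance to the simulated vergence distance. This is a sketch under idealized assumptions (lens at the eye's principal plane), not the authors' exact control law:

```python
def lens_power_diopters(screen_m, vergence_m):
    """Thin-lens sketch: variable-lens power (diopters) that shifts the
    eye's focal demand from the physical screen to the simulated
    vergence distance.  Light from the screen reaches the lens with
    vergence -1/screen_m; adding this power makes the outgoing vergence
    -1/vergence_m, as if the light came from the simulated depth."""
    return 1.0 / screen_m - 1.0 / vergence_m
```

For example, a screen at 0.5 m (2 D of demand) with a simulated object at 1 m (1 D) calls for a +1 D lens; when the simulated distance equals the screen distance, no correction is needed.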
NASA Astrophysics Data System (ADS)
Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko
2007-02-01
In this paper, we propose a novel display method to realize a high-resolution image in the central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely high central visual acuity and low peripheral visual acuity, together with pixel shift technology, one of the resolution-enhancing technologies for projectors. The image projected with our method is a wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images in terms of the sensation of reality. The results show that our method yields 1.5 times higher resolution in the central visual field and a greater sensation of reality.
Helmet-mounted displays in long-range-target visual acquisition
NASA Astrophysics Data System (ADS)
Wilkins, Donald F.
1999-07-01
Aircrews have always sought a tactical advantage in the within-visual-range (WVR) arena -- usually defined as 'see the opponent first.' Even with radar and identification friend or foe (IFF) systems, the pilot who visually acquires his opponent first has a significant advantage. The Helmet Mounted Cueing System (HMCS) equipped with a camera offers an opportunity to correct the problems with previous approaches. By utilizing real-time image enhancement techniques and feeding the image to the pilot on the HMD, the target can be visually acquired well beyond the range provided by the unaided eye. This paper explores the camera and display requirements for such a system and places those requirements in the context of other requirements, such as weight.
Visual communication - Information and fidelity [of images]
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.
1993-01-01
This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.
Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R.; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J. A.
2017-01-01
External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses, in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome, in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, in the end-of-dose period. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and the percentage of time spent in FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in the number of FOG episodes or the percentage of time spent in FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects were found for the other conditions. Participants preferred the metronome most and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG.
This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability. PMID:28659862
NASA Technical Reports Server (NTRS)
Bourquin, K.; Palmer, E. A.; Cooper, G.; Gerdes, R. M.
1973-01-01
A preliminary assessment was made of the adequacy of a simple head-up display (HUD) for providing vertical guidance for flying noise abatement and standard visual approaches in a jet transport. The HUD featured gyro-stabilized approach-angle scales, which display the angle of declination to any point on the ground, and a horizontal flight-path bar, which aids the pilot in controlling the aircraft's flight-path angle. Thirty-three standard and noise abatement approaches were flown in a Boeing 747 aircraft equipped with a head-up display. The HUD was also simulated in a research simulator. The simulator was used to familiarize the pilots with the display and to determine the most suitable way to use the HUD for making high-capture noise abatement approaches. Preliminary flight and simulator data are presented, and problem areas that require further investigation are identified.
White constancy method for mobile displays
NASA Astrophysics Data System (ADS)
Yum, Ji Young; Park, Hyun Hee; Jang, Seul Ki; Lee, Jae Hyang; Kim, Jong Ho; Yi, Ji Young; Lee, Min Woo
2014-03-01
Consumers' demands on the image quality of mobile devices are increasing as smartphones become widely used. For example, colors may be perceived differently when the same content is displayed under different illuminants: white displayed under an incandescent lamp is perceived as bluish, while the same content under LED lighting is perceived as yellowish. When the perceived white shifts with the illuminant, image quality is degraded. The objective of the proposed white constancy method is to maintain consistent output colors regardless of the illuminant. Human visual experiments were performed to analyze viewers' perceptual constancy: participants were asked to choose the displayed white under a variety of illuminants. The relationship between the illuminants and the colors selected as white is modeled by a mapping function based on the results of these experiments, and white constancy values for image control are determined from the predesigned functions. Experimental results indicate that the proposed method yields better image quality by keeping the display white consistent.
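The paper's mapping from illuminant to display white can be sketched as a simple interpolation between observer-calibrated white points. The calibration values and function names below are hypothetical placeholders, not the authors' model:

```python
def interp(x, x0, y0, x1, y1):
    """Linear interpolation between two calibration points."""
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# Hypothetical calibration: chromaticities (CIE x, y) chosen as "white"
# by observers under a warm (2700 K) and a cool (6500 K) illuminant.
WARM_CCT, WARM_XY = 2700.0, (0.322, 0.332)
COOL_CCT, COOL_XY = 6500.0, (0.307, 0.318)

def display_white(cct):
    """Map ambient illuminant color temperature to a target display white."""
    x = interp(cct, WARM_CCT, WARM_XY[0], COOL_CCT, COOL_XY[0])
    y = interp(cct, WARM_CCT, WARM_XY[1], COOL_CCT, COOL_XY[1])
    return x, y
```

A production method would presumably use a richer (possibly nonlinear) fit over many observers and illuminants, as the experiments in the paper suggest.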
Avoiding Focus Shifts in Surgical Telementoring Using an Augmented Reality Transparent Display.
Andersen, Daniel; Popescu, Voicu; Cabrera, Maria Eugenia; Shanghavi, Aditya; Gomez, Gerardo; Marley, Sherri; Mullis, Brian; Wachs, Juan
2016-01-01
Conventional surgical telementoring systems require the trainee to shift focus away from the operating field to a nearby monitor to receive mentor guidance. This paper presents the next generation of telementoring systems. Our system, STAR (System for Telementoring with Augmented Reality) avoids focus shifts by placing mentor annotations directly into the trainee's field of view using augmented reality transparent display technology. This prototype was tested with pre-medical and medical students. Experiments were conducted where participants were asked to identify precise operating field locations communicated to them using either STAR or a conventional telementoring system. STAR was shown to improve accuracy and to reduce focus shifts. The initial STAR prototype only provides an approximate transparent display effect, without visual continuity between the display and the surrounding area. The current version of our transparent display provides visual continuity by showing the geometry and color of the operating field from the trainee's viewpoint.
NASA Astrophysics Data System (ADS)
Cooperstock, Jeremy R.; Wang, Guangyu
2009-02-01
We conducted a comparative study of different stereoscopic display modalities (head-mounted display, polarized projection, and multiview lenticular display) to evaluate their efficacy in supporting manipulation and understanding of 3D content, specifically, in the context of neurosurgical visualization. Our study was intended to quantify the differences in resulting task performance between these choices of display technology. The experimental configuration involved a segmented brain vasculature and a simulated tumor. Subjects were asked to manipulate the vasculature and a pen-like virtual probe in order to define a vessel-free path from cortical surface to the targeted tumor. Because of the anatomical complexity, defining such a path can be a challenging task. To evaluate the system, we quantified performance differences under three different stereoscopic viewing conditions. Our results indicate that, on average, participants achieved best performance using polarized projection, and worst with the multiview lenticular display. These quantitative measurements were further reinforced by the subjects' responses to our post-test questionnaire regarding personal preferences.
Task demands determine comparison strategy in whole probe change detection.
Udale, Rob; Farrell, Simon; Kent, Chris
2018-05-01
Detecting a change in our visual world requires a process that compares the external environment (test display) with the contents of memory (study display). We addressed the question of whether people strategically adapt the comparison process in response to different decision loads. Study displays of 3 colored items were presented, followed by 'whole-display' probes containing 3 colored shapes. Participants were asked to decide whether any probed items contained a new feature. In Experiments 1-4, irrelevant changes to the probed item's locations or feature bindings influenced memory performance, suggesting that participants employed a comparison process that relied on spatial locations. This finding occurred irrespective of whether participants were asked to decide about the whole display, or only a single cued item within the display. In Experiment 5, when the base-rate of changes in the nonprobed items increased (increasing the incentive to use the cue effectively), participants were not influenced by irrelevant changes in location or feature bindings. In addition, we observed individual differences in the use of spatial cues. These results suggest that participants can flexibly switch between spatial and nonspatial comparison strategies, depending on interactions between individual differences and task demand factors. These findings have implications for models of visual working memory that assume that the comparison between study and test obligatorily relies on accessing visual features via their binding to location. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Data Images and Other Graphical Displays for Directional Data
NASA Technical Reports Server (NTRS)
Morphet, Bill; Symanzik, Juergen
2005-01-01
Vectors, axes, and periodic phenomena have direction. Directional variation can be expressed as points on a unit circle and is the subject of circular statistics, a relatively new application of statistics. An overview of existing methods for the display of directional data is given. The data image for linear variables is reviewed, then extended to directional variables by displaying direction using a color scale composed of a sequence of four or more color gradients, with continuity between sequences, ordered intuitively in a color wheel such that the color of the 0° angle is the same as the color of the 360° angle. This eliminates both cross over, which arose in automating the summarization of historical wind data, and the color discontinuity that results from using a single color gradient in computational fluid dynamics visualization. The new method provides for simultaneous resolution of detail on a small scale and overall structure on a large scale. Example circular data images are given of a global view of average wind direction during El Niño periods, computed rocket motor internal combustion flow, a global view of the direction of the horizontal component of Earth's main magnetic field on 9/15/2004, and Space Shuttle solid rocket motor nozzle vectoring.
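The color-wheel encoding described above can be sketched with a cyclic hue mapping, in which 0° and 360° land on the same color; this is an illustrative sketch, not the authors' implementation:

```python
import colorsys

def angle_to_rgb(angle_deg):
    """Map a direction in degrees to an RGB color on a color wheel.

    Hue runs once around the circle, so 0° and 360° map to the same
    color, avoiding the discontinuity (and "cross over") that a single
    linear color gradient produces at the wrap-around point.
    """
    hue = (angle_deg % 360.0) / 360.0  # normalize to [0, 1)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Stacking several color gradients around the wheel, as the paper describes, refines angular resolution while preserving continuity at the 0°/360° seam.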
Aging and feature search: the effect of search area.
Burton-Danner, K; Owsley, C; Jackson, G R
2001-01-01
The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.
Superimposition, symbology, visual attention, and the head-up display
NASA Technical Reports Server (NTRS)
Martin-Emerson, R.; Wickens, C. D.
1997-01-01
In two experiments we examined a number of related factors postulated to influence head-up display (HUD) performance. We addressed the benefit of reduced scanning and the cost of increasing the number of elements in the visual field by comparing a superimposed HUD with an identical display in a head-down position in varying visibility conditions. We explored the extent to which the characteristics of HUD symbology support a division of attention by contrasting conformal symbology (which links elements of the display image to elements of the far domain) with traditional instrument landing system (ILS) symbology. Together the two experiments provide strong evidence that minimizing scanning between flight instruments and the far domain contributes substantially to the observed HUD performance advantage. Experiment 1 provides little evidence for a performance cost attributable to visual clutter. In Experiment 2 the pattern of differences in lateral tracking error between conformal and traditional ILS symbology supports the hypothesis that, to the extent that the symbology forms an object with the far domain, attention may be divided between the superimposed image and its counterpart in the far domain.
NASA Technical Reports Server (NTRS)
Chouinard, Caroline; Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steven
2005-01-01
The Grid Visualization Tool (GVT) is a computer program for displaying the path of a mobile robotic explorer (rover) on a terrain map. The GVT reads a map-data file in either portable graymap (PGM) or portable pixmap (PPM) format, representing a gray-scale or color map image, respectively. The GVT also accepts input from path-planning and activity-planning software. From these inputs, the GVT generates a map overlaid with one or more rover path(s), waypoints, locations of targets to be explored, and/or target-status information (indicating success or failure in exploring each target). The display can also indicate different types of paths or path segments, such as the path actually traveled versus a planned path or the path traveled to the present position versus planned future movement along a path. The program provides for updating of the display in real time to facilitate visualization of progress. The size of the display and the map scale can be changed as desired by the user. The GVT was written in the C++ language using the Open Graphics Library (OpenGL) software. It has been compiled for both Sun Solaris and Linux operating systems.
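The map-ingestion step can be sketched with a minimal reader for the plain-text PGM variant; this is an illustrative sketch (P2 only, no binary P5 support), not the GVT's actual parser:

```python
def read_ascii_pgm(text):
    """Parse a plain (P2) portable graymap into (width, height, pixels).

    Handles '#' comment lines; pixel values are gray levels in the
    range 0..maxval, listed row by row.
    """
    tokens = []
    for line in text.splitlines():
        line = line.split('#', 1)[0]  # strip comments
        tokens.extend(line.split())
    if tokens[0] != 'P2':
        raise ValueError('not a plain PGM file')
    width, height, maxval = (int(t) for t in tokens[1:4])
    pixels = [int(t) for t in tokens[4:4 + width * height]]
    return width, height, pixels

sample = "P2\n# tiny terrain map\n3 2\n255\n0 128 255\n64 32 16\n"
```

A color PPM (P3) file differs only in its magic number and in carrying three values (R, G, B) per pixel.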
Onboard System Evaluation of Rotors Vibration, Engines (OBSERVE) Monitoring System
1992-07-01
consists of a Data Acquisition Unit (DAU), Control and Display Unit (CADU), Universal Tracking Devices (UTD), Remote Cockpit Display (RCD) and a PC... Control and Display Unit (CADU) - The CADU provides data storage and a graphical user interface necessary to display both the measured data and diagnostic information. The CADU has an interface to a Credit Card Memory (CCM) which operates similar to a disk drive, allowing the storage of data and programs.
Flight Simulator Visual-Display Delay Compensation
NASA Technical Reports Server (NTRS)
Crane, D. Francis
1981-01-01
A piloted aircraft can be viewed as a closed-loop man-machine control system. When a simulator pilot is performing a precision maneuver, a delay in the visual display of aircraft response to pilot-control input decreases the stability of the pilot-aircraft system. The less stable system is more difficult to control precisely. Pilot dynamic response and performance change as the pilot attempts to compensate for the decrease in system stability. The changes in pilot dynamic response and performance bias the simulation results by influencing the pilot's rating of the handling qualities of the simulated aircraft. The study reported here evaluated an approach to visual-display delay compensation. The objective of the compensation was to minimize delay-induced change in pilot performance and workload. The compensation was effective. Because the compensation design approach is based on well-established control-system design principles, prospects are favorable for successful application of the approach in other simulations.
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous approaches.
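For contrast, the pixel-expansion problem that the proposed column-vector construction avoids can be seen in the classic (2,2) scheme, where every secret pixel becomes two subpixels per share; this is the textbook scheme, not the authors' algorithm:

```python
import random

# 0 = transparent subpixel, 1 = black subpixel
PATTERNS = [(0, 1), (1, 0)]

def encrypt_pixel(secret_bit, rng=random):
    """Encode one secret pixel as a pair of 2-subpixel shares.

    White (0): both shares use the same random pattern, so stacking
    them shows one black subpixel (perceived gray). Black (1): the
    second share is complemented, so stacking shows solid black.
    Each pixel doubles in width -- the pixel-expansion problem.
    """
    pattern = rng.choice(PATTERNS)
    share1 = pattern
    share2 = pattern if secret_bit == 0 else tuple(1 - s for s in pattern)
    return share1, share2

def stack(s1, s2):
    """Physically overlaying transparencies acts as a subpixel-wise OR."""
    return tuple(a | b for a, b in zip(s1, s2))
```

Each share in isolation is a uniformly random pattern, so neither reveals the secret; only the stacked pair does.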
NASA Technical Reports Server (NTRS)
Begault, Durand R.
1993-01-01
The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.
ViSBARD: Visual System for Browsing, Analysis and Retrieval of Data
NASA Astrophysics Data System (ADS)
Roberts, D. Aaron; Boller, Ryan; Rezapkin, V.; Coleman, J.; McGuire, R.; Goldstein, M.; Kalb, V.; Kulkarni, R.; Luckyanova, M.; Byrnes, J.; Kerbel, U.; Candey, R.; Holmes, C.; Chimiak, R.; Harris, B.
2018-04-01
ViSBARD interactively visualizes and analyzes space physics data. It provides an interactive integrated 3-D and 2-D environment to determine correlations between measurements across many spacecraft. It supports a variety of spacecraft data products and MHD models and is easily extensible to others. ViSBARD provides a way of visualizing multiple vector and scalar quantities as measured by many spacecraft at once. The data are displayed three-dimensionally along the orbits, which may be displayed either as connected lines or as points. The data display allows the rapid determination of vector configurations, correlations between many measurements at multiple points, and global relationships. With the addition of magnetohydrodynamic (MHD) model data, this environment can also be used to validate simulation results with observed data, use simulated data to provide a global context for sparse observed data, and apply feature detection techniques to the simulated data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eric A. Wernert; William R. Sherman; Chris Eller
2012-03-01
We present a pair of open-recipe, affordably-priced, easy-to-integrate, and easy-to-use visualization systems. The IQ-wall is an ultra-resolution tiled display wall that scales up to 24 screens with a single PC. The IQ-station is a semi-immersive display system that utilizes commodity stereoscopic displays, lower cost tracking systems, and touch overlays. These systems have been designed to support a wide range of research, education, creative activities, and information presentations. They were designed to work equally well as stand-alone installations or as part of a larger distributed visualization ecosystem. We detail the hardware and software components of these systems, describe our deployments and experiences in a variety of research lab and university environments, and share our insights for effective support and community development.
Viewpoint Dependent Imaging: An Interactive Stereoscopic Display
NASA Astrophysics Data System (ADS)
Fisher, Scott
1983-04-01
The design and implementation of a viewpoint-dependent imaging system is described. The resultant display is an interactive, life-size, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive, stereoscopic image array stored on computer-controlled optical videodisc and fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.
NASA Technical Reports Server (NTRS)
Spirkovska, Liljana (Inventor)
2006-01-01
Method and system for automatically displaying, visually and/or audibly and/or by an audible alarm signal, relevant weather data for an identified aircraft pilot, when each of a selected subset of measured or estimated aviation situation parameters, corresponding to a given aviation situation, has a value lying in a selected range. Each range for a particular pilot may be a default range, may be entered by the pilot and/or may be automatically determined from experience and may be subsequently edited by the pilot to change a range and to add or delete parameters describing a situation for which a display should be provided. The pilot can also verbally activate an audible display or visual display of selected information by verbal entry of a first command or a second command, respectively, that specifies the information required.
The design of electronic map displays
NASA Technical Reports Server (NTRS)
Aretz, Anthony J.
1991-01-01
This paper presents a cognitive analysis of a pilot's navigation task and describes an experiment comparing a new map display that employs the principle of visual momentum with the two traditional approaches, track-up and north-up. The data show that the advantage of a track-up alignment is its congruence with the ego-centered forward view; however, the inconsistency of the rotating display hinders development of a cognitive map. The stability of a north-up alignment aids the acquisition of a cognitive map, but there is a cost associated with the mental rotation of the display to a track-up alignment for tasks involving the ego-centered forward view. The data also show that the visual momentum design captures the benefits and reduces the costs associated with the two traditional approaches.
GlastCam: A Telemetry-Driven Spacecraft Visualization Tool
NASA Technical Reports Server (NTRS)
Stoneking, Eric T.; Tsai, Dean
2009-01-01
Developed for the GLAST project, which is now the Fermi Gamma-ray Space Telescope, GlastCam software ingests telemetry from the Integrated Test and Operations System (ITOS) and generates four graphical displays of geometric properties in real time, allowing visual assessment of the attitude, configuration, position, and various cross-checks. Four windows are displayed: a "cam" window shows a 3D view of the satellite; a second window shows the standard position plot of the satellite on a Mercator map of the Earth; a third window displays star tracker fields of view, showing which stars are visible from the spacecraft in order to verify star tracking; and the fourth window depicts
AH-64 IHADSS aviator vision experiences in Operation Iraqi Freedom
NASA Astrophysics Data System (ADS)
Hiatt, Keith L.; Rash, Clarence E.; Harris, Eric S.; McGilberry, William H.
2004-09-01
Forty AH-64 Apache aviators representing a total of 8564 flight hours and 2260 combat hours during Operation Iraqi Freedom and its aftermath were surveyed for their visual experiences with the AH-64's monocular Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display in a combat environment. A major objective of this study was to determine whether the frequencies of visual complaints and illusions reported in previous studies, which addressed mostly benign training environments, differ in the more stressful combat environment. The most frequently reported visual complaints, both while and after flying, were visual discomfort and headache, which is consistent with previous studies. Frequencies of complaints after flying in the current study were numerically lower for all complaint types, but differences from previous studies are statistically significant only for visual discomfort and disorientation (vertigo). With the exception of "brownout/whiteout," reports of degraded visual cues in the current study were numerically lower for all types, but statistically significant only for impaired depth perception, decreased field of view, and inadvertent entry into instrument meteorological conditions. This study also found statistically lower reports of all static and dynamic illusions (with one exception, disorientation). This important finding is attributed to the generally flat and featureless geography present in a large portion of the Iraqi theater and to the shift in the way that the aviators use the two disparate visual inputs presented by the IHADSS monocular design (i.e., greater use of both eyes as opposed to concentrating primarily on display imagery).
Blood pressure measurement and display system
NASA Technical Reports Server (NTRS)
Farkas, A. J.
1972-01-01
A system is described that employs solid-state circuitry to transmit a visual display of a patient's blood pressure. The response of a sphygmomanometer cuff and microphone provides the input signals. The signals and their amplitudes, from turn-on time to turn-off time, are continuously fed to a data transmitter, which transmits to the display device.
Here's How To Make Better Graphs.
ERIC Educational Resources Information Center
Smith, Curtis A.
1997-01-01
Explains how to improve visual displays employed in school finance by examining a theoretical framework and applying it to the displays. Discusses and illustrates important display principles based on William Cleveland's ideas about decoding/encoding, length judgments, distance, detection, and superimposed curves; and Edward Tufte's work on data…
Teulings, H; Contreras-Vidal, J; Stelmach, G; Adler, C
2002-01-01
Objective: The ability to use visual feedback to control handwriting size was compared in patients with Parkinson's disease (PD), elderly people, and young adults to better understand factors playing a part in parkinsonian micrographia. Methods: The participants wrote sequences of eight cursive l loops with visual target sizes of 0.5 and 2 cm on a flat panel display digitiser which both recorded and displayed the pen movements. In the pre-exposure and postexposure conditions, the display digitiser showed the actual pen trace in real time and real size. In the distortion exposure conditions, the gain of the vertical dimension of the visual feedback was either reduced to 70% or enlarged to 140%. Results: The young controls showed a gradual visuomotor adaptation that compensated for the visual feedback distortions during the exposure conditions. They also showed significant after effects during the postexposure conditions. The elderly controls marginally corrected for the size distortions and showed small after effects. The patients with PD, however, showed no trial by trial adaptations or after effects but instead, a progressive amplification of the distortion effect in each individual trial. Conclusion: The young controls used visual feedback to update their visuomotor map. The elderly controls seemed to make little use of visual feedback. The patients with Parkinson's disease rely on the visual feedback of previous or of ongoing strokes to programme subsequent strokes. This recursive feedback may play a part in the progressive reductions in handwriting size found in parkinsonian micrographia. PMID:11861687
A computer graphics system for visualizing spacecraft in orbit
NASA Technical Reports Server (NTRS)
Eyles, Don E.
1989-01-01
To carry out unanticipated operations with resources already in space is part of the rationale for a permanently manned space station in Earth orbit. The astronauts aboard a space station will require an on-board, spatial display tool to assist the planning and rehearsal of upcoming operations. Such a tool can also help astronauts to monitor and control such operations as they occur, especially in cases where first-hand visibility is not possible. A computer graphics visualization system designed for such an application and currently implemented as part of a ground-based simulation is described. The visualization system presents to the user the spatial information available in the spacecraft's computers by drawing a dynamic picture containing the planet Earth, the Sun, a star field, and up to two spacecraft. The point of view within the picture can be controlled by the user to obtain a number of specific visualization functions. The elements of the display, the methods used to control the display's point of view, and some of the ways in which the system can be used are described.
On the efficacy of cinema, or what the visual system did not evolve to do
NASA Technical Reports Server (NTRS)
Cutting, James E.
1989-01-01
Spatial displays, and a constraint that they do not place on the use of spatial instruments, are discussed. Much of the work done in visual perception by psychologists and by computer scientists has concerned displays that show the motion of rigid objects. Typically, if one assumes that objects are rigid, one can then proceed to understand how the constant shape of the object can be perceived (or computed) as it moves through space. The author maintains that photographs and cinema are visual displays that are also powerful forms of art. Their efficacy, in part, stems from the fact that, although viewpoint is constrained when composing them, it is not nearly so constrained when viewing them. It is obvious, according to the author, that human visual systems did not evolve to watch movies or look at photographs. Thus, what photographs and movies present must be allowed in the rule-governed system under which vision evolved. Machine-vision algorithms, to be applicable to human vision, should show the same types of tolerance.
Large-screen display industry: market and technology trends for direct view and projection displays
NASA Astrophysics Data System (ADS)
Castellano, Joseph A.; Mentley, David E.
1996-03-01
Large screen information displays are defined as dynamic electronic displays that can be viewed by more than one person and are at least 2 feet wide. These large area displays for public viewing provide convenience, entertainment, security, and efficiency to the viewers. There are numerous uses for large screen information displays including those in advertising, transportation, traffic control, conference room presentations, computer aided design, banking, and military command/control. A noticeable characteristic of the large screen display market is the interchangeability of display types. For any given application, the user can usually choose from at least three alternative technologies, and sometimes from many more. Some display types have features that make them suitable for specific applications due to temperature, brightness, power consumption, or other such characteristic. The overall worldwide unit consumption of large screen information displays of all types and for all applications (excluding consumer TV) will increase from 401,109 units in 1995 to 655,797 units in 2002. On a unit consumption basis, applications in business and education represent the largest share of unit consumption over this time period; in 1995, this application represented 69.7% of the total. The market (value of shipments) will grow from $3.1 billion in 1995 to $3.9 billion in 2002. The market will be dominated by front LCD projectors and LCD overhead projector plates.
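The forecast figures quoted above imply modest compound annual growth; a quick arithmetic check (the `cagr` helper is ours, not from the report):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

# 1995 -> 2002 forecasts from the abstract (7 years)
unit_growth = cagr(401_109, 655_797, 7)   # roughly 7.3% per year
revenue_growth = cagr(3.1, 3.9, 7)        # roughly 3.3% per year
```

Revenue growing more slowly than unit volume is consistent with falling per-unit prices over the forecast period.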
Reimer, Christina B; Schubert, Torsten
2017-09-15
Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. 
Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
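The locus-of-slack logic described above can be sketched numerically. All reaction times below are hypothetical and serve only to illustrate how additive versus underadditive patterns are distinguished; they are not data from the study:

```python
# Hypothetical mean Task 2 reaction times (ms) by SOA and search set size.
rt = {
    ("short", 4): 950, ("short", 12): 1030,
    ("long", 4): 620, ("long", 12): 760,
}

# Set size effect = RT(large set) - RT(small set), computed per SOA.
effect_short = rt[("short", 12)] - rt[("short", 4)]  # 80 ms
effect_long = rt[("long", 12)] - rt[("long", 4)]     # 140 ms

# Equal effects across SOAs (additive) suggest sequential processing;
# a smaller effect at short SOA (underadditive) suggests parallel
# processing, because part of the search runs during the "slack" time
# while Task 1 occupies the response selection bottleneck.
if effect_short < effect_long:
    pattern = "underadditive"
elif effect_short == effect_long:
    pattern = "additive"
else:
    pattern = "overadditive"
print(pattern)
```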
NASA Astrophysics Data System (ADS)
Rivera-Jacquez, Hector J.; Masunov, Artëm E.
2018-06-01
Development of two-photon fluorescent probes can aid in visualizing the cellular environment. Multi-chromophore systems display complex manifolds of electronic transitions, enabling their use for optical sensing applications. Time-Dependent Density Functional Theory (TDDFT) methods allow for accurate predictions of the optical properties. These properties are related to the electronic transitions in the molecules, which include two-photon absorption cross-sections. Here we use TDDFT to understand the mechanism of aza-crown based fluorescent probes for metals sensing applications. Our findings suggest changes in local excitation in the rhodol chromophore between unbound form and when bound to the metal analyte. These changes are caused by a charge transfer from the aza-crown group and pyrazol units toward the rhodol unit. Understanding this mechanism leads to an optimized design with higher two-photon excited fluorescence to be used in medical applications.
Rolke, Bettina; Festl, Freya; Seibold, Verena C
2016-11-01
We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.
Yang, Yongqi; Guan, Lin; Gao, Guanghui
2018-04-25
Traditional optoelectronic devices lack stretchability and are therefore ill-suited to substrates with irregular shapes. It is thus urgent to explore a new generation of flexible, stretchable, and low-cost intelligent carriers for visual display and storage, such as hydrogels. In this investigation, a novel photochromic hydrogel was developed by introducing negatively charged ammonium molybdate as a photochromic unit into polyacrylamide via ionic and covalent cross-linking. The hydrogel exhibited low cost, easy preparation, stretchable deformation, fatigue resistance, high transparency, and second-order response to external signals. Moreover, the photochromic and fading processes of the hydrogel could be precisely controlled and repeated under UV irradiation and oxygen exposure at different times and temperatures. The photochromic hydrogel is a candidate for artificial intelligence systems, wearable healthcare devices, and flexible memory devices. Therefore, this strategy for designing a soft photochromic material opens a new direction for manufacturing flexible and stretchable devices.
Correction techniques for depth errors with stereo three-dimensional graphic displays
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Holden, Anthony; Williams, Steven P.
1992-01-01
Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information that is embedded in stereo 3-D displays because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques: the first corrects the original visual-scene-to-DVV mapping on the basis of human perception errors, and the second (based on head-positioning sensor input data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques. A combination of the two correction techniques effectively eliminates the distortions of depth information embedded in stereo 3-D displays.
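The first correction technique can be illustrated as a simple inversion of an empirically fitted perception model. The linear form and the coefficients below are hypothetical placeholders, not values reported in the paper:

```python
def corrected_depth(desired_perceived_depth, a=0.85, b=1.2):
    """Invert a hypothetical linear perception model
    (perceived = a * commanded + b), so that commanding the returned
    stereo depth yields the desired perceived depth."""
    return (desired_perceived_depth - b) / a

# Depth to command so that the viewer perceives 10 depth units:
commanded = corrected_depth(10.0)
```

In practice such coefficients would be fitted from empirical depth-matching data for each viewer population, and a head-tracking term would be added for the second correction.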
Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.
Orbán, Levente L; Chartier, Sylvain
2015-01-01
Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
Visual ergonomics and computer work--is it all about computer glasses?
Jonsson, Christina
2012-01-01
The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDAs, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply, but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question of whether the demand on the employer to provide employees with computer glasses is outdated.
NASA Technical Reports Server (NTRS)
Hosman, R. J. A. W.; Vandervaart, J. C.
1984-01-01
An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time histories of roll angle, roll rate, and roll acceleration caused by a step response stimulate the perception processes related to the central visual field, peripheral visual field, and vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)
2014-01-01
A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" the visual image of the viewed object (or reducing the image's motion to a slow drift). In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
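As a rough sketch of the idea (not the patented implementation), an SDS can be reduced to a pulse train that fires once per vibration cycle at a fixed phase, so the image is always sampled at nearly the same displacement. The frequency, phase, and duration values below are purely illustrative:

```python
def strobe_times(vib_freq_hz, phase_fraction, duration_s):
    """Pulse instants for a strobe locked to a vibration of known frequency.

    phase_fraction in [0, 1) selects where within each vibration cycle
    the strobe fires; sampling at a fixed phase "freezes" the image.
    """
    period = 1.0 / vib_freq_hz
    t = phase_fraction * period
    times = []
    while t < duration_s:
        times.append(t)
        t += period
    return times

# 20 Hz vibration, fire at quarter phase, over a 0.2 s window:
pulses = strobe_times(20.0, 0.25, 0.2)
```

A real system would estimate the vibration frequency and phase continuously from an accelerometer rather than assume them fixed.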
Experiences with hypercube operating system instrumentation
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Rudolph, David C.
1989-01-01
The difficulties in conceptualizing the interactions among a large number of processors make it difficult both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale parallel computers; the enormous volume of performance data mandates visual display.
LinkWinds: An Approach to Visual Data Analysis
NASA Technical Reports Server (NTRS)
Jacobson, Allan S.
1992-01-01
The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration and analysis system resulting from a NASA/JPL program of research into graphical methods for rapidly accessing, displaying and analyzing large multivariate multidisciplinary datasets. It is an integrated multi-application execution environment allowing the dynamic interconnection of multiple windows containing visual displays and/or controls through a data-linking paradigm. This paradigm, which results in a system much like a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but provides a highly intuitive, easy to learn user interface on top of the traditional graphical user interface.
Educational Testing of an Auditory Display of Mars Gamma Ray Spectrometer Data
NASA Astrophysics Data System (ADS)
Keller, J. M.; Pompea, S. M.; Prather, E. E.; Slater, T. F.; Boynton, W. V.; Enos, H. L.; Quinn, M.
2003-12-01
A unique, alternative educational and public outreach product was created to investigate the use and effectiveness of auditory displays in science education. The product, which allows students to both visualize and hear seasonal variations in data detected by the Gamma Ray Spectrometer (GRS) aboard the Mars Odyssey spacecraft, consists of an animation of false-color maps of hydrogen concentrations on Mars along with a musical presentation, or sonification, of the same data. Learners can access this data using the visual false-color animation, the auditory false-pitch sonification, or both. Central to the development of this product is the question of its educational effectiveness and implementation. During the spring 2003 semester, three sections of an introductory astronomy course, each with ~100 non-science undergraduates, were presented with one of three different exposures to GRS hydrogen data: one auditory, one visual, and one both auditory and visual. Student achievement data was collected through use of multiple-choice and open-ended surveys administered before, immediately following, and three and six weeks following the experiment. It was found that the three student groups performed equally well in their ability to perceive and interpret the data presented. Additionally, student groups exposed to the auditory display reported a higher interest and engagement level than the student group exposed to the visual data alone. Based upon this preliminary testing, we have made improvements to both the educational product and our evaluation protocol. This fall, we will conduct further testing with ~100 additional students, half receiving auditory data and half receiving visual data, and we will conduct interviews with individual students as they interface with the auditory display.
Through this process, we hope to further assess both learning and engagement gains associated with alternative and multi-modal representations of scientific data that extend beyond traditional visualization approaches. This work has been supported by the GRS Education and Public Outreach Program and the NASA Spacegrant Graduate Fellowship Program.
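A false-pitch sonification of the kind described can be sketched as a linear mapping from data values to MIDI note numbers. The note range here is an arbitrary choice for illustration, not the mapping used in the GRS product:

```python
def sonify(values, low_note=48, high_note=84):
    """Map each data value linearly onto a MIDI note number (false pitch),
    so that higher values sound as higher pitches."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    scale = high_note - low_note
    return [round(low_note + (v - lo) / span * scale) for v in values]

# Three hypothetical hydrogen-concentration values mapped to notes:
notes = sonify([0.2, 0.5, 0.9])
```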
Accessibility of Home Blood Pressure Monitors for Blind and Visually Impaired People
Uslan, Mark M.; Burton, Darren M.; Wilson, Thomas E.; Taylor, Steven; Chertow, Bruce S.; Terry, Jack E.
2007-01-01
Background The prevalence of hypertension comorbid with diabetes is a significant health care issue. Use of the home blood pressure monitor (HBPM) for aiding in the control of hypertension is noteworthy because of benefits that accrue from following a home measurement regimen. To be usable by blind and visually impaired patients, HBPMs must have speech output to convey all screen information, an easily readable visual display, identifiable controls that are easy to use, and an accessible user manual. Methods Data on the physical aspects and the features and functions of nine Food and Drug Administration-approved HBPMs (eight of which were recommended by the British Hypertension Society) were tabulated and analyzed for usability by blind and visually impaired individuals. Video Electronics Standards Association standards were used to measure contrast modulation in the displays of the HBPMs. Ten persons who are blind or visually impaired and who have diabetes were surveyed to determine how they monitor their blood pressure and to learn their ideas for improvements in usability. Results Physical controls were found to be easy to identify, and operating procedures were found to be relatively simple on all of the HBPMs, but user manuals were either inaccessible or minimally accessible to blind persons. The two HBPMs that have speech output do not voice all of the information that is displayed on the screen. Some functions that are standard in the HBPMs without speech output, such as the feature for automatically setting cuff inflation volume and memory, were lacking in the HBPMs with speech output. These features were mentioned as desirable in interviews with legally blind persons who are diabetic and who monitor their blood pressure at home. Visual display output was large and adequate in all of the HBPMs. 
Michelson contrast for numeric digits in the HBPM displays was also measured, ranging from 55 to 75% for characters with dominant spatial frequency components lying in the range of 0.5–1.0 cycles/degree. Conclusions Home blood pressure monitors are easy-to-use devices that do not present accessibility barriers that are difficult to surmount, either technically or operationally. Two HBPMs with voice output were found to have a significant degree of accessibility, but they were not found to offer as many features as those HBPMs that were less accessible. Recommendations were made to improve accessibility, including the development of visual display standards that specify a minimally acceptable level of Michelson contrast. PMID:19888410
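The Michelson contrast measure mentioned above follows a standard formula. A minimal sketch, with illustrative luminance values rather than measurements from the study:

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast C = (Lmax - Lmin) / (Lmax + Lmin), in [0, 1]."""
    if l_min < 0 or l_max < l_min:
        raise ValueError("require 0 <= l_min <= l_max")
    return (l_max - l_min) / (l_max + l_min)

# e.g. digit luminance 70 cd/m^2 on a 10 cd/m^2 background:
c = michelson_contrast(70.0, 10.0)
print(f"{c:.0%}")  # 75%, within the 55-75% range reported above
```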
Khairat, Saif Sherif; Dukkipati, Aniesha; Lauria, Heather Alico; Bice, Thomas; Travers, Debbie; Carson, Shannon S
2018-05-31
Intensive Care Units (ICUs) in the United States admit more than 5.7 million people each year. The ICU level of care helps people with life-threatening illness or injuries and involves close, constant attention by a team of specially-trained health care providers. Delay between condition onset and implementation of necessary interventions can dramatically impact the prognosis of patients with life-threatening diagnoses. Evidence supports a connection between information overload and medical errors. A tool that improves display and retrieval of key clinical information has great potential to benefit patient outcomes. The purpose of this review is to synthesize previous research on the use of dashboards visualizing electronic health record information for health care providers. A review of the existing literature on this subject can be used to identify gaps in prior research and to inform further research efforts on this topic. Ultimately, this evidence can be used to guide the development, testing, and implementation of a new solution to optimize the visualization of clinical information, reduce clinician cognitive overload, and improve patient outcomes. Articles were included if they addressed the development, testing, implementation, or use of a visualization dashboard solution in a health care setting. An initial search was conducted of literature on dashboards only in the intensive care unit setting, but few articles met the inclusion criteria. A secondary follow-up search was conducted to broaden the results to any health care setting. The initial and follow-up searches returned a total of 17 articles that were analyzed for this literature review.
Visualization dashboard solutions decrease time spent on data gathering, difficulty of data gathering process, cognitive load, time to task completion, errors, and improve situation awareness, compliance with evidence-based safety guidelines, usability, and navigation. Researchers can build on the findings, strengths, and limitations of the work identified in this literature review to bolster development, testing, and implementation of novel visualization dashboard solutions. Due to the relatively few studies conducted in this area, there is plenty of room for researchers to test their solutions and add significantly to the field of knowledge on this subject. ©Saif Sherif Khairat, Aniesha Dukkipati, Heather Alico Lauria, Thomas Bice, Debbie Travers, Shannon S Carson. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 31.05.2018.
Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.
2009-01-01
Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732
Associative visual learning by tethered bees in a controlled visual environment.
Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin
2017-10-10
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.